
Fix: SQLite Database Is Locked Error

FixDevs

Quick Answer

How to fix the SQLite 'database is locked' error caused by concurrent writes, long-running transactions, missing WAL mode, a missing busy timeout, and unclosed connections.

The Error

You run a query against your SQLite database, and you get:

sqlite3.OperationalError: database is locked

In Node.js, it looks like:

Error: SQLITE_BUSY: database is locked

In Go:

database/sql: database is locked (5)

Or from the SQLite CLI:

Error: database is locked

The operation hangs for a while, then fails. Sometimes it works when you retry, sometimes it doesn’t. The error appears random, but it has a clear cause.

Why This Happens

SQLite uses file-level locking. Unlike PostgreSQL or MySQL, there is no server process managing concurrent access. The database is a single file on disk, and SQLite uses OS-level file locks to coordinate reads and writes.

Here is what triggers the lock:

  1. Concurrent writes: SQLite allows multiple simultaneous readers, but only one writer at a time. When a second connection tries to write while the first is still writing, it gets locked out.

  2. Long-running transactions: A transaction that stays open for too long holds a lock the entire time. Every other write operation waits — and eventually times out.

  3. Unclosed connections: A connection that was never properly closed can hold a lock indefinitely. This is common in scripts that crash mid-execution or in web frameworks with poor connection lifecycle management.

  4. Default journal mode (DELETE): The default journal mode forces readers to wait for writers and vice versa. This creates unnecessary contention.

  5. No busy timeout: By default, SQLite returns SQLITE_BUSY immediately when it encounters a lock. It doesn’t wait at all. A zero-second timeout means even brief lock overlaps cause failures.

  6. NFS or network filesystems: SQLite’s file locking doesn’t work reliably on network-mounted drives. If your database file sits on NFS, SMB, or a Docker volume backed by a network share, locking breaks silently.

Understanding these causes points directly to the fixes below.

Fix 1: Enable WAL Mode

This is the single most impactful change you can make. WAL (Write-Ahead Logging) mode replaces SQLite’s default rollback journal with a write-ahead log. The result: readers no longer block writers, and writers no longer block readers.

In the default DELETE journal mode, a write transaction locks the entire database file. In WAL mode, writes go to a separate log file while readers continue reading from the main database.

Enable it by running:

PRAGMA journal_mode=WAL;

In Python:

import sqlite3

conn = sqlite3.connect("mydb.sqlite3")
conn.execute("PRAGMA journal_mode=WAL;")

In Node.js with better-sqlite3:

const Database = require('better-sqlite3');
const db = new Database('mydb.sqlite3');
db.pragma('journal_mode = WAL');

In Django, add it to your DATABASES config (the init_command option for SQLite requires Django 5.1 or newer; on older versions, run the pragma from a connection_created signal handler instead):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': BASE_DIR / 'db.sqlite3',
        'OPTIONS': {
            'init_command': 'PRAGMA journal_mode=WAL;',
        },
    }
}

Note: WAL mode is persistent. You set it once and it stays until you explicitly change it back. You can verify the current mode with PRAGMA journal_mode;.
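A quick way to confirm the persistence claim, sketched against a throwaway temp-directory database:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.sqlite3")

conn = sqlite3.connect(path)
# The pragma returns the mode that is now in effect
mode = conn.execute("PRAGMA journal_mode=WAL;").fetchone()[0]
conn.close()

# A brand-new connection sees WAL without setting anything
conn2 = sqlite3.connect(path)
mode2 = conn2.execute("PRAGMA journal_mode;").fetchone()[0]
conn2.close()
print(mode, mode2)  # wal wal
```

Because the mode is stored in the database file itself, a one-off migration script is enough; your application code never has to set it again.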

WAL mode does have trade-offs. Read performance is slightly lower for very large transactions because readers must check the WAL file. And the WAL file can grow large if a long-running read transaction prevents checkpointing. But for the vast majority of use cases, WAL mode is strictly better.

Pro Tip: An experimental WAL2 journal mode (PRAGMA journal_mode=WAL2;) offers better write concurrency for some workloads, but it lives on a separate branch of the SQLite source tree; standard SQLite distributions don’t include it. Stick with regular WAL unless you’re building SQLite from source.

Fix 2: Set a Busy Timeout

By default, SQLite returns SQLITE_BUSY the instant it encounters a lock. It doesn’t retry, it doesn’t wait — it just fails. Setting a busy timeout tells SQLite to wait a specified number of milliseconds before giving up.

In Python:

conn = sqlite3.connect("mydb.sqlite3")
conn.execute("PRAGMA busy_timeout = 5000;")  # Wait up to 5 seconds

Or use the timeout parameter directly:

conn = sqlite3.connect("mydb.sqlite3", timeout=10)  # 10 seconds

In Node.js with better-sqlite3:

const db = new Database('mydb.sqlite3', { timeout: 5000 });

In Go with mattn/go-sqlite3:

db, err := sql.Open("sqlite3", "mydb.sqlite3?_busy_timeout=5000")

In the SQLite CLI:

.timeout 5000

A timeout of 5000ms (5 seconds) works well for most applications. For high-write workloads, increase it to 10–30 seconds. If you’re still hitting timeouts after 30 seconds, the underlying problem is something else — likely a long-running transaction or an unclosed connection.

Note: The busy timeout uses a back-off retry strategy. SQLite doesn’t spin-lock for the full duration. It sleeps for short intervals between retry attempts, gradually increasing the sleep time.
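The difference is easy to demonstrate. In this sketch (throwaway temp database, arbitrary 0.5-second hold time), a background thread holds the write lock while a second connection with a 5-second timeout waits it out instead of failing:

```python
import os
import sqlite3
import tempfile
import threading
import time

path = os.path.join(tempfile.mkdtemp(), "demo.sqlite3")

init = sqlite3.connect(path)
init.execute("CREATE TABLE t (x)")
init.commit()
init.close()

def hold_lock():
    conn = sqlite3.connect(path)
    conn.execute("BEGIN IMMEDIATE")  # take the write lock
    time.sleep(0.5)                  # simulate a slow transaction
    conn.commit()
    conn.close()

t = threading.Thread(target=hold_lock)
t.start()
time.sleep(0.1)  # make sure the lock is held before we try to write

# With timeout=5, this write retries until the lock is released
b = sqlite3.connect(path, timeout=5)
b.execute("INSERT INTO t VALUES (1)")
b.commit()
t.join()
count = b.execute("SELECT count(*) FROM t").fetchone()[0]
b.close()
print(count)  # 1
```

With timeout=0 the second insert would fail instantly; with the timeout it succeeds roughly half a second later, once the background transaction commits.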

Fix 3: Close Connections Properly

An unclosed connection can hold a lock forever. This is one of the most common causes of persistent “database is locked” errors, especially in Python scripts and web applications.

Always use context managers or try/finally blocks:

Python — use a context manager (with an explicit close):

import sqlite3

conn = sqlite3.connect("mydb.sqlite3")
try:
    # The with block commits on success and rolls back on error
    with conn:
        conn.execute("INSERT INTO users (name) VALUES (?)", ("Alice",))
finally:
    conn.close()  # Must be closed explicitly; the with block doesn't do it

Node.js — close on exit:

const db = new Database('mydb.sqlite3');

process.on('exit', () => db.close());
process.on('SIGINT', () => { db.close(); process.exit(); });
process.on('SIGTERM', () => { db.close(); process.exit(); });

// Or in an Express app, close on server shutdown

Python — watch out for this mistake:

# BAD: Connection stays open if an exception occurs between connect and close
conn = sqlite3.connect("mydb.sqlite3")
conn.execute("INSERT INTO users (name) VALUES (?)", ("Alice",))
conn.commit()
conn.close()  # Never reached if execute() throws

Common Mistake: In Python, the with sqlite3.connect(...) context manager commits or rolls back the transaction when the block exits, but it does not close the connection. You need an explicit conn.close() after the with block, or use contextlib.closing():

from contextlib import closing

with closing(sqlite3.connect("mydb.sqlite3")) as conn:
    with conn:  # This handles commit/rollback
        conn.execute("INSERT INTO users (name) VALUES (?)", ("Alice",))

Check for orphaned connections by looking at the -wal and -shm files. If these files exist when no process should be accessing the database, something didn’t close properly.

Fix 4: Keep Transactions Short

Long-running transactions are a primary cause of lock contention. A write transaction that stays open for seconds — or minutes — blocks every other write for its entire duration.

Bad — transaction open during slow I/O:

conn = sqlite3.connect("mydb.sqlite3")
conn.execute("BEGIN")
for url in urls:
    data = requests.get(url).json()  # Network I/O inside a transaction
    conn.execute("INSERT INTO results (data) VALUES (?)", (json.dumps(data),))
conn.commit()

Good — fetch first, then write in a quick batch:

conn = sqlite3.connect("mydb.sqlite3")

# Do slow work outside the transaction
results = []
for url in urls:
    results.append(requests.get(url).json())

# Write everything in one fast transaction
conn.execute("BEGIN")
for data in results:
    conn.execute("INSERT INTO results (data) VALUES (?)", (json.dumps(data),))
conn.commit()

The same principle applies to any slow operation: file I/O, API calls, user input, or complex computations. Do the slow work outside the transaction, then write the results in a single, fast batch.

For bulk inserts, use executemany() — it’s faster than individual execute() calls and keeps the transaction shorter:

conn.executemany(
    "INSERT INTO results (data) VALUES (?)",
    [(json.dumps(d),) for d in results]
)
conn.commit()

Fix 5: Use Connection Pooling Correctly

Connection pooling can either help or hurt, depending on how you configure it. The key rule for SQLite: use a single writer connection and multiple reader connections.

In Python with SQLAlchemy:

from sqlalchemy import create_engine

# Limit the pool to a single connection so writes are serialized
engine = create_engine(
    "sqlite:///mydb.sqlite3",
    connect_args={"check_same_thread": False},
    pool_size=1,       # one pooled connection handles all writes
    max_overflow=0,    # never open extra connections
)

For multi-threaded applications, the pattern is to serialize writes behind a mutex (one writer at a time) and use read-only connections for everything else:

import threading
import sqlite3

class SQLitePool:
    def __init__(self, db_path):
        self.db_path = db_path
        self.write_lock = threading.Lock()

    def write(self, query, params=()):
        with self.write_lock:
            conn = sqlite3.connect(self.db_path)
            conn.execute("PRAGMA journal_mode=WAL;")
            conn.execute("PRAGMA busy_timeout=5000;")
            try:
                conn.execute(query, params)
                conn.commit()
            finally:
                conn.close()

    def read(self, query, params=()):
        # Read-only connection: it can never take a write lock
        conn = sqlite3.connect(f"file:{self.db_path}?mode=ro", uri=True)
        try:
            return conn.execute(query, params).fetchall()
        finally:
            conn.close()

Why this matters: SQLite handles multiple readers gracefully, but concurrent writers always contend for the same lock. By funneling all writes through a single connection (or serializing them with a lock), you eliminate write contention entirely. Reads can happen in parallel without issue, especially in WAL mode.

In frameworks like Django and Flask, SQLite pooling is often handled automatically. But if you’re managing connections manually, this pattern prevents the “database is locked” error in multi-threaded environments.
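As a usage sketch of the single-writer rule (hypothetical table name and thread count), twenty threads can write without a single “database is locked” error when every write funnels through one mutex:

```python
import os
import sqlite3
import tempfile
import threading

path = os.path.join(tempfile.mkdtemp(), "demo.sqlite3")

init = sqlite3.connect(path)
init.execute("PRAGMA journal_mode=WAL;")
init.execute("CREATE TABLE hits (n)")
init.commit()
init.close()

write_lock = threading.Lock()

def record(n):
    # All threads funnel through one lock, so writes never contend
    with write_lock:
        conn = sqlite3.connect(path)
        conn.execute("PRAGMA busy_timeout=5000;")
        conn.execute("INSERT INTO hits VALUES (?)", (n,))
        conn.commit()
        conn.close()

threads = [threading.Thread(target=record, args=(i,)) for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

conn = sqlite3.connect(path)
total = conn.execute("SELECT count(*) FROM hits").fetchone()[0]
conn.close()
print(total)  # 20
```

Without the lock, some of those inserts would race for the write lock; the busy timeout would usually save them, but the mutex removes the contention entirely.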

Fix 6: Use Read-Only Connections for Reads

If a connection only needs to read data, open it in read-only mode. This prevents accidental writes and avoids acquiring write locks unnecessarily.

In Python:

conn = sqlite3.connect("file:mydb.sqlite3?mode=ro", uri=True)
cursor = conn.execute("SELECT * FROM users")
rows = cursor.fetchall()
conn.close()

In Node.js with better-sqlite3:

const db = new Database('mydb.sqlite3', { readonly: true });

In Go:

db, err := sql.Open("sqlite3", "file:mydb.sqlite3?mode=ro")

Read-only connections have two benefits:

  1. They never acquire a RESERVED or EXCLUSIVE lock, so they can’t block writers.
  2. They fail fast if code accidentally tries to write, which catches bugs early.

In WAL mode, read-only connections can operate completely independently of write operations. This is the ideal setup for web applications where most requests are reads.
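The fail-fast behavior is easy to verify with a throwaway temp database:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.sqlite3")

rw = sqlite3.connect(path)
rw.execute("CREATE TABLE users (name)")
rw.commit()
rw.close()

# mode=ro: any write attempt raises immediately instead of taking a lock
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
err = None
try:
    ro.execute("INSERT INTO users VALUES ('Alice')")
except sqlite3.OperationalError as e:
    err = str(e)
ro.close()
print(err)  # attempt to write a readonly database
```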

Fix 7: Check for External Processes Holding Locks

Sometimes the lock isn’t caused by your code — another process has the database open. This is common when:

  • A database browser (DB Browser for SQLite, DBeaver) has the file open
  • A crashed process left a stale lock
  • A cron job or background worker is running concurrent writes
  • An IDE plugin is auto-inspecting the database file

Find processes using the database file:

Linux/macOS:

fuser mydb.sqlite3
lsof mydb.sqlite3

Windows:

handle.exe mydb.sqlite3

If you find a stale lock from a crashed process, check for leftover journal files:

ls -la mydb.sqlite3*

You might see:

mydb.sqlite3
mydb.sqlite3-journal   # DELETE mode journal
mydb.sqlite3-wal       # WAL mode log
mydb.sqlite3-shm       # WAL mode shared memory

If no process is using the database and a -journal file exists, SQLite will automatically recover it on the next connection. If recovery fails, you can try:

sqlite3 mydb.sqlite3 "PRAGMA integrity_check;"

Warning: Never manually delete -wal or -shm files while any process has the database open. This corrupts the database. Only remove them if you’re certain no process is connected and integrity_check passes.

Fix 8: Avoid SQLite on Network Filesystems

SQLite depends on OS-level file locking for correctness. Network filesystems (NFS, SMB/CIFS, SSHFS) often implement file locking incorrectly or not at all. The result: silent data corruption, phantom “database is locked” errors, or both.

Common scenarios where this bites you:

  • Docker volumes backed by network mounts
  • NFS home directories on shared servers
  • WSL2 accessing files on the Windows filesystem (/mnt/c/...) — this works but has performance and locking quirks
  • Cloud-synced folders (Dropbox, Google Drive, OneDrive) — sync engines modify the file externally and break locks

If you must use SQLite on a network path, consider these workarounds:

  1. Copy the database locally, work with it, then copy it back:

     cp /nfs/shared/mydb.sqlite3 /tmp/mydb.sqlite3
     # Do your work on /tmp/mydb.sqlite3
     cp /tmp/mydb.sqlite3 /nfs/shared/mydb.sqlite3

  2. Use SQLite’s PRAGMA locking_mode=EXCLUSIVE to hold the lock for the entire session, reducing the window for NFS locking bugs:

     PRAGMA locking_mode=EXCLUSIVE;

  3. Switch to PostgreSQL or MySQL if multiple processes need concurrent access on a network filesystem. This is the correct solution for shared, multi-process production workloads. SQLite is designed for local, single-application use.

If you’re running into similar locking issues in MySQL, the error message and solutions are different — see MySQL Lock Wait Timeout Exceeded for details on that scenario.

Still Not Working?

If none of the above fixes solved your problem, work through these less common causes:

Check your SQLite version. WAL mode and busy timeout behavior improved significantly in SQLite 3.7.0+ and again in 3.34.0+. Check your version:

SELECT sqlite_version();

Or from Python:

import sqlite3
print(sqlite3.sqlite_version)

If you’re on an older version, upgrade. Many Linux distributions ship outdated SQLite libraries.

Check for disk space. SQLite needs space to create journal and WAL files. If the disk is full, transactions fail with a lock error instead of a disk-full error:

df -h .

Check file permissions. SQLite needs write access to the database file and the directory containing it (for creating journal files). This trips people up when they’ve set the file writable but the directory is read-only:

ls -la mydb.sqlite3
ls -la $(dirname mydb.sqlite3)

If you’re seeing file permission errors in Python, that’s a different category of problem — check Python PermissionError: errno 13 for those fixes.

Check for antivirus interference. Some antivirus software on Windows locks database files during scanning, causing intermittent “database is locked” errors. Add your database directory to the antivirus exclusion list.

Consider switching databases. SQLite is excellent for embedded applications, local development, and read-heavy workloads with moderate traffic. But it’s not designed for high-concurrency write workloads. If you’re running a production web app with dozens of concurrent write operations per second, PostgreSQL is the right tool. A connection refused error in PostgreSQL is easier to debug than fighting SQLite’s concurrency model at scale.

Debug with statement tracing. If you need to find exactly which query is holding the lock, enable SQLite statement tracing:

conn.set_trace_callback(print)

This prints every SQL statement as it executes, helping you identify the long-running transaction that’s causing contention.
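For example, with an in-memory database the callback captures every statement SQLite executes, including the transaction-control statements issued around writes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

statements = []
conn.set_trace_callback(statements.append)  # record every executed statement

conn.execute("CREATE TABLE t (x)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()
conn.close()
print(statements)
```

In a real application you would log with a timestamp instead of appending to a list, then look for the statement whose BEGIN has no nearby COMMIT.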

Check your ORM’s transaction management. Django’s ATOMIC_REQUESTS setting (off by default, but commonly enabled) wraps every HTTP request in a transaction. If a view takes 10 seconds (due to external API calls, slow template rendering, etc.), that transaction holds a lock for the entire request. Disable ATOMIC_REQUESTS and manage transactions manually around the specific queries that need them. The same principle applies if you’re hitting Django OperationalError: no such table — ORM configuration issues compound locking problems.

For Python applications throwing KeyError exceptions during database result processing, that’s typically a code issue rather than a locking issue — see Python KeyError for fixes.

If your Node.js application can’t find the SQLite module itself, that’s a separate installation problem. Check Node Cannot Find Module for that fix.


FixDevs

Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
