Fix: PostgreSQL ERROR: deadlock detected

FixDevs

Quick Answer

How to fix the PostgreSQL "deadlock detected" error caused by concurrent transactions, conflicting row locks, foreign key locks, and inconsistent lock ordering.

The Error

Your application logs show:

ERROR: deadlock detected
DETAIL: Process 1234 waits for ShareLock on transaction 5678; blocked by process 9012.
Process 9012 waits for ShareLock on transaction 4321; blocked by process 1234.
HINT: See server log for query details.
CONTEXT: while updating tuple (0,1) in relation "orders"

Or from your ORM:

OperationalError: deadlock detected
org.postgresql.util.PSQLException: ERROR: deadlock detected

Two or more transactions are each waiting for a lock the other holds, creating a circular dependency. After deadlock_timeout elapses, PostgreSQL detects the cycle and aborts one of the transactions to break it.

Why This Happens

A deadlock occurs when:

  1. Transaction A locks Row 1 and then tries to lock Row 2.
  2. Transaction B locks Row 2 and then tries to lock Row 1.
  3. Both are waiting for the other — neither can proceed.

PostgreSQL’s deadlock detector (controlled by deadlock_timeout, default 1 second) periodically checks for these cycles. When it finds one, it aborts one of the transactions with the “deadlock detected” error.
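Conceptually, the detector builds a "wait-for" graph (who is blocked on whom) and looks for a cycle. A minimal sketch of that idea in plain Python, purely for illustration (this is not PostgreSQL's actual implementation):

```python
def has_deadlock(waits_for):
    """Detect a cycle in a wait-for graph.

    waits_for maps a process id to the process id it is blocked on
    (or None if it is not waiting). A cycle means a deadlock.
    """
    for start in waits_for:
        seen = set()
        current = start
        while current is not None:
            if current in seen:
                return True  # revisited a process: circular wait
            seen.add(current)
            current = waits_for.get(current)
    return False

# Process 1234 waits on 9012, and 9012 waits on 1234: deadlock.
print(has_deadlock({1234: 9012, 9012: 1234}))  # True
print(has_deadlock({1234: 9012, 9012: None}))  # False
```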

Common causes:

  • Inconsistent lock ordering. Different transactions lock rows in different orders.
  • Long-running transactions. Transactions that hold locks for extended periods increase deadlock chances.
  • Bulk updates on the same table. Multiple concurrent updates touching overlapping rows.
  • Foreign key cascading. Updates to parent tables that cascade to child tables can create unexpected lock patterns.
  • Explicit table locks. Using LOCK TABLE or SELECT ... FOR UPDATE in conflicting patterns.

Fix 1: Lock Rows in a Consistent Order

The most effective fix. If all transactions lock rows in the same order, deadlocks cannot occur:

Broken — inconsistent order:

-- Transaction A:
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;

-- Transaction B (opposite order):
UPDATE accounts SET balance = balance - 50 WHERE id = 2;
UPDATE accounts SET balance = balance + 50 WHERE id = 1;

Fixed — sort by primary key:

-- Transaction A:
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;

-- Transaction B (same order):
UPDATE accounts SET balance = balance + 50 WHERE id = 1;
UPDATE accounts SET balance = balance - 50 WHERE id = 2;

In application code:

def transfer(cursor, from_id, to_id, amount):
    # Always lock the lower ID first so every transfer acquires
    # row locks in the same order, regardless of direction.
    for account_id in sorted((from_id, to_id)):
        delta = -amount if account_id == from_id else amount
        cursor.execute(
            "UPDATE accounts SET balance = balance + %s WHERE id = %s",
            (delta, account_id),
        )

Pro Tip: The easiest ordering strategy is to always process rows sorted by primary key. This guarantees consistent ordering across all transactions without requiring complex logic.
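The same idea generalizes to any number of rows: sort the IDs once, then lock them in that order. A hypothetical sketch (the table and column names are placeholders, and `table` must come from trusted code, never user input):

```python
def lock_order(ids):
    """Return ids in the canonical locking order (ascending,
    de-duplicated) so every transaction locks rows in the same
    sequence."""
    return sorted(set(ids))

def lock_rows(cursor, table, ids):
    """Acquire row locks in canonical order before modifying them.

    `table` must be a trusted constant, not user input, because it
    is interpolated directly into the SQL string.
    """
    for row_id in lock_order(ids):
        cursor.execute(
            f"SELECT 1 FROM {table} WHERE id = %s FOR UPDATE", (row_id,)
        )
```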

Fix 2: Retry the Failed Transaction

Deadlocks are sometimes unavoidable in high-concurrency systems. The application should catch and retry:

Python:

import time
from psycopg2 import OperationalError

MAX_RETRIES = 3

def execute_with_retry(func):
    for attempt in range(MAX_RETRIES):
        try:
            return func()
        except OperationalError as e:
            if "deadlock detected" in str(e):
                if attempt < MAX_RETRIES - 1:
                    time.sleep(0.1 * (attempt + 1))  # Simple linear backoff
                    continue
            raise

Java:

int maxRetries = 3;
for (int attempt = 0; attempt < maxRetries; attempt++) {
    try {
        executeTransaction();
        break;
    } catch (PSQLException e) {
        // 40P01 is the SQLSTATE for deadlock_detected
        if ("40P01".equals(e.getSQLState()) && attempt < maxRetries - 1) {
            Thread.sleep(100L * (attempt + 1));  // caller must handle InterruptedException
            continue;
        }
        throw e;
    }
}

Django:

from django.db import OperationalError, transaction

MAX_RETRIES = 3

def process_order(order_id):
    for attempt in range(MAX_RETRIES):
        try:
            # The atomic block must sit INSIDE the retry loop so the
            # whole transaction restarts on each attempt.
            with transaction.atomic():
                order = Order.objects.select_for_update().get(id=order_id)
                order.status = "processed"
                order.save()
            return
        except OperationalError:
            if attempt == MAX_RETRIES - 1:
                raise

Common Mistake: Retrying without a delay. Two transactions that just deadlocked will likely deadlock again immediately if retried at the same time. Add randomized exponential backoff between retries.
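One way to compute that jittered delay, as a small sketch (the 0.1 s base and 2 s cap are arbitrary starting points, not recommendations):

```python
import random

def backoff_delay(attempt, base=0.1, cap=2.0):
    """Exponential backoff with full jitter: each retry waits a random
    amount up to base * 2**attempt, capped so delays stay bounded."""
    return random.uniform(0, min(cap, base * 2 ** attempt))

# Usage between retries: time.sleep(backoff_delay(attempt))
```

The randomness is the important part: it desynchronizes the two transactions that just collided, so they do not immediately deadlock again.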

Fix 3: Reduce Transaction Duration

Shorter transactions hold locks for less time, reducing deadlock probability:

Broken — long transaction with user interaction:

cursor.execute("BEGIN")
cursor.execute("SELECT * FROM orders WHERE id = 1 FOR UPDATE")
# ... wait for user input or slow API call ...
cursor.execute("UPDATE orders SET status = 'confirmed' WHERE id = 1")
cursor.execute("COMMIT")

Fixed — minimize lock duration:

# Do slow work BEFORE the transaction
user_input = get_user_confirmation()
api_result = call_external_api()

# Quick transaction with minimal lock time
cursor.execute("BEGIN")
cursor.execute("UPDATE orders SET status = %s WHERE id = 1", (api_result,))
cursor.execute("COMMIT")

Use batch operations instead of row-by-row:

-- Slow — locks one row at a time, holding earlier locks while locking later ones
UPDATE orders SET status = 'shipped' WHERE id = 1;
UPDATE orders SET status = 'shipped' WHERE id = 2;
UPDATE orders SET status = 'shipped' WHERE id = 3;

-- Faster — single statement, single lock acquisition
UPDATE orders SET status = 'shipped' WHERE id IN (1, 2, 3);
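For very large ID sets, a single statement can still hold many row locks at once; a common compromise is to update in moderate, sorted chunks and commit between them. A sketch of that pattern (the chunk size of 1000 and the `orders` query are illustrative assumptions):

```python
def chunked(ids, size=1000):
    """Split a list of ids into fixed-size chunks for batched UPDATEs."""
    for i in range(0, len(ids), size):
        yield ids[i:i + size]

def mark_shipped(cursor, ids):
    # Sort first so concurrent batch jobs lock rows in the same order.
    for chunk in chunked(sorted(ids)):
        cursor.execute(
            "UPDATE orders SET status = 'shipped' WHERE id = ANY(%s)",
            (chunk,),
        )
        cursor.connection.commit()  # release locks between chunks
```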

Fix 4: Use SELECT FOR UPDATE with SKIP LOCKED

For queue-like patterns where multiple workers process rows concurrently:

-- Worker picks up a row, skipping any that are locked by other workers
BEGIN;
SELECT id, data FROM jobs
WHERE status = 'pending'
ORDER BY created_at
LIMIT 1
FOR UPDATE SKIP LOCKED;

-- Process the job...

UPDATE jobs SET status = 'done' WHERE id = <selected_id>;
COMMIT;

SKIP LOCKED tells PostgreSQL to skip rows that are currently locked by other transactions, instead of waiting (and potentially deadlocking).

Alternative — FOR UPDATE NOWAIT:

SELECT * FROM jobs WHERE id = 1 FOR UPDATE NOWAIT;

NOWAIT raises an error immediately if the row is locked, instead of waiting. Your application can then try a different row.
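The difference between the two behaviors can be modeled in a few lines of plain Python (an illustration of the semantics only, not database code):

```python
def pick_job_skip_locked(pending, locked):
    """SKIP LOCKED semantics: return the first pending job that is not
    locked by another worker, or None if all are locked."""
    for job in pending:
        if job not in locked:
            return job
    return None

def pick_job_nowait(pending, locked):
    """NOWAIT semantics: try only the first pending job and fail
    immediately if it is locked (PostgreSQL raises SQLSTATE 55P03)."""
    if not pending:
        return None
    if pending[0] in locked:
        raise RuntimeError("could not obtain lock")
    return pending[0]
```

SKIP LOCKED quietly moves on to the next available row, which suits work queues; NOWAIT surfaces the contention to the application, which must decide what to do next.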

Fix 5: Fix Foreign Key Deadlocks

Foreign keys create implicit locks on parent tables when child rows are inserted, updated, or deleted. These locks can cause unexpected deadlocks:

-- Transaction A:
INSERT INTO order_items (order_id, product_id) VALUES (1, 100);
-- Takes a row-level FOR KEY SHARE lock on the orders row (id = 1)

-- Transaction B:
INSERT INTO order_items (order_id, product_id) VALUES (1, 200);
-- Also takes a FOR KEY SHARE lock on the same orders row

-- If both also try to update the orders table, deadlock!

Fix: Lock the parent row first:

-- Transaction A:
SELECT * FROM orders WHERE id = 1 FOR UPDATE;
INSERT INTO order_items (order_id, product_id) VALUES (1, 100);

-- Transaction B:
SELECT * FROM orders WHERE id = 1 FOR UPDATE;  -- Waits here, no deadlock
INSERT INTO order_items (order_id, product_id) VALUES (1, 200);

By explicitly locking the parent row first, both transactions queue up on the same lock instead of creating a cycle.

Fix 6: Monitor and Diagnose Deadlocks

Enable deadlock logging:

In postgresql.conf:

log_lock_waits = on
deadlock_timeout = 1s

This logs all lock wait events and deadlocks to the PostgreSQL log.

Query current locks:

SELECT
    pid,
    pg_blocking_pids(pid) AS blocked_by,
    query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;

Detailed lock information:

SELECT
    l.pid,
    l.locktype,
    l.mode,
    l.granted,
    a.query
FROM pg_locks l
JOIN pg_stat_activity a ON l.pid = a.pid
WHERE NOT l.granted
ORDER BY l.pid;

Check for idle-in-transaction sessions:

SELECT pid, state, query, now() - xact_start AS duration
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY duration DESC;

Idle-in-transaction sessions hold locks indefinitely. Set a timeout to prevent this:

ALTER DATABASE mydb SET idle_in_transaction_session_timeout = '5min';

Fix 7: Use Advisory Locks

For application-level locking that does not conflict with row locks:

-- Acquire an advisory lock (blocks until available)
SELECT pg_advisory_lock(hashtext('process_order_' || order_id::text));

-- Do work...

-- Release the lock
SELECT pg_advisory_unlock(hashtext('process_order_' || order_id::text));

Try-lock variant (non-blocking):

SELECT pg_try_advisory_lock(12345);
-- Returns true if acquired, false if already locked

Advisory locks are not tied to tables or rows. They are application-defined and do not interfere with regular DML operations. Session-level advisory locks are held until you call pg_advisory_unlock or the session ends, so always release them; pg_advisory_xact_lock is an alternative that releases automatically at transaction end.
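pg_advisory_lock also accepts a single bigint key. If you prefer to derive a stable 64-bit key in the application rather than calling hashtext in SQL, one possible sketch (the key scheme here is an assumption, not a PostgreSQL convention):

```python
import hashlib

def advisory_key(name):
    """Derive a stable signed 64-bit key from a string, suitable for
    passing to pg_advisory_lock(bigint)."""
    digest = hashlib.sha256(name.encode()).digest()
    # First 8 bytes as a signed 64-bit integer (PostgreSQL's bigint
    # is signed).
    return int.from_bytes(digest[:8], "big", signed=True)

# Usage (hypothetical):
# cursor.execute("SELECT pg_advisory_lock(%s)",
#                (advisory_key("process_order_42"),))
```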

Fix 8: Tune deadlock_timeout

The deadlock_timeout setting controls how long PostgreSQL waits before checking for deadlocks:

SHOW deadlock_timeout;
-- Default: 1s

  • Lower value (100ms-500ms): detects deadlocks faster, but adds CPU overhead from more frequent checks.
  • Higher value (5s-10s): less overhead, but deadlocked transactions wait longer before one is aborted.

For most applications, the default of 1 second is appropriate. Raise it only if many legitimate lock waits exceed the timeout and the detection checks add measurable overhead; lower it if you need deadlocks aborted faster.

Still Not Working?

Check for application-level deadlocks. Your application code might have its own mutexes or locks that conflict with database locks. Review synchronization primitives in your code.

Check for connection pool contention. If your connection pool is exhausted, transactions wait for a connection while holding locks. This can create deadlock-like situations. Increase the pool size or reduce transaction duration.

For PostgreSQL connection issues, see Fix: PostgreSQL connection refused. For role authentication issues, see Fix: PostgreSQL FATAL: role does not exist.

Check for GIN/GiST index locks. Full-text search and geometric operations can create lock contention on index pages. Consider using CONCURRENTLY for index creation.

Consider using serializable isolation. The SERIALIZABLE isolation level aborts any transaction that would create a serialization anomaly (SQLSTATE 40001) rather than letting it commit. This can reduce some deadlock-prone patterns, but your application must retry aborted transactions, just as with deadlocks:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

For unique constraint violations during concurrent inserts, see Fix: PostgreSQL duplicate key violates unique constraint.

FixDevs

Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
