
Fix: MySQL Deadlock Found When Trying to Get Lock

FixDevs

Quick Answer

How to fix MySQL 'Deadlock found when trying to get lock; try restarting transaction' — diagnosing deadlock causes, using SHOW ENGINE INNODB STATUS, and preventing deadlocks with consistent lock ordering.

The Error

Your application throws:

Deadlock found when trying to get lock; try restarting transaction

Or in full:

SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock;
try restarting transaction

The operation fails, and depending on your application code, either the transaction is retried or the request fails entirely.

Why This Happens

A deadlock occurs when two (or more) transactions each hold a lock the other needs, creating a circular wait:

Transaction A holds lock on Row 1, waits for lock on Row 2
Transaction B holds lock on Row 2, waits for lock on Row 1
→ Neither can proceed — deadlock

MySQL’s InnoDB storage engine detects this automatically and kills one transaction (the “victim”) to break the deadlock, returning the 1213 error to that transaction.
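Conceptually, InnoDB finds deadlocks by looking for a cycle in the waits-for graph (which transaction is waiting on which). A minimal sketch of that idea in Python; the graph and transaction names here are purely illustrative:

```python
def has_cycle(waits_for):
    """Detect a cycle in a waits-for graph given as {txn: txn_it_waits_on}."""
    for start in waits_for:
        seen = set()
        node = start
        while node in waits_for:   # Follow the chain of waiters
            if node in seen:       # Revisited a node: circular wait
                return True
            seen.add(node)
            node = waits_for[node]
    return False

# Transaction A waits on B, B waits on A -> deadlock
print(has_cycle({"A": "B", "B": "A"}))   # True
# A waits on B, B waits on nobody -> just a normal lock wait
print(has_cycle({"A": "B"}))             # False
```

When a cycle is found, InnoDB picks the transaction it judges cheapest to roll back as the victim, which is the one that receives error 1213.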

Common causes:

  • Inconsistent lock ordering — Transaction A locks rows in order 1→2, Transaction B locks 2→1.
  • Large transactions holding locks for a long time — longer lock hold times increase the chance of conflict.
  • Gap locks and next-key locks — InnoDB locks gaps between rows to prevent phantom reads, which can cause unexpected deadlocks with INSERT statements.
  • Missing indexes — a query without an index locks the entire table instead of specific rows.
  • ON DUPLICATE KEY UPDATE and INSERT ... SELECT — these acquire multiple locks and are deadlock-prone.
  • Bulk operations — multiple rows updated in different orders by concurrent transactions.

Fix 1: Diagnose with SHOW ENGINE INNODB STATUS

Always start with the InnoDB status output — it shows the last deadlock in detail:

SHOW ENGINE INNODB STATUS\G

Look for the LATEST DETECTED DEADLOCK section:

------------------------
LATEST DETECTED DEADLOCK
------------------------
2026-03-15 10:23:45 0x7f...
*** (1) TRANSACTION:
TRANSACTION 421398, ACTIVE 0 sec starting index read
MySQL thread id 42, OS thread handle 0x7f..., query id 1234 localhost app
UPDATE orders SET status = 'processing' WHERE id = 100

*** (1) HOLDS THE LOCK(S):
RECORD LOCKS space id 45 page no 3 n bits 72 index PRIMARY of table `mydb`.`orders`
trx id 421398 lock_mode X locks rec but not gap

*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 46 page no 5 n bits 72 index PRIMARY of table `mydb`.`order_items`

*** (2) TRANSACTION:
TRANSACTION 421399, ACTIVE 0 sec fetching rows
UPDATE order_items SET quantity = 2 WHERE order_id = 100

*** (2) HOLDS THE LOCK(S):
RECORD LOCKS ... table `mydb`.`order_items`

*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS ... table `mydb`.`orders`

*** WE ROLL BACK TRANSACTION (2)

This shows exactly which transactions conflicted, which rows were locked, and which transaction was killed. Use this to understand the lock ordering issue.

Fix 2: Use Consistent Lock Ordering

The most effective prevention: always acquire locks in the same order across all transactions:

Broken — inconsistent order causes deadlock:

-- Transaction A: locks orders first, then order_items
BEGIN;
UPDATE orders SET status = 'processing' WHERE id = 100;
UPDATE order_items SET quantity = 2 WHERE order_id = 100;
COMMIT;

-- Transaction B (concurrent): locks order_items first, then orders
BEGIN;
UPDATE order_items SET quantity = 3 WHERE order_id = 100;
UPDATE orders SET status = 'confirmed' WHERE id = 100;
COMMIT;
-- Deadlock: A holds orders, waits for order_items
--           B holds order_items, waits for orders

Fixed — consistent order (always lock orders before order_items):

-- Transaction A
BEGIN;
UPDATE orders SET status = 'processing' WHERE id = 100;
UPDATE order_items SET quantity = 2 WHERE order_id = 100;
COMMIT;

-- Transaction B — same order
BEGIN;
UPDATE orders SET status = 'confirmed' WHERE id = 100;  -- Lock orders first
UPDATE order_items SET quantity = 3 WHERE order_id = 100;
COMMIT;
-- No deadlock: both wait for orders lock, one proceeds when the other commits

Application-level ordering for bulk operations:

# When updating multiple rows, always sort by primary key
def update_items(item_ids, new_value):
    sorted_ids = sorted(item_ids)  # Consistent order
    for item_id in sorted_ids:
        db.execute("UPDATE items SET value = %s WHERE id = %s", (new_value, item_id))

Why this works: A deadlock requires a cycle in the lock wait graph. If all transactions acquire locks in the same order, no cycle can form — a transaction waiting for a lock always waits for one held by a transaction that will eventually finish, not one waiting for it.
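The same guarantee holds for plain in-process locks, which makes it easy to demonstrate: if every worker sorts the resources it needs before acquiring them, no cycle can form no matter what order callers pass the IDs in. A toy sketch (the resource IDs and `completed` list are made up for illustration):

```python
import threading

locks = {1: threading.Lock(), 2: threading.Lock()}
completed = []

def worker(name, needed_ids):
    # Always acquire locks in sorted ID order, regardless of input order
    for rid in sorted(needed_ids):
        locks[rid].acquire()
    try:
        completed.append(name)  # Critical section
    finally:
        for rid in sorted(needed_ids, reverse=True):
            locks[rid].release()

# One worker asks for 1 then 2, the other for 2 then 1; sorting means
# both actually acquire 1 before 2, so a circular wait is impossible
t1 = threading.Thread(target=worker, args=("A", [1, 2]))
t2 = threading.Thread(target=worker, args=("B", [2, 1]))
t1.start(); t2.start()
t1.join(); t2.join()
print(completed)  # Both workers finish (order may vary)
```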

Fix 3: Implement Retry Logic for Deadlocks

Deadlocks are expected in high-concurrency systems. The right response is to retry the transaction — MySQL itself recommends this in the error message (“try restarting transaction”):

Python with SQLAlchemy:

from sqlalchemy.exc import OperationalError
import time

def execute_with_retry(session, operation, max_retries=3):
    for attempt in range(max_retries):
        try:
            result = operation(session)
            session.commit()
            return result
        except OperationalError as e:
            session.rollback()
            if e.orig.args[0] == 1213 and attempt < max_retries - 1:
                time.sleep(0.1 * (2 ** attempt))  # Exponential backoff
                continue
            raise  # Non-deadlock error, or deadlock retries exhausted

Node.js with mysql2:

async function executeWithRetry(connection, queryFn, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      await connection.beginTransaction();
      const result = await queryFn(connection);
      await connection.commit();
      return result;
    } catch (err) {
      await connection.rollback();
      if (err.errno === 1213 && attempt < maxRetries - 1) {
        // Exponential backoff before retry
        await new Promise(resolve => setTimeout(resolve, 100 * Math.pow(2, attempt)));
        continue;
      }
      throw err;
    }
  }
}

Retry with exponential backoff gives the competing transaction time to complete before retrying, reducing the chance of hitting the same deadlock again.
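The retry-plus-backoff pattern also factors out into a small helper that is independent of any driver. A sketch using a stand-in `DeadlockError` so it runs without a database; in real code you would match the driver's exception carrying MySQL error code 1213 instead:

```python
import time

class DeadlockError(Exception):
    """Stand-in for a driver's deadlock exception (MySQL errno 1213)."""

def retry_on_deadlock(fn, max_retries=3, base_delay=0.01):
    for attempt in range(max_retries):
        try:
            return fn()
        except DeadlockError:
            if attempt == max_retries - 1:
                raise  # Retries exhausted: surface the deadlock
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s...

# Simulate a transaction that deadlocks twice, then succeeds
attempts = {"n": 0}
def flaky_transaction():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise DeadlockError("Deadlock found when trying to get lock")
    return "committed"

print(retry_on_deadlock(flaky_transaction))  # committed
```

Keep `max_retries` small: a transaction that deadlocks every time it runs is a lock-ordering bug to fix, not something to retry forever.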

Fix 4: Add Indexes to Reduce Lock Scope

Without proper indexes, InnoDB may lock more rows (or the entire table) than necessary, increasing deadlock chances:

-- Without index on order_id, this locks all rows scanned
UPDATE order_items SET quantity = 2 WHERE order_id = 100;
-- InnoDB may lock ALL rows in order_items to find matching ones

-- Check if the index exists
SHOW INDEX FROM order_items;

-- Add the missing index
ALTER TABLE order_items ADD INDEX idx_order_id (order_id);

-- Now the UPDATE only locks matching rows — much smaller lock scope

Find queries causing full table scans:

EXPLAIN UPDATE order_items SET quantity = 2 WHERE order_id = 100;
-- Look for: type = 'ALL' (full scan) — needs an index
-- Good: type = 'ref' or 'eq_ref' — uses index

Fix 5: Keep Transactions Short

Long transactions hold locks for a long time, increasing the window for conflicts:

Broken — slow work inside a transaction:

BEGIN;
-- Fetch and lock the pending rows
SELECT * FROM orders WHERE status = 'pending' FOR UPDATE;

-- Application does slow processing here (seconds)
-- The locked rows stay locked the entire time

UPDATE orders SET status = 'processing' WHERE id = 100;
COMMIT;

Fixed — do slow work outside the transaction:

# Do computation outside the transaction
pending_orders = db.query("SELECT * FROM orders WHERE status = 'pending'")
order_ids_to_process = [o.id for o in pending_orders if compute_eligibility(o)]

# Only lock rows during the actual update
with db.transaction():
    for order_id in order_ids_to_process:
        db.execute("UPDATE orders SET status = 'processing' WHERE id = %s", order_id)

Minimize transaction scope:

-- Instead of one large transaction:
BEGIN;
UPDATE table_a SET ...;
-- slow join or subquery
UPDATE table_b SET ... WHERE id IN (SELECT id FROM table_a WHERE ...);
COMMIT;

-- Break into smaller transactions where possible:
-- Process table_a first, commit, then process table_b

Fix 6: Use SELECT … FOR UPDATE to Control Lock Timing

Explicitly acquiring locks with SELECT ... FOR UPDATE at the start of a transaction can prevent deadlocks by making lock order predictable:

-- Acquire locks in a known order at the START of the transaction
BEGIN;

-- Lock both rows upfront before any updates
SELECT id FROM orders WHERE id = 100 FOR UPDATE;
SELECT id FROM order_items WHERE order_id = 100 FOR UPDATE;

-- Now update — locks are already held
UPDATE orders SET status = 'processing' WHERE id = 100;
UPDATE order_items SET quantity = 2 WHERE order_id = 100;

COMMIT;

By acquiring all needed locks immediately at the start of the transaction (in a consistent order), you reduce the window for deadlocks and make the locking explicit.

Use SKIP LOCKED for queue-style processing:

-- Each worker grabs the next available row without waiting for locks
BEGIN;
SELECT id INTO @selected_id FROM jobs
WHERE status = 'pending'
ORDER BY created_at
LIMIT 1
FOR UPDATE SKIP LOCKED;

-- Process the selected row
UPDATE jobs SET status = 'processing' WHERE id = @selected_id;
COMMIT;

SKIP LOCKED skips rows that are already locked instead of waiting — ideal for job queues where multiple workers process different rows concurrently.

Fix 7: Monitor and Tune InnoDB Lock Settings

-- Check current lock wait timeout (default: 50 seconds)
-- Note: this governs plain lock waits; actual deadlocks are detected
-- immediately when innodb_deadlock_detect is ON (the default)
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';

-- Reduce to fail faster and retry sooner (e.g., 5 seconds)
-- SET GLOBAL affects new connections; use SET SESSION for the current one
SET GLOBAL innodb_lock_wait_timeout = 5;

-- Check the deadlock counter (stock MySQL)
SELECT COUNT FROM information_schema.INNODB_METRICS
WHERE NAME = 'lock_deadlocks';

-- MariaDB and Percona Server also expose a status variable:
-- SHOW STATUS LIKE 'Innodb_deadlocks';

-- Enable deadlock logging for ongoing monitoring
SET GLOBAL innodb_print_all_deadlocks = ON;
-- Deadlocks now appear in the MySQL error log (location varies; often /var/log/mysql/error.log)

Track deadlock frequency over time:

-- Store deadlock counts periodically
CREATE TABLE deadlock_stats (
  recorded_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  deadlock_count BIGINT
);

-- Insert current count (run via cron or monitoring)
INSERT INTO deadlock_stats (deadlock_count)
SELECT COUNT FROM information_schema.INNODB_METRICS
WHERE NAME = 'lock_deadlocks';

Still Not Working?

Check for INSERT ... ON DUPLICATE KEY UPDATE deadlocks. This statement acquires a shared lock then upgrades to exclusive, which is deadlock-prone. Consider using INSERT IGNORE or application-level duplicate checking.

Check for implicit commits. DDL statements (ALTER TABLE, CREATE INDEX) implicitly commit the current transaction, releasing its locks earlier than you intended and silently changing your transaction boundaries.

Consider using a lower isolation level. READ COMMITTED instead of REPEATABLE READ (MySQL's default) uses less aggressive locking: for ordinary index scans it does not take gap locks, which eliminates a whole class of deadlocks:

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

Trade-off: under READ COMMITTED each statement reads a fresh snapshot, so within a single transaction you can see rows committed by other transactions between your own reads (non-repeatable and phantom reads).

For lock wait timeout errors (different from deadlocks), see Fix: MySQL Lock Wait Timeout Exceeded. For too many connections, see Fix: MySQL Too Many Connections.
