
Fix: Redis OOM command not allowed when used memory > maxmemory

FixDevs

Quick Answer

Redis has hit its configured maxmemory limit and its maxmemory-policy is rejecting writes. Fix it by raising maxmemory, switching to an eviction policy such as allkeys-lru, deleting or trimming large keys, adding TTLs, or reducing memory fragmentation.

The Error

Your application gets this error from Redis:

OOM command not allowed when used memory > 'maxmemory'.

Or from client libraries:

redis.exceptions.ResponseError: OOM command not allowed when used memory > 'maxmemory'.
ReplyError: OOM command not allowed when used memory > 'maxmemory'.
io.lettuce.core.RedisCommandExecutionException: OOM command not allowed when used memory > 'maxmemory'.

Redis has reached its configured memory limit and refuses to accept write commands. Read commands still work, but any command that would increase memory usage is rejected.
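On the application side, cache writes should usually fail soft rather than crash the request. A minimal sketch of detecting this specific error (the helper name is my own; the message string is the one Redis returns):

```python
def is_redis_oom(error_message: str) -> bool:
    """True if a Redis error message is the OOM write rejection."""
    return error_message.startswith("OOM command not allowed")

# The exact message Redis returns when maxmemory is exceeded:
print(is_redis_oom("OOM command not allowed when used memory > 'maxmemory'."))  # True
print(is_redis_oom("WRONGTYPE Operation against a key"))                        # False
```

In practice you would wrap cache writes in a try/except for your client's error type (redis.exceptions.ResponseError in redis-py), skip the write when this matches, and still raise an alert, since the underlying memory problem needs fixing.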

Why This Happens

Redis stores all data in memory. When maxmemory is configured and the memory usage exceeds this limit, Redis must either reject new writes or evict existing keys (depending on the maxmemory-policy).

If the eviction policy is noeviction (the default), Redis rejects write commands when memory is full. No data is lost, but your application cannot write new data.

Common causes:

  • maxmemory set too low. The data set has grown beyond the configured limit.
  • maxmemory-policy set to noeviction. Redis cannot free memory by removing keys.
  • Memory leak in application. Keys are added but never expired or deleted.
  • Large keys. A few keys consume most of the memory (large lists, sets, or hashes).
  • Memory fragmentation. Redis uses more OS memory than the logical data size.
  • No TTL on keys. Keys persist forever, accumulating over time.

Fix 1: Increase maxmemory

The quickest fix if you have available server memory:

Check current memory usage:

redis-cli INFO memory
# used_memory_human:3.50G
# maxmemory_human:4.00G
# maxmemory_policy:noeviction

Increase at runtime (no restart needed):

redis-cli CONFIG SET maxmemory 8gb

In redis.conf (permanent):

maxmemory 8gb

Check available system memory first:

free -h
# Make sure Redis maxmemory leaves enough for the OS and other processes

Pro Tip: Set maxmemory to no more than 75% of available RAM. Redis needs extra memory for fork operations (RDB saves, AOF rewrites), background processes, and output buffers. If maxmemory is too close to total RAM, the OS might OOM-kill the Redis process.
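To apply that 75% rule of thumb, a small sizing helper (the function and the 0.75 default are illustrative, not a Redis API):

```python
def safe_maxmemory(total_ram_bytes: int, fraction: float = 0.75) -> int:
    """Suggest a maxmemory value that leaves headroom for forks and buffers."""
    return int(total_ram_bytes * fraction)

gib = 1024 ** 3
print(safe_maxmemory(16 * gib) // gib)  # 12  -> e.g. CONFIG SET maxmemory 12gb
```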

Fix 2: Set an Eviction Policy

Change the maxmemory-policy so Redis can free memory by removing keys:

redis-cli CONFIG SET maxmemory-policy allkeys-lru

Available policies:

Policy            Description
------            -----------
noeviction        Return error on writes (default)
allkeys-lru       Remove least recently used keys (recommended for caches)
allkeys-lfu       Remove least frequently used keys
allkeys-random    Remove random keys
volatile-lru      Remove least recently used keys that have a TTL set
volatile-lfu      Remove least frequently used keys that have a TTL set
volatile-ttl      Remove keys with shortest TTL first
volatile-random   Remove random keys that have a TTL set

For caching workloads:

# redis.conf
maxmemory-policy allkeys-lru

allkeys-lru is the best choice for caches — it removes the least recently accessed keys to make room for new ones.

For session stores:

maxmemory-policy volatile-ttl

This removes only keys with TTLs, preferring those expiring soonest. Keys without TTLs are never evicted.

Common Mistake: Using volatile-* policies when most keys do not have a TTL. These policies only evict keys with an expiration set. If no keys have TTLs, Redis behaves like noeviction and still returns OOM errors.
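Before committing to a volatile-* policy, check what fraction of your keys actually carry a TTL. A sketch of that check, run over TTL values gathered by sampling keys with SCAN and TTL (the helper is my own, not part of redis-py):

```python
def ttl_coverage(sampled_ttls: list[int]) -> float:
    """Fraction of sampled keys that have an expiration set.

    Redis TTL returns -1 for keys without expiry, otherwise seconds left.
    """
    if not sampled_ttls:
        return 0.0
    with_ttl = sum(1 for t in sampled_ttls if t >= 0)
    return with_ttl / len(sampled_ttls)

# e.g. TTLs gathered from SCAN + TTL over a few thousand random keys
sample = [3600, -1, -1, 120, -1, 86400, -1, -1]
print(f"{ttl_coverage(sample):.0%} of keys have a TTL")  # 38% of keys have a TTL
```

If coverage is low, allkeys-lru is a safer choice than any volatile-* policy.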

Fix 3: Find and Remove Large Keys

A few large keys might be consuming most of the memory:

Scan for large keys:

redis-cli --bigkeys
# Shows the largest key of each type:
# Biggest string found 'session:abc123' has 15728640 bytes
# Biggest list found 'events:queue' has 2500000 items

Get memory usage of specific keys:

redis-cli MEMORY USAGE mykey
# (integer) 15728768 — bytes used by this key

Find keys by pattern and check their size:

redis-cli --scan --pattern "cache:*" | head -20

Delete large keys safely (non-blocking):

# UNLINK is async and non-blocking (Redis 4.0+)
redis-cli UNLINK large-key-name

# DEL is synchronous and blocks Redis for large keys
# Avoid DEL on large keys in production

Trim large lists:

# Keep only the last 10000 items
redis-cli LTRIM events:queue -10000 -1

Fix 4: Set TTL on Keys

Keys without expiration accumulate forever:

Check keys without TTL:

redis-cli TTL mykey
# -1 means no expiration
# -2 means the key doesn't exist
# positive number is seconds remaining

Set TTL when creating keys:

# Python (redis-py)
redis_client.setex("cache:user:123", 3600, user_data)  # Expires in 1 hour
redis_client.set("cache:user:123", user_data, ex=3600)  # Same thing

// Node.js (ioredis)
await redis.set("cache:user:123", userData, "EX", 3600);

// Go (go-redis)
rdb.Set(ctx, "cache:user:123", userData, time.Hour)

Add TTL to existing keys:

redis-cli EXPIRE cache:user:123 3600

Find keys without TTL and set defaults:

# Bash script to add TTL to all keys matching a pattern
redis-cli --scan --pattern "cache:*" | while read key; do
    ttl=$(redis-cli TTL "$key")
    if [ "$ttl" = "-1" ]; then
        redis-cli EXPIRE "$key" 86400  # 24 hours
    fi
done
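Spawning one redis-cli process per key is slow on large keyspaces. The same job goes much faster with pipelining; a sketch that works with any redis-py-style client (the function name and the 24-hour default are my own choices):

```python
def add_default_ttl(r, pattern: str, default_ttl: int = 86400) -> int:
    """Set `default_ttl` on every key matching `pattern` that has no TTL.

    `r` is a redis-py-style client (scan_iter, pipeline, ttl, expire).
    Returns the number of keys changed.
    """
    keys = list(r.scan_iter(match=pattern, count=500))

    # Fetch all TTLs in one round trip
    pipe = r.pipeline()
    for k in keys:
        pipe.ttl(k)
    ttls = pipe.execute()

    # Set expirations only on keys that never expire (TTL == -1)
    pipe = r.pipeline()
    changed = 0
    for k, t in zip(keys, ttls):
        if t == -1:
            pipe.expire(k, default_ttl)
            changed += 1
    pipe.execute()
    return changed

# Usage with redis-py:
#   import redis
#   add_default_ttl(redis.Redis(), "cache:*")
```

For very large keyspaces, process the keys in batches of a few thousand instead of materializing the whole list at once.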

Fix 5: Optimize Data Structures

Use more memory-efficient data structures:

Use hashes instead of individual keys for small objects:

# Inefficient — one key per field
SET user:123:name "Alice"
SET user:123:email "[email protected]"
SET user:123:age "30"
# ~3 keys × overhead per key

# Efficient — one hash
HSET user:123 name "Alice" email "[email protected]" age "30"
# 1 key, compact encoding for small hashes

Tune the compact encodings for small data structures (Redis 7+ renames these settings to *-listpack-*, but the older *-ziplist-* names still work as aliases):

# redis.conf — small hashes use a compact ziplist/listpack encoding
hash-max-ziplist-entries 128
hash-max-ziplist-value 64

# Size limit of each list node (-2 = 8 KB per node)
list-max-ziplist-size -2

# Sets containing only integers use a compact intset encoding
set-max-intset-entries 512

Compress large string values:

import zlib
import redis

r = redis.Redis()

# Compress before storing
data = zlib.compress(large_json_string.encode())
r.set("data:large", data, ex=3600)

# Decompress when reading
compressed = r.get("data:large")
original = zlib.decompress(compressed).decode()

Fix 6: Fix Memory Fragmentation

Redis might use more OS memory than the logical data size:

redis-cli INFO memory
# mem_fragmentation_ratio:1.8
# A ratio > 1.5 indicates significant fragmentation
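INFO output is plain field:value text, so scripts can watch the ratio directly. A minimal parser sketch (the sample output is illustrative; redis-py's r.info() already returns a parsed dict, so this is mainly for raw redis-cli output):

```python
def parse_info(text: str) -> dict:
    """Parse Redis INFO output into a dict of string values."""
    fields = {}
    for line in text.splitlines():
        # Skip blank lines and "# Section" headers
        if line and not line.startswith("#") and ":" in line:
            key, _, value = line.partition(":")
            fields[key] = value.strip()
    return fields

sample = """# Memory
used_memory:1000000
used_memory_rss:1800000
mem_fragmentation_ratio:1.80
"""
info = parse_info(sample)
print(float(info["mem_fragmentation_ratio"]) > 1.5)  # True -> worth defragmenting
```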

Enable active defragmentation (Redis 4.0+):

# redis.conf
activedefrag yes
active-defrag-ignore-bytes 100mb
active-defrag-threshold-lower 10
active-defrag-threshold-upper 100

Or restart Redis to eliminate fragmentation entirely (configure RDB or AOF persistence first so data survives the restart).

Use jemalloc (the default allocator, but verify):

redis-cli INFO memory
# mem_allocator:jemalloc-5.3.0

Fix 7: Monitor Memory Usage

Set up proactive monitoring:

# Current memory stats
redis-cli INFO memory

# Key metrics to monitor:
# used_memory — total allocated by Redis
# used_memory_rss — total memory from OS perspective
# maxmemory — configured limit
# evicted_keys — number of keys evicted (if using eviction policy)
# mem_fragmentation_ratio — RSS / used_memory

Set up alerts:

import redis

r = redis.Redis()
info = r.info("memory")

used_mb = info["used_memory"] / 1024 / 1024
max_mb = info["maxmemory"] / 1024 / 1024  # 0 when no limit is configured

if max_mb > 0:
    usage_pct = (used_mb / max_mb) * 100
    if usage_pct > 80:
        send_alert(f"Redis memory at {usage_pct:.1f}%: {used_mb:.0f}MB / {max_mb:.0f}MB")

Fix 8: Scale Redis

When a single Redis instance is not enough:

Redis Cluster (horizontal scaling):

# Data is sharded across multiple nodes
redis-cli --cluster create node1:6379 node2:6379 node3:6379 \
    --cluster-replicas 1

Read replicas (offload reads):

# On the replica
replicaof primary-host 6379

Application-level sharding:

import zlib

# Route keys to different Redis instances. Python's built-in hash() is
# randomized per process, so use a stable hash like crc32 for routing.
def get_redis(key):
    shard = zlib.crc32(key.encode()) % len(redis_instances)
    return redis_instances[shard]

Still Not Working?

Check for client output buffer limits. Large MONITOR or Pub/Sub connections consume memory:

redis-cli CLIENT LIST
# Check for clients with large output buffers (omem)
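CLIENT LIST prints one line per connection of space-separated field=value pairs, with omem giving the output-buffer size in bytes. A sketch that flags heavy clients (the sample lines and the helper name are illustrative):

```python
def clients_over(client_list: str, omem_limit: int) -> list[tuple[str, int]]:
    """Return (addr, omem) for clients whose output buffer exceeds omem_limit."""
    heavy = []
    for line in client_list.splitlines():
        fields = dict(f.split("=", 1) for f in line.split() if "=" in f)
        omem = int(fields.get("omem", 0))
        if omem > omem_limit:
            heavy.append((fields.get("addr", "?"), omem))
    return heavy

sample = (
    "id=3 addr=10.0.0.5:52100 name= omem=0 cmd=get\n"
    "id=7 addr=10.0.0.9:52344 name= omem=33554432 cmd=subscribe\n"
)
print(clients_over(sample, 16 * 1024 * 1024))  # [('10.0.0.9:52344', 33554432)]
```

Feed it the raw output of `redis-cli CLIENT LIST`; a slow Pub/Sub subscriber showing tens of megabytes of omem is a common culprit.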

Check for Lua script memory. Lua scripts can allocate significant memory inside Redis.

Check for replica output buffers. During replication, the output buffer for replicas can grow large:

client-output-buffer-limit replica 256mb 64mb 60

For Redis connection issues, see Fix: Redis connection refused. For Redis type errors, see Fix: Redis WRONGTYPE Operation.


Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
