Fix: Python RecursionError - maximum recursion depth exceeded
Quick Answer
Resolve Python's RecursionError by converting recursion to iteration, increasing the recursion limit, fixing infinite loops, and using tail-call optimization patterns.
The Error
You run a Python script or call a function, and Python crashes with this:
```
Traceback (most recent call last):
  File "app.py", line 5, in <module>
    result = factorial(10000)
  File "app.py", line 3, in factorial
    return n * factorial(n - 1)
  File "app.py", line 3, in factorial
    return n * factorial(n - 1)
  File "app.py", line 3, in factorial
    return n * factorial(n - 1)
  [Previous line repeated 996 more times]
RecursionError: maximum recursion depth exceeded
```

Another common variation:

```
RecursionError: maximum recursion depth exceeded while calling a Python object
```

Or when comparing or printing objects:

```
RecursionError: maximum recursion depth exceeded while getting the repr of an object
```

Python is telling you that a function has called itself (directly or indirectly) too many times. Python has a built-in limit on how deep the call stack can grow: 1000 frames by default. When your recursive function exceeds that limit, Python raises RecursionError instead of letting the call stack grow until it consumes all available memory and crashes the process.
Why This Happens
Python does not optimize recursive calls the way some other languages do. Every function call adds a new frame to the call stack, and that frame stays on the stack until the function returns. When a function calls itself recursively, each call adds another frame. If the recursion goes deep enough, the stack hits Python’s limit.
There are three common reasons this happens:

1. The recursion is too deep for the data. Your recursive logic is correct, but the input is too large. A recursive factorial function works fine for `factorial(100)` but blows up for `factorial(10000)` because it needs 10,000 stack frames.
2. Infinite recursion from a bug. Your base case is wrong, missing, or unreachable, so the function never stops calling itself. This is the most common cause.
3. Unintentional recursion. You accidentally created a cycle: a `__repr__` method that triggers itself, two objects that reference each other during serialization, or a property that calls itself instead of the underlying attribute.
Python’s default recursion limit is 1000 (you can check it with sys.getrecursionlimit()). This is intentionally conservative. The actual C stack that Python runs on can usually handle more, but the limit exists as a safety net to catch infinite recursion early rather than letting it crash the interpreter.
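To see this safety net in action, you can trip the limit deliberately (a quick sketch; `runaway` is just an illustrative name):

```python
import sys

print(sys.getrecursionlimit())  # typically 1000

def runaway():
    return runaway()  # no base case, so only the limit stops it

try:
    runaway()
except RecursionError as exc:
    print(f"Caught: {exc}")
```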
If you are hitting this error in a Django or Flask application, the cause is often circular object references during serialization or model relationships that recursively load related objects. If the problem is related to circular imports rather than recursive calls, see Fix: ImportError: cannot import name from partially initialized module (circular import).
Fix 1: Add or Fix the Base Case
The most common cause of RecursionError is a missing or incorrect base case. Every recursive function needs a condition that stops the recursion.
Broken (no base case):

```python
def countdown(n):
    print(n)
    countdown(n - 1)  # never stops
```

Fixed:

```python
def countdown(n):
    if n <= 0:
        print("Done!")
        return
    print(n)
    countdown(n - 1)
```

Broken (base case is unreachable):

```python
def search(items, target, index=0):
    if index == len(items):
        return -1
    if items[index] == target:
        return index
    return search(items, target, index)  # Bug: index never increments
```

Fixed:

```python
def search(items, target, index=0):
    if index == len(items):
        return -1
    if items[index] == target:
        return index
    return search(items, target, index + 1)  # Increment the index
```

If you suspect infinite recursion but can't see the bug, add a temporary depth counter:

```python
def my_recursive_fn(data, _depth=0):
    if _depth > 50:
        raise RuntimeError(f"Unexpected recursion depth: {_depth}, data={data!r}")
    # ... your logic ...
    return my_recursive_fn(next_data, _depth=_depth + 1)
```

This gives you a clear error message showing exactly what data is causing the runaway recursion.
Fix 2: Convert Recursion to Iteration
If your recursion is correct but the input is simply too large for Python’s call stack, the most robust fix is to rewrite the function iteratively. Any recursive algorithm can be converted to an iterative one.
Recursive (breaks for large n):

```python
def factorial(n):
    if n <= 1:
        return 1
    return n * factorial(n - 1)
```

Iterative (works for any n):

```python
def factorial(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
```

Recursive Fibonacci (breaks for large n, and also extremely slow):

```python
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)
```

Iterative Fibonacci:

```python
def fibonacci(n):
    if n <= 1:
        return n
    a, b = 0, 1
    for _ in range(2, n + 1):
        a, b = b, a + b
    return b
```

The iterative versions use a constant amount of stack space regardless of input size. They are also usually faster because they avoid the overhead of function calls.
Pro Tip: When converting recursion to iteration, identify what changes between recursive calls. Those become your loop variables. The base case becomes your loop termination condition. The recursive call becomes the loop body that updates the variables.
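As a worked example of this recipe (Euclid's gcd, used here purely as an illustration): the values that change between calls become loop variables, and the base case becomes the loop exit condition.

```python
# Recursive: the changing values are (a, b); the base case is b == 0
def gcd_recursive(a, b):
    if b == 0:
        return a
    return gcd_recursive(b, a % b)

# Iterative: (a, b) become loop variables; the loop exits when b == 0
def gcd_iterative(a, b):
    while b != 0:
        a, b = b, a % b
    return a
```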
Fix 3: Use sys.setrecursionlimit to Increase the Limit
If you know your recursion is correct and finite, but it simply exceeds the default limit of 1000, you can raise the limit:
```python
import sys

sys.setrecursionlimit(5000)
```

Check the current limit first:

```python
import sys

print(sys.getrecursionlimit())  # Default: 1000
```

This is a quick fix, but it comes with caveats. The limit exists for a reason. Setting it too high risks crashing the Python interpreter with a segmentation fault if you exhaust the actual C stack. A safe upper bound depends on your platform and thread stack size, but values up to 10,000 are usually fine on modern systems. Going beyond 50,000 is risky.

If you are on a system with limited stack space (such as a thread or an embedded environment), you may need to increase the OS thread stack size as well:

```python
import sys
import threading

sys.setrecursionlimit(10000)
threading.stack_size(16 * 1024 * 1024)  # 16 MB stack for threads created after this call
```

Use sys.setrecursionlimit as a temporary measure or when you have a well-understood algorithm with a known maximum depth. For anything else, convert to iteration.
Fix 4: Add Memoization with functools.lru_cache
If your recursive function recalculates the same inputs many times (like naive Fibonacci), memoization can both fix the recursion depth issue and dramatically improve performance.
```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)
```

Without lru_cache, fibonacci(100) would make an astronomical number of recursive calls (on the order of 10^21) and never finish. With lru_cache, each unique value of n is computed only once and then cached, so fibonacci(100) completes after only about 100 cache misses.
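You can verify the call counts yourself with cache_info() (a quick check using the decorated function above):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(30))           # 832040
print(fibonacci.cache_info())  # misses=31: each n from 0 to 30 computed once
```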
However, memoization alone does not eliminate the recursion depth problem for large inputs. fibonacci(5000) will still hit the recursion limit because the first call must recurse all the way down to fibonacci(0) before any caching happens. To work around this, build up the cache iteratively first:
```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

# Warm the cache in small steps to avoid hitting the recursion limit
for i in range(0, 5001, 500):
    fibonacci(i)

# Now this works without deep recursion
print(fibonacci(5000))
```

By calling fibonacci with incrementally larger values, each call only needs to recurse a few hundred levels before hitting a cached result.
Fix 5: Fix Infinite Recursion in __repr__ and __str__
A common source of RecursionError is a __repr__ or __str__ method that accidentally triggers itself, usually through circular object references.
Broken:

```python
class Node:
    def __init__(self, value, parent=None):
        self.value = value
        self.parent = parent
        self.children = []

    def __repr__(self):
        return f"Node({self.value}, children={self.children})"
```

If the object graph contains a cycle (for example, a node that ends up in its own descendants' children list), calling repr() on the parent prints its children, each child prints its own children, the chain eventually reaches the original node again, and you get infinite recursion.
Fixed:

```python
class Node:
    def __init__(self, value, parent=None):
        self.value = value
        self.parent = parent
        self.children = []

    def __repr__(self):
        # Don't recursively repr children -- just show their count or values
        child_values = [c.value for c in self.children]
        return f"Node({self.value}, children={child_values})"
```

The same problem occurs with __str__, __eq__, and __hash__ methods. Any method that Python calls implicitly during printing, comparison, or hashing can accidentally trigger recursion if the object graph has cycles.
If you are using dataclasses, be aware that the auto-generated __repr__ will recursively repr all fields:
```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Employee:
    name: str
    manager: Optional["Employee"] = None
    reports: list["Employee"] = field(default_factory=list, repr=False)  # repr=False breaks the cycle
```

Setting repr=False on the field that creates the cycle prevents the auto-generated __repr__ from following it.
Fix 6: Use a Stack-Based Approach for Tree and Graph Traversal
Recursive tree and graph traversal is elegant but breaks on deep or wide structures. Convert to an explicit stack.
Recursive DFS (breaks on deep trees):

```python
def find_value(node, target):
    if node is None:
        return False
    if node.value == target:
        return True
    return find_value(node.left, target) or find_value(node.right, target)
```

Stack-based DFS (handles any depth):

```python
def find_value(root, target):
    stack = [root]
    while stack:
        node = stack.pop()
        if node is None:
            continue
        if node.value == target:
            return True
        stack.append(node.left)
        stack.append(node.right)
    return False
```

For graph traversal where nodes may form cycles, always track visited nodes:
```python
def traverse_graph(start):
    visited = set()
    stack = [start]
    result = []
    while stack:
        node = stack.pop()
        if node.id in visited:
            continue
        visited.add(node.id)
        result.append(node)
        for neighbor in node.neighbors:
            if neighbor.id not in visited:
                stack.append(neighbor)
    return result
```

The same pattern applies to BFS: just replace the stack (a list with pop()) with a queue (collections.deque with popleft()):

```python
from collections import deque

def bfs(root):
    queue = deque([root])
    visited = set()
    while queue:
        node = queue.popleft()
        if node.id in visited:
            continue
        visited.add(node.id)
        for child in node.children:
            queue.append(child)
```

If you are working with deeply nested data structures like JSON or XML, this same stack-based approach applies. Do not recurse into nested dictionaries or lists; iterate with an explicit stack.
Fix 7: Handle Recursive Data Structures (Nested Dicts, JSON)
Processing deeply nested JSON, configuration files, or recursive data structures is a common trigger for RecursionError. Use an explicit stack to flatten or transform them.
Recursive (breaks on deeply nested data):

```python
def flatten(data, prefix=""):
    result = {}
    for key, value in data.items():
        full_key = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            result.update(flatten(value, full_key))
        else:
            result[full_key] = value
    return result
```

Iterative (handles any nesting depth):

```python
def flatten(data):
    result = {}
    stack = [("", data)]
    while stack:
        prefix, current = stack.pop()
        for key, value in current.items():
            full_key = f"{prefix}.{key}" if prefix else key
            if isinstance(value, dict):
                stack.append((full_key, value))
            else:
                result[full_key] = value
    return result
```

Common Mistake: Increasing sys.setrecursionlimit to handle deeply nested data is a band-aid that will eventually fail on deeper input. If you are processing user-supplied data (like JSON from an API), you have no control over the nesting depth. Always use iteration for user-controlled data structures.
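If you need a hard cap on user-supplied payloads, measure the nesting depth iteratively and reject anything too deep before processing. This is a sketch; max_depth is a hypothetical helper, not part of the json module:

```python
def max_depth(obj):
    """Iteratively measure the nesting depth of dict/list data."""
    deepest = 0
    stack = [(obj, 1)]
    while stack:
        current, depth = stack.pop()
        deepest = max(deepest, depth)
        if isinstance(current, dict):
            stack.extend((value, depth + 1) for value in current.values())
        elif isinstance(current, list):
            stack.extend((item, depth + 1) for item in current)
    return deepest

payload = {"a": {"b": {"c": 1}}}
if max_depth(payload) > 100:
    raise ValueError("Payload nested too deeply")
```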
Fix 8: Fix Django and Flask Model Recursion
Web frameworks like Django and Flask commonly trigger RecursionError when models with relationships serialize themselves or when related objects load each other in an infinite loop.
Django model with circular relationship:

```python
# models.py
from django.db import models

class Category(models.Model):
    name = models.CharField(max_length=100)
    parent = models.ForeignKey("self", null=True, blank=True, on_delete=models.CASCADE,
                               related_name="children")

    def to_dict(self):
        return {
            "name": self.name,
            "parent": self.parent.to_dict() if self.parent else None,
            "children": [child.to_dict() for child in self.children.all()],
        }
```

Because to_dict() walks both up (parent) and down (children), any category with a parent recurses forever: the parent serializes its children, those children include the original category, and the original category serializes its parent again. Even without that cycle, a very deep category hierarchy will hit the limit.
Fixed with depth limiting:

```python
class Category(models.Model):
    name = models.CharField(max_length=100)
    parent = models.ForeignKey("self", null=True, blank=True, on_delete=models.CASCADE,
                               related_name="children")

    def to_dict(self, max_depth=5):
        data = {"name": self.name}
        if max_depth > 0:
            data["children"] = [
                child.to_dict(max_depth=max_depth - 1)
                for child in self.children.all()
            ]
        else:
            data["children"] = []
        return data
```

For Django REST Framework serializers that reference each other, use depth limiting or switch to explicit serializer fields. If your Django app is crashing with database errors alongside recursion issues, check that your migrations and tables are in order.
Flask with SQLAlchemy circular serialization:

```python
# Broken: User.to_dict calls Post.to_dict, which calls User.to_dict
class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(100))
    posts = db.relationship("Post", backref="author")

    def to_dict(self):
        return {
            "id": self.id,
            "name": self.name,
            "posts": [post.to_dict() for post in self.posts],
        }

class Post(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(200))
    author_id = db.Column(db.Integer, db.ForeignKey("user.id"))

    def to_dict(self):
        return {
            "id": self.id,
            "title": self.title,
            "author": self.author.to_dict(),  # Recursion: calls User.to_dict again
        }
```

Fixed by controlling what gets serialized:
```python
class User(db.Model):
    # ... fields ...

    def to_dict(self, include_posts=True):
        data = {"id": self.id, "name": self.name}
        if include_posts:
            data["posts"] = [post.to_dict(include_author=False) for post in self.posts]
        return data

class Post(db.Model):
    # ... fields ...

    def to_dict(self, include_author=True):
        data = {"id": self.id, "title": self.title}
        if include_author:
            data["author"] = self.author.to_dict(include_posts=False)
        return data
```

By passing flags that prevent the reverse relationship from being serialized, you break the cycle.
Fix 9: Use Tail-Call Optimization Patterns
Python does not support automatic tail-call optimization, but you can simulate it by converting tail-recursive functions into a trampoline pattern.
A tail-recursive function is one where the recursive call is the very last operation. Instead of actually recursing, the trampoline pattern returns a thunk (a function that performs the next step) and a loop drives the execution.
Tail-recursive (still hits the limit in Python):

```python
def sum_to(n, accumulator=0):
    if n <= 0:
        return accumulator
    return sum_to(n - 1, accumulator + n)
```

Trampoline pattern (no recursion limit):

```python
class TailCall:
    def __init__(self, fn, *args):
        self.fn = fn
        self.args = args

def trampoline(fn, *args):
    result = fn(*args)
    while isinstance(result, TailCall):
        result = result.fn(*result.args)
    return result

def sum_to(n, accumulator=0):
    if n <= 0:
        return accumulator
    return TailCall(sum_to, n - 1, accumulator + n)

# Usage:
result = trampoline(sum_to, 1000000)
```

The trampoline turns recursion into a loop. Each "recursive call" returns a TailCall object instead of actually calling the function. The trampoline function loops, calling each successive thunk until a real result is returned. This uses constant stack space.
This pattern is more complex than a simple loop, so prefer straightforward iteration when possible. The trampoline is most useful when the recursive logic is complex and hard to convert to a simple for or while loop.
Fix 10: Handle Recursive Serialization (JSON, Pickle)
Serializing objects with circular references is a classic RecursionError trigger: repr and hand-written to_dict or encoder helpers follow the cycle forever. json.dumps is a special case: by default it detects plain dict and list cycles itself and raises ValueError: Circular reference detected; you only see RecursionError if that check is disabled.

Broken:

```python
import json

a = {"name": "a"}
b = {"name": "b", "ref": a}
a["ref"] = b  # Circular reference

json.dumps(a)                        # ValueError: Circular reference detected
json.dumps(a, check_circular=False)  # RecursionError
```
Fix it by breaking the cycle before serializing. A custom encoder can guard custom objects, but note that JSONEncoder.default() only runs for types json cannot already encode, so it does not catch cycles made of plain dicts and lists:

```python
import json

class SafeEncoder(json.JSONEncoder):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._seen = set()

    def default(self, obj):
        # Only called for non-JSON-native types, not for plain dicts/lists
        obj_id = id(obj)
        if obj_id in self._seen:
            return f"<circular ref: {type(obj).__name__}>"
        self._seen.add(obj_id)
        return super().default(obj)
```

For dicts and lists, strip the cycles before passing the data to json:

```python
import json

def remove_circular_refs(obj, _seen=None):
    if _seen is None:
        _seen = set()
    if id(obj) in _seen:
        return "<circular reference>"
    if isinstance(obj, dict):
        _seen.add(id(obj))
        result = {k: remove_circular_refs(v, _seen) for k, v in obj.items()}
        _seen.discard(id(obj))  # allow the same object to reappear on sibling branches
        return result
    if isinstance(obj, (list, tuple)):
        _seen.add(id(obj))
        result = [remove_circular_refs(item, _seen) for item in obj]
        _seen.discard(id(obj))
        return result
    return obj

a = {"name": "a"}
b = {"name": "b", "ref": a}
a["ref"] = b

clean = remove_circular_refs(a)
print(json.dumps(clean, indent=2))
```

For pickle, circular references are handled automatically. But if you are implementing custom __getstate__ or __reduce__ methods, make sure they do not follow circular references.
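A quick way to confirm that pickle really does preserve cycles (it records already-pickled objects in a memo table):

```python
import pickle

a = {"name": "a"}
b = {"name": "b", "ref": a}
a["ref"] = b  # circular reference

restored = pickle.loads(pickle.dumps(a))
print(restored["name"])                    # a
print(restored["ref"]["ref"] is restored)  # True: the cycle survives the round trip
```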
If your serialization issues involve objects that Python cannot find a module for, you might also want to check that your module paths are set up correctly.
Fix 11: Debug Recursion Depth
When you cannot figure out why you are hitting the recursion limit, these debugging techniques help.
Check the current recursion depth at any point:

```python
import sys

def problematic_function(data):
    # Walk the frame chain with f_back to count the current stack depth
    frame = sys._getframe()
    depth = 0
    while frame is not None:
        depth += 1
        frame = frame.f_back
    print(f"Current stack depth: {depth}")
    # ... rest of the function
```

Use a decorator to monitor recursion depth:
```python
import functools

def track_recursion(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        wrapper.depth += 1
        if wrapper.depth > wrapper.max_depth:
            wrapper.max_depth = wrapper.depth
        try:
            return fn(*args, **kwargs)
        finally:
            wrapper.depth -= 1
    wrapper.depth = 0
    wrapper.max_depth = 0
    return wrapper

@track_recursion
def my_recursive_fn(n):
    if n <= 0:
        return 0
    return my_recursive_fn(n - 1)

my_recursive_fn(500)
print(f"Maximum recursion depth reached: {my_recursive_fn.max_depth}")
```

Inspect the traceback to find the cycle:
When you get the RecursionError, Python truncates the traceback to avoid printing thousands of lines. Look at the repeated lines — they tell you exactly which functions are calling each other in the cycle. If you see two or three functions alternating, you have an indirect recursion cycle.
You can also print the full traceback programmatically:
```python
import traceback

try:
    problematic_function()
except RecursionError:
    traceback.print_exc(limit=-20)  # Negative limit: print only the innermost 20 frames
```

If the error occurs during the import phase of your program rather than during function execution, the problem may be a circular import rather than actual recursion.
Fix 12: Avoid Recursion in Properties and Descriptors
A subtle source of RecursionError is a property that accidentally calls itself.
Broken:

```python
class User:
    def __init__(self, name):
        self.name = name  # This calls the setter, which calls the setter...

    @property
    def name(self):
        return self.name  # Calls itself -- infinite recursion

    @name.setter
    def name(self, value):
        self.name = value  # Calls itself -- infinite recursion
```

The problem is that self.name inside the getter calls the getter again because name is a property. The same happens in the setter.
Fixed with a private attribute:

```python
class User:
    def __init__(self, name):
        self.name = name  # Calls the setter, which sets _name

    @property
    def name(self):
        return self._name  # Access the underlying attribute

    @name.setter
    def name(self, value):
        if not value:
            raise ValueError("Name cannot be empty")
        self._name = value  # Set the underlying attribute
```

The convention is to use a leading underscore (_name) for the backing attribute. The property name provides the public interface, and _name stores the actual value.
The same issue occurs with __getattr__ and __setattr__:

```python
# Broken: self.defaults inside __getattr__ triggers __getattr__ again
# whenever "defaults" is not a regular instance attribute
class Config:
    def __getattr__(self, name):
        return self.defaults[name]

# Fixed: go through __dict__ to bypass the __getattr__ hook
class Config:
    def __init__(self):
        self.__dict__["defaults"] = {"debug": False}

    def __getattr__(self, name):
        return self.__dict__["defaults"].get(name)
```

If your Python code has indentation errors that place code inside the wrong block, it can sometimes create unexpected recursive calls. Make sure your function structure matches your intent.
Still Not Working?
Check your actual recursion depth
Before increasing the limit blindly, figure out how deep your recursion actually needs to go:
```python
import sys

print(f"Current limit: {sys.getrecursionlimit()}")

# Estimate the needed depth for your data
print(f"Data size: {len(your_data)}")
print(f"Tree depth: {calculate_depth(your_tree)}")
```

If the needed depth is in the thousands, converting to iteration is the right call. If it is just slightly over 1000, raising the limit with sys.setrecursionlimit is reasonable.
Recursion in threads
Threads in Python have a smaller default stack size. If your recursion works in the main thread but fails in a thread, you need to increase the thread stack size:
```python
import threading

threading.stack_size(8 * 1024 * 1024)  # 8 MB, set before creating the thread
thread = threading.Thread(target=recursive_function, args=(data,))
thread.start()
```

Third-party library causing recursion
If the RecursionError traceback points to a library you did not write, the problem is usually in how you configured or used the library. Common culprits:
- ORM serialization: Django or SQLAlchemy models with bidirectional relationships being serialized (see Fix 8).
- Marshmallow or Pydantic schemas: Nested schemas that reference each other need string references (Nested("SchemaName")) or explicit exclude lists.
- Beautiful Soup or lxml: Parsing malformed HTML with extremely deep nesting. Use html.parser or set a recursion limit in the parser.
The nuclear option: @functools.cache + cache warming
If you absolutely must keep a recursive implementation for readability, combine caching with incremental warm-up:
```python
import sys
from functools import cache

sys.setrecursionlimit(2000)

@cache
def expensive_recursive_fn(n):
    if n <= 1:
        return base_case(n)
    return combine(expensive_recursive_fn(n - 1), expensive_recursive_fn(n - 2))

# Warm the cache in steps smaller than the recursion limit
step = 500
for i in range(0, target_n + 1, step):
    expensive_recursive_fn(i)

result = expensive_recursive_fn(target_n)
```

This is a pragmatic approach when refactoring the recursion into iteration would obscure the algorithm. But for production code that handles variable or user-supplied input sizes, iteration is always more reliable.
Related: If Python itself is not running, check that your Python installation is set up correctly. If modules are failing to import before your recursive code can even run, see Fix: ModuleNotFoundError: No module named in Python.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.