Fix: Celery Task Not Received or Not Executing
Quick Answer
Fix Celery tasks not being received or executed by resolving broker connections, autodiscovery issues, task name mismatches, and worker configuration.
The Error
You send a Celery task, but nothing happens. The task either never appears in the worker output, or you see:
```
Received unregistered task of type 'myapp.tasks.process_order'
```

Or the task is sent successfully but the worker shows no activity:

```python
result = process_order.delay(order_id=123)
print(result.id)  # Returns a task ID
# But the worker never picks it up
```

The worker just sits idle, or the task gets silently dropped.
Why This Happens
Celery has multiple components that must all be configured correctly: the broker (Redis or RabbitMQ) that holds messages, the worker that processes them, and the application that sends them. A failure at any point in this chain causes tasks to be lost or ignored.
Common causes include the worker not connecting to the same broker as the sender, task names not matching between sender and worker, the autodiscovery mechanism not finding your task modules, or the worker listening on a different queue.
Fix 1: Verify Broker Connection
Both the sender and worker must connect to the same broker. Check your Celery configuration:
```python
# celery.py or config
app = Celery('myproject')
app.config_from_object('django.conf:settings', namespace='CELERY')
```

```python
# settings.py
CELERY_BROKER_URL = 'redis://localhost:6379/0'
```

Test the broker connection:

```python
from celery import Celery

app = Celery('test', broker='redis://localhost:6379/0')
print(app.connection().ensure_connection(max_retries=3))
```

If using Redis, verify it's running:

```bash
redis-cli ping  # Should return PONG
```

Common mistakes:
- Using `redis://localhost` in the app but the worker is on a different machine
- Using different Redis databases (`/0` vs `/1`)
- The broker URL is set in environment variables that aren't loaded in the worker's environment
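One way to avoid the first and third mistakes is to build the broker URL in a single place that both the web process and the worker import. A minimal sketch — the helper name and the fallback value are assumptions, not Celery API:

```python
import os

# Hypothetical helper: a single source of truth for the broker URL, so the
# sender and the worker cannot silently point at different brokers or databases.
def get_broker_url(default="redis://localhost:6379/0"):
    # Reads CELERY_BROKER_URL from the environment, falling back to a local
    # Redis on database 0. Adjust the default to match your deployment.
    return os.environ.get("CELERY_BROKER_URL", default)

# Both sides then build their app from the same helper:
# app = Celery("myproject", broker=get_broker_url())
```

Because both processes call the same function, a missing environment variable makes both sides fall back to the same default, which is far easier to spot than a silent split between two hard-coded URLs.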
Fix 2: Fix Task Autodiscovery
Celery’s autodiscover_tasks() finds tasks automatically, but it needs to know where to look:
```python
# celery.py
app = Celery('myproject')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()  # Searches INSTALLED_APPS for tasks.py
```

For Django, this searches each app in INSTALLED_APPS for a tasks.py file. If your tasks are in a different file, they won't be discovered:

```
# This is found automatically
myapp/tasks.py

# This is NOT found automatically
myapp/celery_tasks.py
myapp/workers/tasks.py
```

To include non-standard task locations:

```python
app.autodiscover_tasks(['myapp.workers', 'myapp.background'])
```

Or register tasks explicitly:

```python
# Include specific modules
app.conf.include = ['myapp.workers.tasks', 'myapp.background.jobs']
```

Pro Tip: Run `celery -A myproject inspect registered` to see all tasks the worker knows about. If your task isn't in the list, it hasn't been discovered. Compare this list with what your sender expects.
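To make that comparison mechanical, you can paste the worker's list into a small diff helper. The function and both example lists below are illustrative, not part of Celery:

```python
# Hypothetical helper: names your sender dispatches vs. names the worker
# actually registered (e.g. copied from `celery inspect registered` output).
def missing_tasks(expected, registered):
    # Tasks the sender will try to call that no worker knows about.
    return sorted(set(expected) - set(registered))

expected = ["myapp.tasks.process_order", "myapp.tasks.send_email"]
registered = ["tasks.process_order", "myapp.tasks.send_email"]
missing_tasks(expected, registered)
# A non-empty result pinpoints the mismatch; here the module-path
# difference for process_order would show up immediately.
```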
Fix 3: Fix Task Name Mismatches
Celery generates task names based on the module path. If the sender and worker import tasks differently, the names won’t match:
```python
# If the task is defined in myapp/tasks.py
@app.task
def process_order(order_id):
    pass

# Celery names it: 'myapp.tasks.process_order'
```

Problems arise when:

```python
# Sender uses absolute import
from myapp.tasks import process_order
# Task name: 'myapp.tasks.process_order'

# But worker's autodiscovery finds it as
# Task name: 'tasks.process_order'
```

Set an explicit name to avoid ambiguity:

```python
@app.task(name='process_order')
def process_order(order_id):
    pass
```

Or use send_task with the exact name:

```python
app.send_task('myapp.tasks.process_order', args=[123])
```

Check registered task names on the worker:

```bash
celery -A myproject inspect registered
```

Fix 4: Verify the Worker Is Running
This sounds obvious, but it’s a common issue. The worker must be running and connected:
```bash
# Start the worker
celery -A myproject worker --loglevel=info

# Check if it's running
celery -A myproject inspect ping
```

The worker output should show:

```
[config]
.> app:         myproject:0x...
.> transport:   redis://localhost:6379/0
.> results:     redis://localhost:6379/0
.> concurrency: 4 (prefork)

[queues]
.> celery           exchange=celery(direct) key=celery
```

If you see no output or connection errors, the worker can't reach the broker. Check the broker URL and network connectivity.
For production, run the worker as a systemd service:
```ini
# /etc/systemd/system/celery.service
[Unit]
Description=Celery Worker
After=network.target

[Service]
Type=forking
User=celery
WorkingDirectory=/opt/myproject
# Note the doubled %% below: systemd expands single % specifiers itself,
# so they must be escaped for Celery to receive %n%I in the logfile path.
ExecStart=/opt/myproject/venv/bin/celery -A myproject multi start worker1 \
    --loglevel=info --logfile=/var/log/celery/%%n%%I.log
ExecStop=/opt/myproject/venv/bin/celery multi stopwait worker1

[Install]
WantedBy=multi-user.target
```

Fix 5: Fix Queue Routing
If the task is routed to a specific queue but the worker listens on the default queue, the task is never picked up:
```python
# Task routed to 'priority' queue
@app.task(queue='priority')
def urgent_task():
    pass

# Or via routing configuration
app.conf.task_routes = {
    'myapp.tasks.urgent_task': {'queue': 'priority'},
    'myapp.tasks.*': {'queue': 'default'},
}
```

The worker must explicitly consume from that queue:
```bash
# Only listens to the 'celery' (default) queue
celery -A myproject worker

# Listen to specific queues
celery -A myproject worker -Q celery,priority

# Listen to all queues
celery -A myproject worker -Q celery,priority,default
```

Check which queues have pending messages:
```bash
# Redis
redis-cli llen celery    # Default queue
redis-cli llen priority  # Custom queue

# RabbitMQ
rabbitmqctl list_queues name messages
```

Common Mistake: Sending a task to a queue that no worker consumes from. The messages pile up in the broker but are never processed. Always verify that at least one worker is listening on every queue you route tasks to.
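The pending-message check can also be scripted. A minimal sketch assuming a Redis broker, where each Celery queue is a Redis list; `queue_depths` and its arguments are illustrative, and with redis-py you would pass `redis.Redis.from_url("redis://localhost:6379/0")` as the client:

```python
# Hypothetical helper: report pending-message counts for the given queues.
# Accepts any Redis-like client exposing llen(), e.g. a redis-py client.
def queue_depths(client, queues=("celery", "priority")):
    return {q: client.llen(q) for q in queues}

# depths = queue_depths(redis.Redis.from_url("redis://localhost:6379/0"))
# A steadily growing count on a queue no worker consumes is the smoking gun.
```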
Fix 6: Fix Serialization Issues
Celery serializes task arguments before sending them to the broker. If the arguments can’t be serialized, the task fails silently or raises an error:
```python
from datetime import datetime

# This fails with the JSON serializer (default in Celery 4+)
@app.task
def process(data):
    pass

process.delay(data=datetime.now())  # datetime isn't JSON serializable
```

Fix by converting arguments to serializable types:

```python
process.delay(data=datetime.now().isoformat())
```

Or change the serializer:
```python
app.conf.task_serializer = 'pickle'  # Handles more types
app.conf.accept_content = ['pickle', 'json']
```

Warning: Using pickle is a security risk if your broker is accessible to untrusted clients. Stick with JSON and convert your data to serializable formats.
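The safest pattern is to convert on the way in and restore on the way out. A sketch of the round trip, with plain functions standing in for the sender and the task body (the names are illustrative):

```python
import json
from datetime import datetime, timezone

def to_payload(when):
    # Sender side: serialize the datetime to an ISO-8601 string.
    return when.isoformat()

def process(data):
    # First line of the (hypothetical) task body: restore the real type.
    return datetime.fromisoformat(data)

sent = to_payload(datetime(2024, 1, 15, 9, 30, tzinfo=timezone.utc))
json.dumps({"data": sent})  # JSON-safe, so the default serializer accepts it
```

In real code the conversion happens at the `.delay()` call site and the parsing at the top of the task function; the broker only ever sees the string.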
Check serialization compatibility:
```python
import json

try:
    json.dumps(your_task_arguments)
except TypeError as e:
    print(f"Not JSON serializable: {e}")
```

Fix 7: Disable task_always_eager for Production
task_always_eager runs tasks synchronously in the current process, bypassing the broker entirely. It’s useful for testing but causes confusion in production:
```python
# settings.py
CELERY_TASK_ALWAYS_EAGER = True  # Tasks run immediately, no worker needed
```
```python
# settings.py
CELERY_TASK_ALWAYS_EAGER = False  # Default
CELERY_TASK_EAGER_PROPAGATES = False
```

For testing, use task_always_eager only in test settings:
```python
# test_settings.py
CELERY_TASK_ALWAYS_EAGER = True
CELERY_TASK_EAGER_PROPAGATES = True  # Propagate exceptions in tests
```

Fix 8: Tune Concurrency and Prefetch Settings
Workers may appear to not process tasks if they’re stuck on long-running tasks and the prefetch limit prevents new tasks from being fetched:
```python
# Worker takes all available tasks but processes slowly
app.conf.worker_prefetch_multiplier = 4  # Default: fetches 4 × concurrency tasks
```

For long-running tasks, reduce prefetch:

```python
app.conf.worker_prefetch_multiplier = 1  # Fetch one task at a time per worker process
```

Or for tasks with unpredictable duration:

```bash
celery -A myproject worker --concurrency=4 -Ofair
```

The -Ofair flag distributes tasks to worker processes that are actually free, rather than prefetching to all of them equally; since Celery 4.0 it is the default scheduling strategy for the prefork pool.
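For long-running tasks, reduced prefetch is often combined with late acknowledgement, so a task in flight on a crashed worker is redelivered instead of lost. A possible configuration sketch — whether it suits you depends on whether your tasks are safe to run twice:

```python
# Possible settings for long-running tasks (assumes the prefork pool):
app.conf.worker_prefetch_multiplier = 1  # don't hoard tasks in idle processes
app.conf.task_acks_late = True           # ack after completion; redeliver on worker crash
```

On a Redis broker, acks_late interacts with the visibility timeout: a task that runs longer than the timeout gets redelivered while still running, so raise the timeout accordingly.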
Check worker status:
celery -A myproject inspect active # Currently executing
celery -A myproject inspect reserved # Prefetched, waiting
celery -A myproject inspect stats # General statisticsStill Not Working?
- Check the result backend. If you're using `result.get()` and the result backend isn't configured, the call hangs forever. Set CELERY_RESULT_BACKEND or use `result.get(timeout=10)`.
- Look for import errors. If a task module has an import error, the worker skips it silently. Check worker startup output for import warnings.
- Verify environment variables. If your tasks depend on env vars, ensure they're available in the worker's environment, not just the web process.
- Check for task rate limits. `@app.task(rate_limit='10/m')` limits execution to 10 per minute. Tasks beyond the limit are queued but delayed.
- Monitor with Flower. Install flower (`pip install flower`) and run `celery -A myproject flower` for a web dashboard showing task status, worker health, and queue depths.
- Check broker visibility timeout. For Redis, if a task takes longer than the visibility timeout (1 hour by default), it is redelivered and may run twice. Set `broker_transport_options = {'visibility_timeout': 43200}` for long tasks.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.