Fix: Celery Beat Not Working — Scheduled Tasks Not Running or Beat Not Starting
Quick Answer
How to fix Celery Beat issues — beat scheduler not starting, tasks not executing on schedule, timezone configuration, database scheduler, and running beat with workers.
The Problem
Celery Beat is running but scheduled tasks never execute:
celery -A myapp beat -l info
# [2026-03-26 10:00:00,000: INFO/MainProcess] beat: Starting...
# [2026-03-26 10:00:00,001: INFO/MainProcess] Scheduler: Sending due task send-weekly-report (myapp.tasks.send_weekly_report)
# But the task never appears in worker logs

Or Beat starts but immediately exits:
ERROR/MainProcess] beat: ERROR: Another beat is already running!
ValueError: not enough values to unpack

Or tasks run at the wrong time despite setting a schedule:
CELERY_BEAT_SCHEDULE = {
'daily-report': {
'task': 'myapp.tasks.send_report',
'schedule': crontab(hour=9, minute=0),
# Expected: 9:00 AM — but runs at a different time
},
}

Why This Happens
Celery Beat is a scheduler that sends tasks to the queue — it doesn’t execute them itself. Common failure modes:
- No worker running — Beat sends tasks to the broker queue, but if no Celery worker is consuming that queue, tasks pile up unexecuted.
- Beat and worker running in the same process — running celery worker -B starts both, but this is only recommended for development. In production, they should be separate processes.
- celerybeat-schedule file conflict — Beat stores its schedule state in a file (celerybeat-schedule). If two Beat processes start, the second fails with "Another beat is already running."
- Timezone mismatch — if CELERY_TIMEZONE doesn't match the server's system timezone, crontab schedules run at unexpected times.
- Task not registered — if the task module isn't imported when Beat starts, the task won't be found. Celery must discover the task before Beat can schedule it.
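The first failure mode above, Beat publishing tasks that nothing consumes, can be illustrated with a toy stdlib model (no Celery involved; `broker`, `beat`, and `worker` here are illustrative stand-ins, not Celery APIs):

```python
import queue
import threading

# Toy model of the Beat/worker split: the "beat" side only enqueues task
# names; nothing runs until a separate "worker" consumes the queue.
broker = queue.Queue()
executed = []

def beat(n):
    # Beat's only job: publish due tasks to the broker queue.
    for _ in range(n):
        broker.put("send_daily_report")

def worker():
    # The worker's job: consume and execute. Without this running,
    # the queue just grows -- the "scheduled but never runs" symptom.
    while True:
        task = broker.get()
        if task is None:
            break
        executed.append(task)

beat(3)
assert broker.qsize() == 3 and executed == []  # enqueued, nothing executed yet

t = threading.Thread(target=worker)
t.start()
beat(2)
broker.put(None)  # shut the toy worker down
t.join()
print(len(executed))  # 5: tasks only execute once a consumer is running
```

The real system behaves the same way: Beat alone fills the broker; only a running worker drains it.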
Fix 1: Run Beat and Worker as Separate Processes
Beat schedules tasks; workers execute them. Both must be running:
# Terminal 1 — start the worker (executes tasks)
celery -A myapp worker -l info
# Terminal 2 — start beat (schedules tasks)
celery -A myapp beat -l info
# Development shortcut — run both in one process (not for production)
celery -A myapp worker --beat -l info
# Or:
celery -A myapp worker -B -l info

Production setup with Supervisor:
; /etc/supervisor/conf.d/celery.conf
[program:celery-worker]
command=/venv/bin/celery -A myapp worker -l info --concurrency=4
directory=/app
user=celery
autostart=true
autorestart=true
stderr_logfile=/var/log/celery/worker.err.log
stdout_logfile=/var/log/celery/worker.out.log
[program:celery-beat]
command=/venv/bin/celery -A myapp beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler
directory=/app
user=celery
autostart=true
autorestart=true
stderr_logfile=/var/log/celery/beat.err.log
stdout_logfile=/var/log/celery/beat.out.log

Docker Compose:
# docker-compose.yml
services:
redis:
image: redis:7-alpine
worker:
build: .
command: celery -A myapp worker -l info
environment:
- CELERY_BROKER_URL=redis://redis:6379/0
depends_on:
- redis
beat:
build: .
command: celery -A myapp beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler
environment:
- CELERY_BROKER_URL=redis://redis:6379/0
depends_on:
- redis
- worker

Warning: Only run ONE Beat instance at a time. Running multiple Beat instances causes duplicate task execution and the "Another beat is already running" error.
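The "Another beat is already running" error is usually a pidfile left behind by an unclean shutdown rather than a genuinely live second instance. A minimal pre-start check, sketched with the stdlib (the `celerybeat.pid` filename is Beat's default `--pidfile`; the function itself is a hypothetical helper, not part of Celery):

```python
import os

def clear_stale_pidfile(path="celerybeat.pid"):
    """Remove a Beat pidfile if the process it names is no longer alive.

    Beat refuses to start when it finds a pidfile, even a stale one
    left behind by an unclean shutdown (e.g. a killed container).
    """
    try:
        with open(path) as f:
            pid = int(f.read().strip())
    except (FileNotFoundError, ValueError):
        return False  # no pidfile, or unreadable contents: nothing to do
    try:
        os.kill(pid, 0)  # signal 0 checks existence; it sends nothing
    except ProcessLookupError:
        os.remove(path)  # no such process: the pidfile is stale
        return True
    except PermissionError:
        pass  # a process with that pid exists, owned by another user
    return False  # a live process holds the pidfile; do NOT start another Beat
```

If the pid in the file belongs to a live Beat, the right move is to stop that instance, not to delete the file.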
Fix 2: Configure CELERY_BEAT_SCHEDULE Correctly
Define the schedule in your Celery config:
# celery.py
from celery import Celery
from celery.schedules import crontab
app = Celery('myapp')
app.config_from_object('django.conf:settings', namespace='CELERY')
# Load tasks from all registered apps
app.autodiscover_tasks()
# Define the beat schedule
app.conf.beat_schedule = {
# Run every 30 seconds
'check-new-orders': {
'task': 'orders.tasks.check_new_orders',
'schedule': 30.0,
},
# Run every 5 minutes
'sync-inventory': {
'task': 'inventory.tasks.sync_inventory',
'schedule': crontab(minute='*/5'),
},
# Run daily at 9:00 AM
'send-daily-report': {
'task': 'reports.tasks.send_daily_report',
'schedule': crontab(hour=9, minute=0),
'args': (),
'kwargs': {'format': 'pdf'},
},
# Run on weekdays at 8:00 AM
'morning-standup': {
'task': 'notifications.tasks.send_standup',
'schedule': crontab(hour=8, minute=0, day_of_week='mon-fri'),
},
# Run on the 1st of every month at midnight
'monthly-billing': {
'task': 'billing.tasks.process_monthly_billing',
'schedule': crontab(day_of_month=1, hour=0, minute=0),
},
}

crontab parameter reference:
from celery.schedules import crontab
crontab(minute=0, hour=0) # Midnight daily
crontab(minute='*/15') # Every 15 minutes
crontab(hour='*/2', minute=0) # Every 2 hours on the hour
crontab(minute=0, hour=0, day_of_week='monday')  # Midnight every Monday
crontab(minute=0, day_of_week='0,6')             # Hourly on Saturday and Sunday (0 = Sunday)
crontab(minute=0, hour=0, day_of_month='1,15')   # Midnight on the 1st and 15th of each month
crontab(minute=0, hour=0, day_of_month=1, month_of_year='*/3')  # Quarterly (Jan, Apr, Jul, Oct)
# Complex: 9am–5pm, Monday–Friday, every 30 min
crontab(hour='9-17', minute='*/30', day_of_week='mon-fri')

Fix 3: Fix Timezone Issues
Beat uses UTC by default. If your crontab should run in a local timezone:
# settings.py (Django) or celeryconfig.py
# Correct — set timezone explicitly
CELERY_TIMEZONE = 'America/New_York' # Or 'Europe/London', 'Asia/Tokyo', etc.
CELERY_ENABLE_UTC = True # Store times internally in UTC; schedules are interpreted in CELERY_TIMEZONE
# Django's TIME_ZONE must also be set for Django-Celery-Beat
TIME_ZONE = 'America/New_York'
USE_TZ = True

# celery.py
app.conf.update(
timezone='America/New_York',
enable_utc=True,
)

Verify timezone handling:
# Check workers' logical clock (verifies they respond to inspect commands)
celery -A myapp inspect clock
# Dump each worker's effective configuration and check the timezone settings
celery -A myapp inspect conf | grep -i timezone

Note: When in doubt, schedule everything in UTC and convert in the application code. This avoids daylight saving time bugs entirely.
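The DST hazard behind that note is easy to see with the stdlib `zoneinfo` module (Python 3.9+): the same local wall-clock time maps to different UTC times across the year, which is why a `crontab(hour=9)` under a DST timezone shifts relative to UTC.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# 9:00 AM local time in New York is not a fixed UTC time:
# EST (winter) is UTC-5, EDT (summer) is UTC-4.
ny = ZoneInfo("America/New_York")

winter = datetime(2026, 1, 15, 9, 0, tzinfo=ny)  # EST
summer = datetime(2026, 7, 15, 9, 0, tzinfo=ny)  # EDT

print(winter.astimezone(ZoneInfo("UTC")).hour)  # 14 (9 AM EST = 14:00 UTC)
print(summer.astimezone(ZoneInfo("UTC")).hour)  # 13 (9 AM EDT = 13:00 UTC)
```

If downstream systems compare timestamps in UTC, scheduling in UTC from the start sidesteps this one-hour drift entirely.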
Fix 4: Use django-celery-beat for Dynamic Schedules
The default file-based scheduler requires a code deploy to change schedules. django-celery-beat stores schedules in the database, allowing changes without restarts:
pip install django-celery-beat

# settings.py
INSTALLED_APPS = [
...
'django_celery_beat',
]

python manage.py migrate # Creates the beat schedule tables

# Start beat with the database scheduler
celery -A myapp beat -l info \
--scheduler django_celery_beat.schedulers:DatabaseScheduler

Manage schedules via Django admin or programmatically:
from django_celery_beat.models import PeriodicTask, CrontabSchedule
import json
# Create a crontab schedule
schedule, created = CrontabSchedule.objects.get_or_create(
hour=9,
minute=0,
day_of_week='*',
day_of_month='*',
month_of_year='*',
timezone='America/New_York',
)
# Create the periodic task
PeriodicTask.objects.update_or_create(
name='Daily Report',
defaults={
'task': 'reports.tasks.send_daily_report',
'crontab': schedule,
'args': json.dumps([]),
'kwargs': json.dumps({'format': 'pdf'}),
'enabled': True,
},
)

Fix 5: Ensure Tasks Are Discoverable
Beat can only schedule tasks that Celery has discovered and registered:
# myapp/__init__.py — ensures Celery app is loaded with Django
from .celery import app as celery_app
__all__ = ('celery_app',)

# celery.py
from celery import Celery
app = Celery('myapp')
app.config_from_object('django.conf:settings', namespace='CELERY')
# Auto-discover tasks in all INSTALLED_APPS
app.autodiscover_tasks() # Looks for tasks.py in each installed app

# tasks.py — task must use @shared_task or @app.task
from celery import shared_task
@shared_task
def send_daily_report(format='pdf'):
# Task logic here
pass

Verify tasks are registered:
# List all registered tasks
celery -A myapp inspect registered
# Or start a worker and read the [tasks] section of its startup banner
celery -A myapp worker -l info
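A common reason a task never registers is that its module fails to import at all (syntax error, missing dependency, wrong package path), in which case autodiscovery silently skips it. A quick stdlib check, with illustrative module names (substitute your own apps):

```python
import importlib.util

def module_importable(name: str) -> bool:
    """Return True if `name` can be found on the current sys.path.

    If this returns False for your tasks module (e.g. 'reports.tasks'),
    autodiscover_tasks() has nothing to register and Beat has
    nothing to schedule.
    """
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # A parent package in the dotted path doesn't exist at all.
        return False

# Module names here are illustrative; check your own app's modules.
print(module_importable("json"))               # True: stdlib, always found
print(module_importable("no_such_app.tasks"))  # False: would never register
```

Run this from the same directory and virtualenv the worker uses, since `sys.path` differences between shells are themselves a frequent cause of this problem.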
Use Flower to monitor task execution and verify Beat is sending tasks:
pip install flower
# Start Flower
celery -A myapp flower --port=5555
# Open http://localhost:5555
# Tasks tab: see queued, active, completed tasks
# Monitor tab: see when tasks were received vs executed

Check Beat is producing tasks from logs:
# Beat log shows tasks being sent
celery -A myapp beat -l info
# INFO: Scheduler: Sending due task send-daily-report (reports.tasks.send_daily_report)
# Worker log shows tasks being received and executed
celery -A myapp worker -l info
# INFO: Received task: reports.tasks.send_daily_report[uuid]
# INFO: Task reports.tasks.send_daily_report[uuid] succeeded in 1.23s: None
# If Beat sends but worker doesn't receive: check broker connectivity
# If worker receives but task fails: check task code and logs

Still Not Working?
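Before walking through the remaining cases, the Beat-vs-worker log comparison from Fix 6 can be mechanized. A hypothetical triage helper, assuming the default log message formats shown above:

```python
import re

# Regexes for the default "Sending due task" and "Received task" messages.
SENT = re.compile(r"Sending due task \S+ \((\S+)\)")
RECEIVED = re.compile(r"Received task: (\S+)\[")

def unreceived_tasks(beat_log: str, worker_log: str) -> set:
    """Return task names Beat sent that no worker ever received."""
    sent = {m.group(1) for m in SENT.finditer(beat_log)}
    received = {m.group(1) for m in RECEIVED.finditer(worker_log)}
    return sent - received

beat_log = """\
INFO: Scheduler: Sending due task send-daily-report (reports.tasks.send_daily_report)
INFO: Scheduler: Sending due task sync-inventory (inventory.tasks.sync_inventory)
"""
worker_log = "INFO: Received task: reports.tasks.send_daily_report[abc-123]\n"

print(unreceived_tasks(beat_log, worker_log))
# {'inventory.tasks.sync_inventory'}: sent by Beat, never picked up
```

Anything in the result set points at broker connectivity or queue routing rather than the task code itself.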
celerybeat-schedule file is stale — delete the celerybeat-schedule file (or celerybeat-schedule.db for the shelve scheduler) and restart Beat. This file stores the last run time for each task. A corrupted or outdated file can prevent tasks from running.
Tasks queued but not executed — Beat puts tasks on the queue. If the queue fills up (broker storage full, worker crashed), tasks pile up. Check the queue depth: celery -A myapp inspect active_queues and celery -A myapp inspect reserved.
Beat runs tasks at startup then stops — this happens when Beat’s schedule file is empty or the task has never run before. Beat calculates the next run time from the last run. On first start, it runs immediately, then waits for the next scheduled time. This is expected behavior.
Multiple Beat instances in Kubernetes — if your Beat pod restarts or you run multiple replicas, multiple Beat instances will run simultaneously, causing duplicate task execution. Always run exactly one Beat pod: set replicas: 1 and avoid HorizontalPodAutoscaler for the Beat deployment.
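In Kubernetes, replicas: 1 is the real fix. As a belt-and-braces guard against accidental duplicates on a single host or within one container, a process can hold an exclusive file lock for its lifetime so a second instance fails fast. A sketch using the stdlib (Linux/macOS only; the lock path is an illustrative choice, not something Celery itself uses, and a file lock does not protect across separate pods):

```python
import fcntl
import os

LOCK_PATH = "/tmp/myapp-beat.lock"  # illustrative path

def acquire_singleton_lock(path=LOCK_PATH):
    # Open (or create) the lock file and try to take an exclusive,
    # non-blocking flock. The lock is released automatically when the
    # process exits, so a crash never leaves it stuck.
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        os.close(fd)
        raise SystemExit("another beat instance already holds the lock")
    return fd  # keep this fd open for the life of the process

fd = acquire_singleton_lock()
# A second acquisition (another process, or another fd) now fails fast:
try:
    acquire_singleton_lock()
except SystemExit as e:
    print(e)  # another beat instance already holds the lock
```

The django-celery-beat DatabaseScheduler does not make duplicate Beats safe either; both instances would still read the same schedule and send every task twice.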
For related Celery issues, see Fix: Celery Task Not Executing and Fix: Celery Task Not Received.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.