Fix: Uvicorn Not Working — Worker Errors, Reload Issues, and Production Deployment
Quick Answer
How to fix Uvicorn errors — Address already in use port binding, reload not detecting changes, SSL certificate errors, worker class with gunicorn, WebSocket disconnect, graceful shutdown, and proxy headers behind nginx.
The Error
You start Uvicorn and port 8000 is already taken:
ERROR: [Errno 48] error while attempting to bind on address ('0.0.0.0', 8000):
address already in use

Or auto-reload doesn't detect changes to your code:
uvicorn main:app --reload
# Edit main.py, save, reload doesn't trigger

Or you deploy with multiple workers and the app breaks in ways it didn't locally:
uvicorn main:app --workers 4
# Memory usage × 4, in-memory state inconsistent across workers

Or connections behind a reverse proxy have wrong client IPs:
@app.get("/")
def index(request: Request):
    return {"client": request.client.host}

# Returns 127.0.0.1 (the proxy) instead of the real user IP

Or SSL/TLS setup fails with cryptic errors:
SSLError: [SSL: NO_PRIVATE_KEY_ASSIGNED] no private key assigned

Uvicorn is the standard ASGI server for modern Python web apps (FastAPI, Starlette, Quart). It's lightning-fast (built on uvloop and httptools), but production deployment involves several decisions — worker count, reload settings, proxy headers, TLS — each with its own failure modes.
Why This Happens
Uvicorn is an ASGI server — it speaks the async Python web protocol, different from WSGI (Flask, Django). It wraps a single async event loop per process. The --workers flag spawns multiple processes (each with its own event loop), but those processes don’t share memory — anything stored in app-level variables is independent per worker.
Auto-reload watches your source tree for file changes. It only reloads the app, not Uvicorn itself, and has specific rules about which files it tracks. Behind a reverse proxy, Uvicorn sees the proxy’s IP as the client unless you tell it to trust forwarded headers — this breaks IP-based rate limiting, analytics, and geolocation.
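To make the ASGI point concrete, here is a minimal raw ASGI app with no framework at all, which Uvicorn can serve directly (the file name asgi_demo.py is just an example):

```python
# asgi_demo.py: a complete ASGI application with no framework.
# Run with: uvicorn asgi_demo:app
async def app(scope, receive, send):
    # Uvicorn calls this coroutine for each request; scope describes it
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({
        "type": "http.response.body",
        "body": b"hello from ASGI",
    })
```

FastAPI and Starlette apps are ultimately callables with this same signature, which is why any of them runs under Uvicorn unchanged.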
Fix 1: Port Already in Use
[Errno 48] error while attempting to bind on address ('0.0.0.0', 8000)

Another process is already bound to port 8000 (the errno for EADDRINUSE is 48 on macOS, 98 on Linux). Find and kill it, or use a different port.
Find the process using port 8000:
# Linux / macOS
lsof -i :8000
# COMMAND PID USER ...
# python 12345 user ...
kill -9 12345
# Or one-liner
kill $(lsof -ti :8000)

Windows PowerShell:
Get-NetTCPConnection -LocalPort 8000 | Select-Object OwningProcess
Stop-Process -Id <pid>

Use a different port:
uvicorn main:app --port 8001

Most common cause: you crashed an earlier run and Python left the process running. Ctrl+C should clean up, but a stuck process needs a manual kill.
For general port conflict patterns, see port 3000 already in use.
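If you script your dev environment, you can probe for a free port before launching. A stdlib-only sketch (the function name and port range are arbitrary):

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is currently bound to host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# Pick the first free port in a range, falling back to None
port = next((p for p in range(8000, 8010) if port_is_free(p)), None)
```

The check is inherently racy (another process can grab the port between the check and Uvicorn's own bind), so treat it as a convenience, not a guarantee.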
Fix 2: Auto-Reload and Development Mode
uvicorn main:app --reload

--reload rules:
- Watches the current working directory (and imported modules under it) by default
- Only triggers on .py file changes (plus a few extensions like .yaml)
- Ignores venv/, __pycache__/, .git/, etc.
Reload not triggering — common causes:
- Editing a file outside the reload directory:
# Add directories to watch
uvicorn main:app --reload --reload-dir ./src --reload-dir ./config
# Extend watched extensions
uvicorn main:app --reload --reload-include "*.yaml" --reload-include "*.html"

- Editor saves creating temp files — some editors (vim with swap files) confuse the watcher. Check with an explicit save:
touch main.py  # Manually trigger a file event

- Running under Docker with volume mounts on macOS — file events may not propagate. Use polling:
uvicorn main:app --reload --reload-include "*.py" --reload-delay 1.0

Never use --reload in production — it's slow, uses more memory, and restart-on-change is unexpected server behavior.
--workers doesn’t work with --reload:
uvicorn main:app --reload --workers 4  # Warning: workers ignored with reload

Reload mode is single-process by design.
Common Mistake: Deploying to production with --reload still enabled (from copy-pasting the dev command). Production apps should run without reload; use a process manager (systemd, supervisord) to restart on crashes, and deploy new code via container rebuilds or graceful worker reloads.
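A process manager handles the restart-on-crash job that people sometimes misuse --reload for. A minimal systemd unit might look like this (service name, paths, and user are placeholders to adapt):

```ini
# /etc/systemd/system/myapp.service — illustrative; adjust paths and user
[Unit]
Description=Uvicorn app
After=network.target

[Service]
User=www-data
WorkingDirectory=/srv/myapp
ExecStart=/srv/myapp/venv/bin/uvicorn main:app --host 0.0.0.0 --port 8000
Restart=always
RestartSec=2

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now myapp; systemd then restarts the process if it crashes.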
Fix 3: Multiple Workers for Production
uvicorn main:app --workers 4 --host 0.0.0.0 --port 8000

--workers N spawns N Uvicorn processes, each with its own async event loop. Rule of thumb: (2 × CPU_cores) + 1, tuned to your actual workload.
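The rule of thumb translates to a one-liner; a sketch using os.cpu_count() (treat the result as a starting point, not a law):

```python
import os

def default_workers() -> int:
    """The common starting heuristic for worker count: (2 x CPU cores) + 1."""
    cores = os.cpu_count() or 1  # cpu_count() can return None
    return 2 * cores + 1
```

Mostly-async, I/O-bound apps often run fine with far fewer workers, since each worker's event loop already handles many concurrent requests; CPU-bound apps may want roughly one worker per core.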
Workers don’t share memory:
# WRONG — state is per-worker
from fastapi import FastAPI

app = FastAPI()

# Each worker has its own counter
counter = 0

@app.post("/increment")
def increment():
    global counter
    counter += 1
    return {"count": counter}

# Different workers return different counts

CORRECT — use shared external state (Redis, a database, etc.):
from fastapi import FastAPI
import redis

app = FastAPI()
r = redis.Redis()

@app.post("/increment")
def increment():
    count = r.incr("counter")  # Atomic across workers
    return {"count": count}

For Redis-specific connection issues when sharing state across workers, see Redis connection refused.
Use Gunicorn as the process manager — better signal handling for production:
pip install gunicorn
gunicorn main:app \
  --worker-class uvicorn.workers.UvicornWorker \
  --workers 4 \
  --bind 0.0.0.0:8000 \
  --timeout 60 \
  --graceful-timeout 30

Or with UvicornH11Worker, if you need pure-Python HTTP/1.1 handling:
gunicorn main:app --worker-class uvicorn.workers.UvicornH11Worker --workers 4

Pro Tip: uvicorn --workers 4 is fine for small deployments. For anything serious, use Gunicorn with UvicornWorker — it handles worker lifecycle, graceful restarts, and worker timeouts more robustly than Uvicorn's built-in multi-process mode. The performance is identical; the operational ergonomics are much better.
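Gunicorn settings can also live in a config file instead of flags. A hypothetical gunicorn.conf.py (these variable names are standard Gunicorn config settings; values are illustrative):

```python
# gunicorn.conf.py — load with: gunicorn -c gunicorn.conf.py main:app
import multiprocessing

bind = "0.0.0.0:8000"
workers = multiprocessing.cpu_count() * 2 + 1
worker_class = "uvicorn.workers.UvicornWorker"
timeout = 60
graceful_timeout = 30
```

A config file keeps the deploy command short and lets you version the settings with the app.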
Fix 4: SSL/TLS Setup
uvicorn main:app \
  --ssl-keyfile /path/to/key.pem \
  --ssl-certfile /path/to/cert.pem \
  --host 0.0.0.0 \
  --port 443

Common SSL errors:
SSLError: [SSL: NO_PRIVATE_KEY_ASSIGNED] no private key assigned

Usually means --ssl-keyfile wasn't provided or the file is unreadable.
SSL_ERROR_NO_CYPHER_OVERLAP

Client and server can't agree on a cipher suite. Usually the cert uses an unsupported algorithm or the client is too old.
Self-signed cert for development:
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem \
  -sha256 -days 365 -nodes \
  -subj "/CN=localhost"

uvicorn main:app --ssl-keyfile key.pem --ssl-certfile cert.pem --port 8443

Production: terminate SSL at the load balancer, not at Uvicorn. It's simpler, more secure, and lets you rotate certs without restarting the app:
[HTTPS client] → [Nginx/ALB with TLS] → [Uvicorn HTTP on :8000]

For nginx-specific SSL handshake issues, see nginx SSL handshake failed.
Fix 5: Behind a Reverse Proxy — Forwarded Headers
@app.get("/ip")
def get_ip(request: Request):
    return {"client_ip": request.client.host}

# Returns 127.0.0.1 (the proxy) when running behind nginx/ALB

Uvicorn doesn't trust X-Forwarded-For headers by default. Enable proxy headers:
uvicorn main:app \
  --host 0.0.0.0 \
  --port 8000 \
  --proxy-headers \
  --forwarded-allow-ips="*"

--forwarded-allow-ips accepts a comma-separated list of trusted proxy IPs. "*" trusts all (only safe when Uvicorn is not directly exposed to the internet):
# Trust specific proxy IPs
--forwarded-allow-ips="10.0.0.1,10.0.0.2"

How it changes behavior:
# Without --proxy-headers
request.client.host → "10.0.0.1" (proxy IP)
request.url.scheme → "http"

# With --proxy-headers
request.client.host → "203.0.113.42" (real client IP from X-Forwarded-For)
request.url.scheme → "https" (from X-Forwarded-Proto)

Required nginx configuration to forward the headers:
server {
    listen 443 ssl;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Common Mistake: Enabling --proxy-headers on a directly exposed Uvicorn (not behind a proxy). Attackers can then spoof their IP by sending X-Forwarded-For themselves. Only enable it when traffic actually comes through a trusted proxy.
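The trust decision can be sketched as a pure function (a hypothetical helper to illustrate the logic; Uvicorn's own ProxyHeadersMiddleware does the real work):

```python
def resolve_client_ip(peer_ip, x_forwarded_for, trusted_proxies):
    """Trust X-Forwarded-For only when the direct peer is a known proxy."""
    if x_forwarded_for and peer_ip in trusted_proxies:
        # Leftmost entry is the original client, as appended by each proxy
        return x_forwarded_for.split(",")[0].strip()
    return peer_ip

resolve_client_ip("10.0.0.1", "203.0.113.42", {"10.0.0.1"})      # → "203.0.113.42"
resolve_client_ip("198.51.100.7", "203.0.113.42", {"10.0.0.1"})  # → "198.51.100.7"
```

With chained proxies, production implementations walk the list from the right, skipping trusted hops, because the leftmost entry is client-controlled.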
Fix 6: Graceful Shutdown and Signal Handling
When you deploy new code, you want existing requests to complete before the worker shuts down.
from fastapi import FastAPI

app = FastAPI()

@app.on_event("startup")
async def startup():
    # Open DB connections, warm caches, etc.
    print("App starting")

@app.on_event("shutdown")
async def shutdown():
    # Close DB pools, flush logs
    print("App shutting down — cleaning up")

Gunicorn graceful timeout:
gunicorn main:app \
  --worker-class uvicorn.workers.UvicornWorker \
  --workers 4 \
  --timeout 60 \
  --graceful-timeout 30 \
  --bind 0.0.0.0:8000

- --timeout 60 — kill the worker if it doesn't respond to heartbeats
- --graceful-timeout 30 — when shutting down, give active requests 30 seconds to finish
FastAPI lifespan context (preferred over on_event):
from contextlib import asynccontextmanager
from fastapi import FastAPI

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Startup
    db_pool = await create_pool()
    app.state.db = db_pool
    yield
    # Shutdown
    await db_pool.close()

app = FastAPI(lifespan=lifespan)

The on_event decorators are deprecated in FastAPI; use lifespan for new code.
For FastAPI dependency lifecycle issues that interact with Uvicorn workers, see FastAPI dependency injection error.
Fix 7: WebSockets
Uvicorn handles WebSockets natively — but disconnections and concurrency need care.
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()
    try:
        while True:
            data = await websocket.receive_text()
            await websocket.send_text(f"Echo: {data}")
    except WebSocketDisconnect:
        print("Client disconnected")

Common WebSocket errors:

WebSocketDisconnect: code=1006 (no close frame)

The client connection dropped without a close handshake — network issue, timeout, or a proxy cutting the connection.
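Clients generally treat a 1006 close as retryable. A generic reconnect-with-backoff sketch (connect is whatever your client library provides; all names here are illustrative):

```python
import time

def connect_with_retry(connect, max_attempts=5, base_delay=0.5):
    """Call connect() until it succeeds, doubling the delay after each failure."""
    for attempt in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # Out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))
```

In an async client the same pattern applies with await asyncio.sleep() instead of time.sleep().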
Configure nginx for WebSocket support:
location /ws {
    proxy_pass http://127.0.0.1:8000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 86400;  # 24 hours
}

Without the Upgrade header, nginx proxies the request as plain HTTP/1.1 and drops the connection before the WebSocket handshake completes.
For WebSocket proxy issues in nginx, see nginx websocket proxy not working.
Broadcasting to multiple clients requires a connection manager (Uvicorn is per-worker, so cross-worker broadcasts need Redis pub/sub):
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

class ConnectionManager:
    def __init__(self):
        self.active: list[WebSocket] = []

    async def connect(self, ws: WebSocket):
        await ws.accept()
        self.active.append(ws)

    def disconnect(self, ws: WebSocket):
        self.active.remove(ws)

    async def broadcast(self, message: str):
        for ws in self.active:
            await ws.send_text(message)

manager = ConnectionManager()

@app.websocket("/ws")
async def ws(websocket: WebSocket):
    await manager.connect(websocket)
    try:
        while True:
            data = await websocket.receive_text()
            await manager.broadcast(data)
    except WebSocketDisconnect:
        manager.disconnect(websocket)

Fix 8: Logging and Debugging
uvicorn main:app --log-level debug # debug, info, warning, error, critical
uvicorn main:app --access-log # Print access log (default on)
uvicorn main:app --no-access-log # Suppress access log (quieter prod logs)

Custom logging configuration:
import logging.config
logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "default": {
            "format": "%(asctime)s [%(levelname)s] %(name)s: %(message)s",
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "default",
            "stream": "ext://sys.stdout",
        },
    },
    "root": {
        "level": "INFO",
        "handlers": ["console"],
    },
    "loggers": {
        "uvicorn": {"level": "INFO"},
        "uvicorn.error": {"level": "INFO"},
        "uvicorn.access": {"level": "INFO"},
    },
})

Via CLI with a YAML config:
uvicorn main:app --log-config logging.yaml

Access log fields — customize the format:
uvicorn main:app --access-log --log-config custom-logging.json

Debugging slow endpoints:
import time
from fastapi import FastAPI, Request

app = FastAPI()

@app.middleware("http")
async def log_time(request: Request, call_next):
    start = time.perf_counter()
    response = await call_next(request)
    elapsed = time.perf_counter() - start
    if elapsed > 1.0:
        print(f"SLOW: {request.url.path} took {elapsed:.2f}s")
    return response

Still Not Working?
Uvicorn vs Gunicorn vs Hypercorn
- Uvicorn — Fastest ASGI server, simple built-in worker mode. Best for small/medium deployments.
- Gunicorn + UvicornWorker — Production-grade process manager with graceful restart. Recommended for production.
- Hypercorn — HTTP/2 and HTTP/3 support. Slower than Uvicorn for HTTP/1.1.
Testing with Uvicorn
from fastapi.testclient import TestClient
from main import app

client = TestClient(app)  # Calls the ASGI app in-process — no running server needed

def test_endpoint():
    response = client.get("/")
    assert response.status_code == 200

For pytest fixture patterns with FastAPI/Uvicorn testing, see pytest fixture not found.
Docker Deployment
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
# Production — no --reload, explicit host binding
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "4"]

Don't use --workers in containers orchestrated by Kubernetes — let Kubernetes scale replicas instead. One worker per container keeps horizontal scaling clean:
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

# deployment.yaml
replicas: 4

Health Check Endpoint
Every production deployment needs a health check. Add a simple endpoint:
from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
async def health():
    return {"status": "ok"}

@app.get("/health/ready")
async def ready():
    # Check DB, dependencies
    return {"status": "ready"}

Configure Kubernetes or your load balancer to hit /health for liveness and /health/ready for readiness.
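In Kubernetes, wiring those endpoints into probes might look like this (illustrative values; tune periods and thresholds for your app):

```yaml
# Container spec fragment — assumes Uvicorn listening on port 8000
livenessProbe:
  httpGet:
    path: /health
    port: 8000
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health/ready
    port: 8000
  periodSeconds: 5
  failureThreshold: 3
```

A failing readiness probe takes the pod out of the load balancer without killing it; a failing liveness probe restarts the container.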