Fix: Nginx upstream timed out (110: Connection timed out) while reading response header
Quick Answer
How to fix the Nginx upstream timed out error caused by slow backend responses, proxy timeout settings, PHP-FPM hangs, and upstream server configuration issues.
The Error
You check Nginx error logs and see:
```text
upstream timed out (110: Connection timed out) while reading response header from upstream
```
Or variations:
```text
upstream timed out (110: Connection timed out) while connecting to upstream
upstream timed out (110: Connection timed out) while reading upstream
upstream timed out (110: Operation timed out) while waiting for request
```
The browser shows a 502 Bad Gateway or 504 Gateway Timeout error. Nginx sent the request to the backend (upstream) server, but the backend did not respond within the configured timeout.
Why This Happens
Nginx acts as a reverse proxy, forwarding requests to a backend application (Node.js, Python, PHP-FPM, Java, etc.). When the backend takes too long to respond, Nginx gives up and returns an error to the client.
Common causes:
- Slow backend response. The application is processing a heavy query, generating a report, or waiting on a slow external API.
- Backend is overloaded. Too many concurrent requests overwhelm the backend, causing response times to spike.
- Default timeout is too low. Nginx’s default proxy timeouts are 60 seconds, which may not be enough for long-running operations.
- Backend crashed or hung. The upstream process (PHP-FPM, Gunicorn, Node.js) is stuck or has crashed.
- Connection pool exhaustion. The backend has no available workers to handle the request.
- Network issues between Nginx and the backend. Firewall rules, DNS resolution failures, or network partitions.
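Before picking a fix, it helps to see which variant dominates your logs. A quick triage sketch (the log path assumes a Debian/Ubuntu layout; adjust it for your distro):

```bash
# Count each "upstream timed out" variant in the error log
# (log path is an assumption; adjust for your system)
LOG=/var/log/nginx/error.log
grep -o 'while \(reading response header from\|connecting to\|reading\) upstream' "$LOG" \
  | sort | uniq -c | sort -rn
```

A majority of "while connecting to upstream" lines points at Fix 3 (the backend is unreachable), while "while reading response header from upstream" points at Fixes 1, 2, and 4 (the backend is reachable but slow).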
Fix 1: Increase Proxy Timeouts
The most common fix. Increase the timeout values in your Nginx configuration:
```nginx
location / {
    proxy_pass http://backend;
    proxy_connect_timeout 300s;
    proxy_send_timeout 300s;
    proxy_read_timeout 300s;
    send_timeout 300s;
}
```
What each timeout does:
| Directive | Default | Purpose |
|---|---|---|
| proxy_connect_timeout | 60s | Time to establish a connection to the upstream |
| proxy_send_timeout | 60s | Time between successive writes to the upstream |
| proxy_read_timeout | 60s | Time between successive reads from the upstream |
| send_timeout | 60s | Time between successive writes to the client |
For long-running requests (file uploads, report generation), increase proxy_read_timeout since that is where the backend is processing:
```nginx
location /api/reports {
    proxy_pass http://backend;
    proxy_read_timeout 600s;  # 10 minutes for report generation
}
```
After editing, test and reload:
```bash
nginx -t
sudo systemctl reload nginx
```
Pro Tip: Only increase timeouts for specific endpoints that need it, not globally. Setting a 10-minute timeout globally masks backend performance problems and allows slow requests to hold connections open unnecessarily.
Fix 2: Fix PHP-FPM Timeouts
If the upstream is PHP-FPM, you need to increase timeouts on both sides:
Nginx FastCGI timeouts:
```nginx
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php/php-fpm.sock;
    fastcgi_read_timeout 300s;
    fastcgi_send_timeout 300s;
    fastcgi_connect_timeout 300s;
    include fastcgi_params;
}
```
PHP-FPM pool settings (/etc/php/8.3/fpm/pool.d/www.conf):
```ini
request_terminate_timeout = 300
```
PHP execution time (php.ini):
```ini
max_execution_time = 300
```
Restart both after changes:
```bash
sudo systemctl restart php8.3-fpm
sudo systemctl reload nginx
```
Fix 3: Fix Backend Connection Issues
If the error says while connecting to upstream (not while reading), Nginx cannot reach the backend at all:
Check if the backend is running:
```bash
# For Gunicorn/Django
systemctl status gunicorn

# For Node.js with PM2
pm2 status

# For PHP-FPM
systemctl status php8.3-fpm

# For a generic process
ss -tlnp | grep <port>
```
Check the upstream address:
```nginx
# Is this correct?
upstream backend {
    server 127.0.0.1:8000;
}
```
Verify the backend is listening on the expected port:
```bash
curl -v http://127.0.0.1:8000/
```
If the backend is not running, start it. If it is running but not responding, check its logs.
For general connection refused errors, see Fix: ERR_CONNECTION_REFUSED localhost.
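If the backend responds but slowly, you can measure the slowness directly with curl's -w flag, bypassing Nginx entirely (127.0.0.1:8000 is the example upstream address from above; substitute yours):

```bash
# Time the backend directly; a large ttfb (time to first byte) means the
# app is slow producing response headers, the exact symptom Nginx reports
curl -s -o /dev/null \
  -w 'connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
  http://127.0.0.1:8000/
```

If ttfb here regularly approaches your proxy_read_timeout, raising the timeout only papers over a backend performance problem.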
Common Mistake: Configuring Nginx to proxy to localhost:8000 when the backend listens on 127.0.0.1:8000 (or vice versa). On systems where localhost resolves to the IPv6 address ::1, those are not the same endpoint. Use 127.0.0.1 explicitly to avoid IPv4/IPv6 ambiguity.
Fix 4: Increase Backend Workers
The backend might have too few workers to handle incoming requests:
Gunicorn (Python):
```bash
gunicorn --workers 4 --timeout 120 myapp:app
```
Rule of thumb: workers = (2 * CPU cores) + 1.
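The rule of thumb can be computed on the target machine itself (a sketch assuming nproc is available, as on most Linux systems; myapp:app is the example module from above):

```bash
# workers = (2 * CPU cores) + 1
CORES=$(nproc)
echo "gunicorn --workers $((2 * CORES + 1)) --timeout 120 myapp:app"
```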
uWSGI:
```ini
[uwsgi]
processes = 4
harakiri = 120
```
PHP-FPM:
```ini
pm = dynamic
pm.max_children = 50
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 35
```
Node.js with clustering:
```javascript
const cluster = require("cluster");
const os = require("os");
const http = require("http");

if (cluster.isPrimary) {
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
} else {
  // Workers share the listening port
  http.createServer((req, res) => res.end("ok")).listen(8000);
}
```
Fix 5: Add Upstream Keepalive
Enable keepalive connections between Nginx and the backend to reduce connection overhead:
```nginx
upstream backend {
    server 127.0.0.1:8000;
    keepalive 32;
}

location / {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}
```
keepalive 32 maintains up to 32 idle connections to the upstream. This reduces the time spent establishing new TCP connections for each request.
Fix 6: Configure Load Balancing
If you have multiple backend servers, Nginx can distribute load and handle failures:
```nginx
upstream backend {
    # max_fails and fail_timeout control when a server is marked unavailable
    server 127.0.0.1:8001 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8002 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8003 max_fails=3 fail_timeout=30s;
}
```
If one server is slow, Nginx tries the next one:
```nginx
location / {
    proxy_pass http://backend;
    proxy_next_upstream error timeout http_502 http_503;
    proxy_next_upstream_timeout 10s;
    proxy_next_upstream_tries 3;
}
```
proxy_next_upstream tells Nginx to try the next server when the current one returns an error or times out.
Fix 7: Add Request Buffering
Enable buffering to prevent the upstream from being held open while the client slowly receives the response:
```nginx
location / {
    proxy_pass http://backend;
    proxy_buffering on;
    proxy_buffer_size 4k;
    proxy_buffers 8 16k;
    proxy_busy_buffers_size 32k;
}
```
With buffering, Nginx receives the entire response from the backend quickly, frees the upstream connection, and then sends the response to the slow client at its own pace.
For large responses (file downloads, exports):
```nginx
proxy_max_temp_file_size 1024m;
```
Fix 8: Monitor and Debug
Check Nginx error logs:
```bash
tail -f /var/log/nginx/error.log
```
Check backend access times in Nginx logs:
Add upstream response time to the log format:
```nginx
log_format upstream_time '$remote_addr - $remote_user [$time_local] '
                         '"$request" $status $body_bytes_sent '
                         '"$http_referer" "$http_user_agent" '
                         'upstream_response_time=$upstream_response_time '
                         'request_time=$request_time';

access_log /var/log/nginx/access.log upstream_time;
```
$upstream_response_time shows how long the backend took. If this is consistently close to proxy_read_timeout, your backend is too slow.
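With that log format in place, a short awk pass surfaces the slowest upstream responses. This sketch assumes the exact format above (where the request path is the seventh whitespace-separated field); adjust the field positions if your format differs:

```bash
# Print the five slowest upstream responses: time, then request path
awk '{
  for (i = 1; i <= NF; i++)
    if ($i ~ /^upstream_response_time=/) {
      split($i, a, "=");
      print a[2], $7;
    }
}' /var/log/nginx/access.log | sort -rn | head -5
```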
Check backend logs for the corresponding slow request:
```bash
# Gunicorn
journalctl -u gunicorn -f

# PHP-FPM
tail -f /var/log/php8.3-fpm.log

# Node.js PM2
pm2 logs
```
Still Not Working?
If the error persists after increasing timeouts:
Check for DNS resolution issues. If the upstream uses a hostname, DNS resolution might be slow:
```nginx
upstream backend {
    server backend.local:8000;
}
```
Add a resolver:
```nginx
resolver 127.0.0.1 valid=30s;
```
Check for SELinux blocking. On RHEL/CentOS, SELinux might block Nginx from connecting to the backend:
```bash
setsebool -P httpd_can_network_connect 1
```
Check for file descriptor limits. Under heavy load, Nginx might run out of file descriptors:
```nginx
worker_rlimit_nofile 65535;

events {
    worker_connections 4096;
}
```
Check for upstream connection limits. Some backends limit concurrent connections. Check the backend’s configuration for connection pool sizes.
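As a quick sanity check on the file descriptor point, compare worker_rlimit_nofile against the default limit new processes inherit (a sketch for Linux; the running workers' actual limit is listed in /proc/<pid>/limits):

```bash
# Default per-process open-file limit on this host;
# worker_rlimit_nofile overrides this for Nginx workers
ulimit -n
```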
For 502 Bad Gateway errors (where the backend returns an error rather than timing out), see Fix: Nginx 502 Bad Gateway. For 504 errors visible to the client, see Fix: Nginx 504 Gateway Timeout. For 403 errors, see Fix: Nginx 403 Forbidden.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.