
Fix: Nginx 504 Gateway Timeout

FixDevs

Quick Answer

Fix the Nginx 504 Gateway Timeout error by tuning proxy timeout settings, restarting or repairing unresponsive upstream servers, adjusting PHP-FPM timeouts, and debugging with the Nginx error logs.

The Error

You hit a page or API endpoint behind Nginx and get:

504 Gateway Timeout

Your browser shows a blank page with that message. In the Nginx error log (/var/log/nginx/error.log), you see something like:

upstream timed out (110: Connection timed out) while reading response header from upstream

Or one of these variations:

upstream timed out (110: Connection timed out) while connecting to upstream
upstream timed out (110: Operation timed out) while reading upstream

The request never completes. Nginx gave up waiting for your backend to respond.

Why This Happens

Nginx acts as a reverse proxy. It receives the client’s request, forwards it to an upstream server (Node.js, Python, PHP-FPM, Java, etc.), and waits for a response. A 504 Gateway Timeout means Nginx waited too long and gave up.

This happens for one of these reasons:

  • The upstream server is too slow. A heavy database query, an unoptimized API call, or a large file processing job takes longer than Nginx’s timeout window.
  • The upstream server is down or unresponsive. The process crashed, is stuck in a deadlock, or ran out of memory.
  • Nginx timeout values are too low. The default proxy_read_timeout is 60 seconds. If your backend legitimately needs more time, Nginx cuts the connection.
  • Network issues between Nginx and the upstream. Firewall rules, DNS resolution delays, or saturated network interfaces can slow the connection.
  • The upstream server’s connection pool is exhausted. All worker threads or processes are busy, so new requests queue up and eventually time out.
  • Keepalive connections are misconfigured. Nginx opens a new TCP connection for every request instead of reusing connections, adding overhead and latency.

The fix depends on which of these is your root cause. Start with the error logs, then work through the solutions below.

Fix 1: Increase Proxy Timeout Values

The most common fix. Nginx has three timeout directives that control how long it waits for the upstream server:

  • proxy_connect_timeout — Time to establish a connection to the upstream. Default: 60s.
  • proxy_send_timeout — Time to send the request to the upstream. Default: 60s.
  • proxy_read_timeout — Time to wait for the upstream to send a response. Default: 60s. This is the one that triggers most 504 errors.

Open your Nginx config:

sudo nano /etc/nginx/sites-available/your-site.conf

Inside the location block that proxies to your upstream, add or adjust these values:

location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_connect_timeout 300s;
    proxy_send_timeout 300s;
    proxy_read_timeout 300s;
}

This gives your upstream 5 minutes to respond. Adjust the value to match your actual needs. If your slowest legitimate request takes 90 seconds, set it to 120s to give some headroom.
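Before picking a number, measure how long the slow request actually takes so the timeout is based on data rather than guesswork. curl's built-in timers make this easy; the endpoint path below is a placeholder for whichever request is timing out:

```shell
# Time the slow request end to end. /slow-endpoint is a placeholder.
curl -o /dev/null -s \
  -w 'total: %{time_total}s  connect: %{time_connect}s  ttfb: %{time_starttransfer}s\n' \
  http://127.0.0.1:3000/slow-endpoint
```

Run it a few times at peak load; set your timeouts comfortably above the worst total you observe.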

You can also set these in the server or http block to apply them globally:

http {
    proxy_connect_timeout 300s;
    proxy_send_timeout 300s;
    proxy_read_timeout 300s;

    server {
        # ...
    }
}

Test and reload:

sudo nginx -t && sudo systemctl reload nginx

Pro Tip: Increasing timeouts is a band-aid, not a cure. If your backend regularly takes more than 60 seconds, the real fix is optimizing the backend. Use timeouts to stop the bleeding while you investigate the root cause. Long timeouts also tie up Nginx worker connections, which can degrade performance under load.

Fix 2: Fix the Upstream Server

The 504 might not be a timeout configuration problem at all. Your upstream server might be genuinely broken. Check if it’s running:

sudo systemctl status your-app

If it’s a Node.js app:

ps aux | grep node

If it’s a Python/Gunicorn app:

ps aux | grep gunicorn

Test if the upstream responds directly, bypassing Nginx:

curl -v http://127.0.0.1:3000/

If curl also hangs or times out, the problem is your backend, not Nginx. Common causes:

  • The app crashed. Restart it. Check its logs for errors.
  • The app is stuck in an infinite loop or deadlock. Check CPU usage with top or htop.
  • The app ran out of memory. Check with free -m and look for OOM killer messages in dmesg.
  • The database connection pool is exhausted. The app is waiting for a database connection that never becomes available.

If the upstream responds fine via curl but Nginx still returns 504, the issue is in the Nginx configuration or the network path between them. If your app is failing to start entirely, check the systemctl service troubleshooting guide for help diagnosing service failures.
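When the backend might be hung, bound the direct test with --max-time so your terminal doesn't hang along with it. A small sketch, assuming the upstream listens on 3000 as in the examples above:

```shell
# Fail fast instead of waiting indefinitely on a hung backend.
curl -sS --max-time 10 -o /dev/null -w '%{http_code} in %{time_total}s\n' \
  http://127.0.0.1:3000/ \
  || echo "backend did not answer within 10 seconds"
```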

Fix 3: Adjust PHP-FPM Timeouts (FastCGI)

If Nginx proxies to PHP-FPM using FastCGI, the timeout directives are different. You need fastcgi_read_timeout instead of proxy_read_timeout:

location ~ \.php$ {
    fastcgi_pass unix:/run/php/php-fpm.sock;
    fastcgi_read_timeout 300s;
    fastcgi_connect_timeout 300s;
    fastcgi_send_timeout 300s;
    include fastcgi_params;
}

You also need to increase PHP-FPM’s own timeout. Edit the PHP-FPM pool config:

sudo nano /etc/php/8.2/fpm/pool.d/www.conf

Find or add:

request_terminate_timeout = 300

And update php.ini as well:

sudo nano /etc/php/8.2/fpm/php.ini
max_execution_time = 300

Restart both:

sudo systemctl restart php8.2-fpm
sudo systemctl reload nginx

Note: All three timeouts (Nginx FastCGI, PHP-FPM pool, and php.ini) must be aligned. If max_execution_time is 30 seconds but fastcgi_read_timeout is 300 seconds, PHP will kill the script at 30 seconds and you’ll likely get a 502 Bad Gateway instead of a 504.

Fix 4: Enable and Tune Keepalive Connections

By default, Nginx opens a new TCP connection to the upstream for every request. If your backend handles many requests, this creates overhead from constant TCP handshakes. Under load, it can lead to connection exhaustion and 504 errors.

Enable keepalive connections to your upstream:

upstream backend {
    server 127.0.0.1:3000;
    keepalive 32;
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}

Key points:

  • keepalive 32 sets the maximum number of idle keepalive connections to the upstream that each Nginx worker process caches. This is not a connection limit — it’s a connection pool size.
  • proxy_http_version 1.1 is required. HTTP/1.0 does not support keepalive.
  • proxy_set_header Connection "" clears the Connection: close header that Nginx adds by default.

If you skip proxy_http_version 1.1 or proxy_set_header Connection "", keepalive will not work. Every request will still open a new connection.

For high-traffic sites, increase the keepalive pool:

upstream backend {
    server 127.0.0.1:3000;
    keepalive 64;
    keepalive_requests 1000;
    keepalive_timeout 60s;
}

Test and reload:

sudo nginx -t && sudo systemctl reload nginx
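After reloading, you can sanity-check that connections are actually being reused. With keepalive working, the number of established connections to the upstream stays roughly flat while you send repeated requests; without it, you'll see ports churn. Port 3000 matches the examples above — swap in your own:

```shell
# Count established TCP connections from this host to the upstream port.
ss -tn state established '( dport = :3000 )' | tail -n +2 | wc -l
```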

Fix 5: Configure Upstream Health Checks with max_fails

When an upstream server becomes slow or unresponsive, Nginx can keep sending requests to it, causing a pile-up of 504 errors. Use max_fails and fail_timeout to temporarily remove unhealthy upstreams from the pool:

upstream backend {
    server 127.0.0.1:3000 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3001 max_fails=3 fail_timeout=30s;
}

This means:

  • If a server fails 3 times within 30 seconds, Nginx marks it as unavailable.
  • Nginx stops sending requests to that server for the next 30 seconds.
  • After 30 seconds, Nginx tries the server again with a single request. If it responds, the server is back in the pool.

A “fail” here includes timeouts (504), connection refused errors, and other upstream failures.

Warning: If you only have one upstream server, max_fails won’t help much — there’s nowhere else to route traffic. It’s most useful with multiple backend instances behind a load balancer.
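If you do have a second instance available but only want it used in emergencies, nginx's backup parameter keeps it out of normal rotation. A sketch, with port 3001 as a placeholder for your standby instance:

```nginx
upstream backend {
    server 127.0.0.1:3000 max_fails=3 fail_timeout=30s;
    # Receives traffic only while the primary is marked unavailable.
    server 127.0.0.1:3001 backup;
}
```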

For environments with multiple upstreams, combine this with the least_conn balancing method to avoid overloading a single server:

upstream backend {
    least_conn;
    server 127.0.0.1:3000 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3001 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3002 max_fails=3 fail_timeout=30s;
}

Fix 6: Debug with Nginx Error Logs

If the fixes above don’t solve it, dig into the logs. The Nginx error log tells you exactly what happened.

Check the error log:

sudo tail -100 /var/log/nginx/error.log

Filter for upstream timeout entries:

sudo grep "upstream timed out" /var/log/nginx/error.log | tail -20

A typical 504 log entry looks like this:

2026/03/09 14:22:31 [error] 1234#0: *5678 upstream timed out
(110: Connection timed out) while reading response header from upstream,
client: 192.168.1.10, server: example.com, request: "GET /api/reports HTTP/1.1",
upstream: "http://127.0.0.1:3000/api/reports", host: "example.com"

This tells you:

  • Which request caused the 504 (GET /api/reports)
  • Which upstream timed out (127.0.0.1:3000)
  • What phase timed out (while reading response header — meaning the connection was established, but the upstream didn’t send a response in time)

The phase matters:

  • while connecting to upstream: Nginx couldn’t establish a TCP connection. The upstream may be down, or the port may be wrong.
  • while reading response header from upstream: the upstream accepted the connection but didn’t respond in time. The backend is too slow or hung.
  • while sending request to upstream: Nginx couldn’t send the full request. Network issue, or the upstream closed the connection.

If you see while connecting to upstream, your backend likely isn’t running or is listening on the wrong port. That’s actually closer to a 502 Bad Gateway scenario. Verify the upstream is running and the port matches your proxy_pass directive. If you’re getting connection refused errors when testing directly, the ERR_CONNECTION_REFUSED troubleshooting guide covers the most common causes.
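When 504s are frequent, it helps to know which upstreams produce them most. A sketch using a throwaway sample log (the entries below are hypothetical) so you can see the output shape; point the same pipeline at your real /var/log/nginx/error.log:

```shell
# Build a small sample log (hypothetical entries), then count timeouts per upstream.
cat > /tmp/sample_error.log <<'EOF'
2026/03/09 14:22:31 [error] 1234#0: *5678 upstream timed out (110: Connection timed out) while reading response header from upstream, upstream: "http://127.0.0.1:3000/api/reports"
2026/03/09 14:22:35 [error] 1234#0: *5679 upstream timed out (110: Connection timed out) while reading response header from upstream, upstream: "http://127.0.0.1:3000/api/export"
2026/03/09 14:22:40 [error] 1234#0: *5680 upstream timed out (110: Connection timed out) while connecting to upstream, upstream: "http://127.0.0.1:3001/health"
EOF

grep "upstream timed out" /tmp/sample_error.log \
  | grep -oE 'upstream: "http://[^/"]+' \
  | sort | uniq -c | sort -rn
```

With this sample, the pipeline reports two timeouts against :3000 and one against :3001 — on a real log, the top entry tells you which backend to investigate first.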

Enable debug logging temporarily for more detail:

error_log /var/log/nginx/error.log debug;

Reload Nginx and reproduce the issue. The debug log is extremely verbose — disable it after you’ve captured what you need:

sudo nginx -t && sudo systemctl reload nginx
# Reproduce the 504, then check logs
sudo tail -500 /var/log/nginx/error.log

Warning: Debug-level logging generates massive amounts of data on production servers. Only enable it briefly, and on specific server blocks if possible.

Fix 7: Increase Upstream Worker Capacity

Your upstream might be responding to some requests but not fast enough to handle the volume. If all worker threads or processes are busy, new requests queue up and time out.

For Node.js apps, check if the event loop is blocked. A single CPU-intensive operation blocks all other requests. Consider using worker threads or moving heavy computation to a background job queue.

For Gunicorn (Python), increase workers:

gunicorn app:app --workers 4 --timeout 120

The --timeout flag sets how long Gunicorn waits for a worker to respond before killing it. Set this higher than your Nginx proxy_read_timeout to avoid confusing dual-timeout scenarios.
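A widely used starting point for --workers is (2 × CPU cores) + 1; treat it as a heuristic to tune under load, not a rule:

```shell
# Suggest a Gunicorn worker count from this machine's core count.
workers=$(( 2 * $(nproc) + 1 ))
echo "suggested: gunicorn app:app --workers $workers --timeout 120"
```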

For PHP-FPM, increase the pool size in /etc/php/8.2/fpm/pool.d/www.conf:

pm = dynamic
pm.max_children = 50
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 20

Check if PHP-FPM is running out of workers:

sudo grep "server reached pm.max_children" /var/log/php8.2-fpm.log

If you see that message, increase pm.max_children. Each PHP-FPM worker uses around 20-40 MB of memory, so calculate based on your available RAM.
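A rough way to turn that 20-40 MB figure into a pm.max_children value — both numbers below are assumptions to replace with your server's actual free RAM and a measured per-worker footprint:

```shell
# Back-of-envelope sizing. available_mb is RAM you can dedicate to PHP-FPM;
# per_worker_mb is an average worker footprint (assumed, not measured).
available_mb=2048
per_worker_mb=35
echo "suggested pm.max_children: $(( available_mb / per_worker_mb ))"
```

With these assumed numbers, the arithmetic suggests 58 workers.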

For Java/Tomcat apps, check the connector thread pool size in server.xml:

<Connector port="8080" maxThreads="200" connectionTimeout="20000" />

After adjusting worker counts, restart the upstream service and reload Nginx.

Fix 8: Check for DNS Resolution Delays

If your proxy_pass uses a domain name instead of an IP address, Nginx resolves it at startup and caches the result. But if you’re using a variable in proxy_pass, Nginx resolves the domain on every request, which can cause timeouts if DNS is slow:

# This resolves DNS on every request — can cause 504s if DNS is slow
location / {
    set $backend "http://backend.internal.example.com";
    proxy_pass $backend;
}

Fix this by specifying a resolver with a cache TTL:

location / {
    resolver 127.0.0.53 valid=30s;
    set $backend "http://backend.internal.example.com";
    proxy_pass $backend;
}

Or better, use an IP address directly if the upstream is on the same machine or a fixed internal IP:

proxy_pass http://127.0.0.1:3000;

Fix 9: Check for Firewall or Network Issues

A firewall between Nginx and the upstream can cause intermittent 504 errors. This is common in containerized environments or when Nginx and the backend are on different machines.

Check if Nginx can reach the upstream port:

curl -v http://127.0.0.1:3000/

If that works but Nginx still times out, check for firewall rules:

sudo iptables -L -n | grep 3000

On systems using firewalld:

sudo firewall-cmd --list-all

In Docker environments, make sure Nginx and the backend are on the same Docker network. If they’re in separate networks, Nginx can’t reach the backend container. If the upstream port shows as already bound or conflicting, the port 3000 conflict resolution guide can help you free it up.

Check for SELinux blocking network connections (common on RHEL/CentOS):

sudo setsebool -P httpd_can_network_connect 1

If SELinux is blocking Nginx from making outbound connections, the 504 will look identical to a timeout in the logs. If you’re running into broader Nginx permission issues, including SELinux-related blocks, the Nginx 403 Forbidden troubleshooting guide covers those scenarios in detail.

Common Mistake: In Kubernetes or Docker Compose setups, developers often use localhost or 127.0.0.1 in proxy_pass when the upstream is in a separate container. Containers have isolated network namespaces — localhost inside the Nginx container points to the Nginx container itself, not the backend. Use the container name or service name instead (e.g., proxy_pass http://backend:3000;).
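In a Compose or Kubernetes setup, the fix is to proxy to the service name. A minimal sketch, assuming a service named backend (hypothetical) listening on port 3000:

```nginx
location / {
    # "backend" is the Compose/Kubernetes service name (a placeholder);
    # localhost here would point back at the Nginx container itself.
    proxy_pass http://backend:3000;
}
```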

Still Not Working?

If you’ve tried everything above and still get 504 errors:

  • Check for buffering issues. Nginx buffers upstream responses by default. For very large responses, this can cause delays. Try disabling buffering: proxy_buffering off; in your location block. This forces Nginx to pass the response directly to the client.

  • Check the upstream’s own logs. The backend might be logging errors that explain the slow response — a slow database query, an external API call that hangs, or a disk I/O bottleneck.

  • Monitor connection counts. Run ss -tlnp | grep :3000 to check how many connections are queued on your upstream port. If the backlog is full, new connections from Nginx will time out.

  • Look at system-level limits. Check open file descriptors with ulimit -n. Each connection uses a file descriptor. If the limit is too low (default 1024 on many systems), both Nginx and your upstream can run out.

  • Test with a simple upstream. Replace your real backend with a basic HTTP server that responds instantly. If the 504 goes away, the problem is definitively in your application code, not Nginx.

  • Check if the issue is intermittent. Sporadic 504 errors under load usually indicate capacity problems — not enough workers, not enough memory, or a slow dependency like a database. Use tools like ab or wrk to load-test and find the breaking point.

  • Review your cloud provider’s load balancer. If there’s an AWS ALB, GCP Load Balancer, or Cloudflare proxy in front of Nginx, that layer has its own timeout settings. An upstream timeout at the load balancer level will also show as a 504 to the end user, even if Nginx itself is configured correctly.
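The "test with a simple upstream" step above can be as small as Python's built-in HTTP server. A sketch — it assumes port 3000 is free (stop your real backend first, or pick another port and adjust proxy_pass):

```shell
# Stand up a trivial upstream that answers instantly, then probe it directly.
python3 -m http.server 3000 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:3000/
kill "$srv"
```

If requests through Nginx now succeed, the 504 is coming from your application, not your proxy configuration.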


FixDevs

Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
