Fix: ECONNRESET Socket Hang Up Error in Node.js
Quick Answer
How to fix the ECONNRESET and socket hang up errors in Node.js caused by server timeouts, keep-alive misconfiguration, proxy issues, large request bodies, SSL/TLS failures, and connection pool exhaustion.
The Error
You make an HTTP request in Node.js and get one of these:
Error: socket hang up
at connResetException (internal/errors.js:628:14)
at Socket.socketOnEnd (node:_http_client:524:23)
code: 'ECONNRESET'
Error: read ECONNRESET
at TCP.onStreamRead (node:internal/stream_base_commons:217:20)
errno: -4077,
code: 'ECONNRESET',
syscall: 'read'
Or inside an Express/Fastify server:
Error: write ECONNRESET
at WriteWrap.onWriteComplete [as oncomplete] (node:internal/stream_base_commons:94:16)
code: 'ECONNRESET',
syscall: 'write'
The connection drops mid-request. Your HTTP call fails, your server logs an unhandled error, or your API gateway returns a 502.
Why This Happens
ECONNRESET means the remote side (or a middlebox between you and the remote side) forcibly closed the TCP connection. The “socket hang up” variant means Node.js expected more data but the socket closed before the response was complete.
The most common causes:
Server timeout — The upstream server closed the connection because your request took too long. Node.js itself disables the server timeout by default (server.timeout = 0), but many frameworks and reverse proxies set their own.
Keep-alive mismatch — The client keeps a connection open, but the server closes idle connections sooner than the client expects. The next request on that “dead” connection gets ECONNRESET.
Proxy or load balancer timeout — An AWS ALB, Nginx reverse proxy, or Cloudflare edge closes the connection before the backend responds. The default idle timeout on AWS ALB is 60 seconds.
Request body too large — The server rejects the body before reading it entirely and closes the connection. Node.js sees a reset instead of a clean HTTP 413 response.
SSL/TLS handshake failure — Certificate mismatch, expired cert, unsupported TLS version, or a corporate proxy intercepting HTTPS traffic.
Connection pool exhaustion — The HTTP agent runs out of sockets. Queued requests time out or get reset when the pool cannot allocate a new connection.
DNS resolution flaps — The resolved IP changes between requests (rolling deploys, DNS failover), and the old connection gets reset by a server that no longer recognizes it.
Server crash or restart — The upstream process dies or restarts (deploy, OOM kill, uncaught exception) while your request is in flight.
Fix 1: Increase Server Timeout Settings
The most common cause is a timeout mismatch. If you control the Node.js server, increase the relevant timeouts:
const http = require('http');
const server = http.createServer((req, res) => {
// your handler
});
// Time the server waits for the entire request (headers + body)
server.requestTimeout = 300000; // 5 minutes (Node 18.0+)
// Time the server waits for headers only
server.headersTimeout = 60000; // 60 seconds
// Time to keep idle connections open
server.keepAliveTimeout = 65000; // 65 seconds
// Maximum time to wait between packets
server.timeout = 120000; // 2 minutes (0 = no timeout)
server.listen(3000);
In Express:
const app = express();
const server = app.listen(3000);
server.keepAliveTimeout = 65000;
server.headersTimeout = 66000; // Must be greater than keepAliveTimeout
server.requestTimeout = 300000;
The critical rule: headersTimeout must always be greater than keepAliveTimeout. If it is not, Node.js may close the connection before the keep-alive timeout fires, causing ECONNRESET on the client side.
If you are the client making the request and the upstream server times out, you need to either speed up the upstream response or increase the upstream timeout. You cannot fix a server-side timeout from the client.
Pro Tip: When running behind a reverse proxy like Nginx, set your Node.js keepAliveTimeout to be higher than the proxy’s proxy_read_timeout. If Nginx has a 60-second timeout, set Node.js to 65 seconds. This prevents the proxy from sending a request on a connection that Node.js just closed. See Nginx 504 Gateway Timeout for proxy timeout tuning.
Fix 2: Configure Keep-Alive Properly
Keep-alive connection reuse is a major source of ECONNRESET. The client thinks a connection is alive, sends a request, and the server has already closed it.
On the client side, use an http.Agent with explicit keep-alive settings:
const http = require('http');
const https = require('https');
const agent = new http.Agent({
keepAlive: true,
keepAliveMsecs: 30000, // Initial delay before TCP keep-alive probes (30s)
maxSockets: 50, // Max concurrent sockets per host
maxFreeSockets: 10, // Max idle sockets to keep in the pool
timeout: 60000, // Socket timeout
});
const options = {
hostname: 'api.example.com',
port: 80,
path: '/data',
agent: agent,
};
http.get(options, (res) => {
// handle response
});
With fetch (Node 18+):
const { Agent } = require('undici');
const agent = new Agent({
keepAliveTimeout: 30000,
keepAliveMaxTimeout: 60000,
connections: 50,
});
const response = await fetch('https://api.example.com/data', {
dispatcher: agent,
});
With axios:
const axios = require('axios');
const http = require('http');
const https = require('https');
const httpAgent = new http.Agent({ keepAlive: true, keepAliveMsecs: 30000 });
const httpsAgent = new https.Agent({ keepAlive: true, keepAliveMsecs: 30000 });
const client = axios.create({
httpAgent,
httpsAgent,
timeout: 30000,
});
On the server side, make sure the server’s keepAliveTimeout is longer than the client’s keep-alive interval. If the client sends keep-alive probes every 30 seconds, the server should keep connections open for at least 35 seconds.
Fix 3: Handle Proxy and Load Balancer Timeouts
A reverse proxy or load balancer between your client and server can close connections independently of both.
AWS Application Load Balancer (ALB):
The default idle timeout is 60 seconds. If your backend takes longer than 60 seconds to respond, the ALB closes the connection and the client gets ECONNRESET.
Fix it in the AWS Console or CLI:
aws elbv2 modify-load-balancer-attributes \
--load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789:loadbalancer/app/my-alb/abc123 \
--attributes Key=idle_timeout.timeout_seconds,Value=300
Nginx reverse proxy:
location /api/ {
proxy_pass http://backend;
proxy_connect_timeout 60s;
proxy_send_timeout 120s;
proxy_read_timeout 120s;
# Keep-alive to upstream
proxy_http_version 1.1;
proxy_set_header Connection "";
}
Without proxy_http_version 1.1 and clearing the Connection header, Nginx defaults to HTTP/1.0 with Connection: close for upstream requests, which disables keep-alive and causes more connection resets. For deeper Nginx timeout issues, see Nginx upstream timed out.
Cloudflare:
Cloudflare has a 100-second timeout for responses on free plans (enterprise plans can increase this). If your server takes longer, Cloudflare drops the connection. You cannot change this on the free plan — optimize your backend to respond faster, or use Cloudflare’s Cache-Control headers for long-running responses.
Fix 4: Fix Request Body Too Large
If the upstream server has a body size limit and you exceed it, the server may reset the connection before sending an HTTP 413 response. This is common with Nginx, Express, and API gateways.
Nginx — increase client_max_body_size:
http {
client_max_body_size 50M;
}
Express — increase the body parser limit:
const express = require('express');
const app = express();
app.use(express.json({ limit: '50mb' }));
app.use(express.urlencoded({ limit: '50mb', extended: true }));
Fastify:
const fastify = require('fastify')({
bodyLimit: 52428800, // 50 MB in bytes
});
If you are uploading files, consider streaming the upload instead of buffering the entire body in memory. This avoids body size limits and reduces memory usage. For more on Nginx body size errors, see Nginx 413 Request Entity Too Large.
Fix 5: Resolve SSL/TLS Issues
SSL/TLS problems cause ECONNRESET when the TLS handshake fails silently. The server rejects the connection at the transport layer, and Node.js reports it as a connection reset rather than a clear SSL error.
Check the certificate:
openssl s_client -connect api.example.com:443 -servername api.example.com
Look for Verify return code: 0 (ok). Anything else means the certificate chain is broken.
Common SSL fixes:
- Expired certificate — Renew the certificate. Check expiry with:
echo | openssl s_client -connect api.example.com:443 2>/dev/null | openssl x509 -noout -dates
- Self-signed certificate in development — Tell Node.js to trust it:
const https = require('https');
const fs = require('fs');
const agent = new https.Agent({
ca: fs.readFileSync('/path/to/custom-ca.pem'),
});
// Or for development only (never in production):
process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';
Warning: Setting NODE_TLS_REJECT_UNAUTHORIZED=0 disables all certificate validation. Use it only for local debugging, never in production.
- TLS version mismatch — The server requires TLS 1.2+ but your Node.js version defaults to an older version. Force a minimum version:
const https = require('https');
const agent = new https.Agent({
minVersion: 'TLSv1.2',
maxVersion: 'TLSv1.3',
});
- Corporate proxy intercepting HTTPS — Your corporate network uses a MITM proxy with its own CA. Add the corporate CA to Node.js:
export NODE_EXTRA_CA_CERTS=/path/to/corporate-ca.pem
node app.js
For a deeper dive into SSL certificate errors across tools, see SSL certificate problem: unable to get local issuer certificate.
Fix 6: Add Retry Logic
Network connections fail. Even after fixing the root cause, transient ECONNRESET errors will happen. Your code must handle them.
Basic retry with exponential backoff:
async function fetchWithRetry(url, options = {}, maxRetries = 3) {
for (let attempt = 0; attempt <= maxRetries; attempt++) {
try {
const response = await fetch(url, options);
return response;
} catch (err) {
const isRetryable = ['ECONNRESET', 'ETIMEDOUT', 'ECONNREFUSED', 'UND_ERR_SOCKET'].some(
code => err.message?.includes(code) || err.cause?.code === code
);
if (!isRetryable || attempt === maxRetries) {
throw err;
}
const delay = Math.min(1000 * Math.pow(2, attempt), 10000);
const jitter = Math.random() * 500;
console.warn(`Attempt ${attempt + 1} failed with ${err.code || err.message}. Retrying in ${delay + jitter}ms...`);
await new Promise(resolve => setTimeout(resolve, delay + jitter));
}
}
}
With axios:
const axios = require('axios');
const axiosRetry = require('axios-retry').default;
const client = axios.create({ timeout: 30000 });
axiosRetry(client, {
retries: 3,
retryDelay: axiosRetry.exponentialDelay,
retryCondition: (error) => {
return axiosRetry.isNetworkOrIdempotentRequestError(error)
|| error.code === 'ECONNRESET';
},
});
With got:
const got = require('got');
const response = await got('https://api.example.com/data', {
retry: {
limit: 3,
methods: ['GET', 'POST'],
errorCodes: ['ECONNRESET', 'ETIMEDOUT', 'ECONNREFUSED'],
calculateDelay: ({ attemptCount }) => attemptCount * 1000,
},
timeout: { request: 30000 },
});
Common Mistake: Retrying non-idempotent requests (like POST to create a resource) without an idempotency key can cause duplicate records. Only retry POST/PATCH/PUT requests if the API supports idempotency keys, or if the operation is safe to repeat.
Fix 7: Tune Connection Pooling
Connection pool misconfiguration causes ECONNRESET in two ways: too few sockets leads to queued requests timing out, and too many sockets overwhelms the server.
Check your current pool settings:
const http = require('http');
console.log(http.globalAgent.maxSockets); // Default: Infinity (Node 12+)
console.log(http.globalAgent.maxFreeSockets); // Default: 256
Set explicit pool limits:
const http = require('http');
const https = require('https');
// For all requests using the default agent
http.globalAgent.maxSockets = 100;
http.globalAgent.maxFreeSockets = 20;
http.globalAgent.keepAlive = true;
https.globalAgent.maxSockets = 100;
https.globalAgent.maxFreeSockets = 20;
https.globalAgent.keepAlive = true;
Per-host pooling with undici (Node 18+ built-in fetch):
const { Pool } = require('undici');
const pool = new Pool('https://api.example.com', {
connections: 50, // Max connections
pipelining: 1, // HTTP pipelining depth
keepAliveTimeout: 30000, // Idle timeout
keepAliveMaxTimeout: 60000,
connect: {
timeout: 10000, // Connection timeout
},
});
const { statusCode, body } = await pool.request({
path: '/data',
method: 'GET',
});
Database connection pools can also cause ECONNRESET. If you use a database driver that maintains a TCP connection pool (like pg, mysql2, or mongoose), idle connections get reset by firewalls or the database server itself:
// PostgreSQL with pg
const { Pool } = require('pg');
const pool = new Pool({
max: 20,
idleTimeoutMillis: 30000,
connectionTimeoutMillis: 5000,
keepAlive: true,
keepAliveInitialDelayMillis: 10000,
});
For MongoDB-specific connection issues, see MongoDB connect ECONNREFUSED.
Fix 8: Fix DNS Resolution Issues
DNS changes during active connections cause ECONNRESET. This happens during rolling deployments, DNS failovers, or when using short TTL records.
The problem: Node.js caches DNS results by default when using keep-alive connections. If the IP changes, existing connections still point to the old IP. The old server resets those connections.
Fix 1 — Disable DNS caching in the HTTP agent:
const http = require('http');
const dns = require('dns');
const agent = new http.Agent({
keepAlive: true,
keepAliveMsecs: 30000,
// Force DNS lookup for each new connection
lookup: (hostname, options, callback) => {
dns.resolve4(hostname, (err, addresses) => {
if (err) return callback(err);
callback(null, addresses[0], 4);
});
},
});
Fix 2 — Use cacheable-lookup with a short TTL:
const CacheableLookup = require('cacheable-lookup');
const http = require('http');
const cacheable = new CacheableLookup({
maxTtl: 30, // Cache DNS for 30 seconds max
});
cacheable.install(http.globalAgent);
Fix 3 — Handle DNS-related resets gracefully:
If you are connecting to a service behind a load balancer that rotates IPs, reduce keepAliveTimeout so connections recycle more frequently. A 30-second timeout ensures DNS changes propagate within 30 seconds:
const agent = new http.Agent({
keepAlive: true,
keepAliveMsecs: 10000,
maxFreeSockets: 10,
timeout: 30000,
scheduling: 'lifo', // Prefer recently used sockets (less likely to be stale)
});
The scheduling: 'lifo' option (last-in, first-out) makes Node.js prefer the most recently used socket. This reduces the chance of using a socket that has been idle long enough for the DNS record to change.
Debugging ECONNRESET
If the fixes above do not resolve the issue, debug further:
Enable Node.js network tracing:
NODE_DEBUG=http,net,tls node app.js
This logs every socket event: creation, data, close, error, and timeout. Look for the socket closing unexpectedly.
Capture the full error context:
process.on('uncaughtException', (err) => {
console.error('Uncaught:', err.message, err.code, err.syscall);
console.error('Stack:', err.stack);
// Decide whether to exit or continue
});
// For HTTP servers, handle client errors
server.on('clientError', (err, socket) => {
console.error('Client error:', err.message, err.code);
if (socket.writable) {
socket.end('HTTP/1.1 400 Bad Request\r\n\r\n');
}
});
Use tcpdump or Wireshark:
# Capture traffic on port 443 to see who sends the RST packet
sudo tcpdump -i any port 443 -w capture.pcap
Open capture.pcap in Wireshark and filter for tcp.flags.reset == 1. This tells you exactly which side (client, server, or middlebox) sent the RST packet.
If you see connection issues specifically on localhost during development, check ERR_CONNECTION_REFUSED localhost for common local networking pitfalls.
Still Not Working?
If ECONNRESET persists after trying all fixes above:
Check firewall rules. Corporate firewalls, AWS security groups, and iptables rules can kill long-lived connections. Some firewalls drop idle TCP connections after a set period (often 300-900 seconds) without sending a RST. Enable TCP keep-alive probes to prevent this.
Check for memory leaks. A Node.js process running out of memory may fail to process incoming data fast enough, causing the OS to reset connections. Monitor RSS and heap usage with process.memoryUsage().
Check server-side logs. The ECONNRESET on the client only tells you the connection was reset. The server logs (or the proxy/load balancer logs) will tell you why. Check Nginx error logs (/var/log/nginx/error.log), PM2 logs, or the upstream application logs.
Check for connection limits. Linux defaults to a maximum of 1024 open file descriptors per process. If your Node.js process opens more connections than this, new connections fail. Increase the limit:
# Check current limit
ulimit -n
# Increase for the current session
ulimit -n 65535
# Permanent: add to /etc/security/limits.conf
* soft nofile 65535
* hard nofile 65535
Test with a different HTTP client. If axios fails but curl works, the problem is in your client configuration, not the server. Test with:
curl -v --keepalive-time 30 https://api.example.com/data
Upgrade Node.js. Several ECONNRESET bugs have been fixed across Node.js versions, particularly in the undici HTTP client used by fetch. If you are on Node 18.x, upgrade to the latest 18.x patch. If possible, move to Node 20 or 22 LTS.
Check for race conditions in your code. If you close a socket or abort a request while another part of your code is still writing to it, you get ECONNRESET. Use AbortController consistently and never write to a socket after calling .destroy() or .end().
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
How to fix 'ERR_CONNECTION_REFUSED', 'localhost refused to connect', and 'This site can't be reached' errors when accessing localhost in Chrome, Firefox, and Edge. Covers dev servers, port issues, 0.0.0.0 vs 127.0.0.1, Docker port mapping, WSL2, firewalls, and more.