Fix: Express Rate Limit Not Working — express-rate-limit Requests Not Throttled
Quick Answer
How to fix Express rate limiting not working — middleware order, trust proxy for reverse proxies, IP detection, store configuration, custom key generation, and bypassing issues.
The Problem
express-rate-limit middleware is configured but requests aren’t being throttled:
const rateLimit = require('express-rate-limit');
const limiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100,
});
app.use(limiter);
// All requests go through — no 429 responses after limit exceeded

Or the limit is applied but every request appears to come from the same IP:
// All requests rate-limited together even from different users
// X-Forwarded-For: 10.0.0.1 (internal load balancer IP)
// actual client IPs are lost

Or rate limiting works in development but not in production (behind nginx/load balancer):
// Request headers in production:
// X-Forwarded-For: 10.0.0.1
// Remote address: 172.31.0.5 (internal LB IP)
// All clients share one rate limit bucket

Or the rate limit resets on every server restart:
// In-memory store (default) — resets when process restarts
// In a multi-process or multi-instance deployment, different instances
// don't share rate limit state — each tracks limits independently

Why This Happens
express-rate-limit identifies clients by their IP address by default. Several things cause it to malfunction:
- Wrong IP detection — behind a reverse proxy (nginx, AWS ALB, Cloudflare), the actual client IP is in the X-Forwarded-For header, not req.ip. Without app.set('trust proxy', 1), all clients appear to share the proxy's IP.
- Middleware registered after routes — Express applies middleware in registration order. A rate limiter registered after a route definition doesn't protect that route.
- Default in-memory store — the memory store is per-process. In clustered Node.js or multi-instance deployments, each process tracks limits independently. A user can exceed the limit N times where N is the number of instances.
- skip or keyGenerator misconfigured — custom skip functions returning true bypass limiting for all requests, and bad keyGenerator functions give all clients the same key.
- max: 0 or disabled — max: 0 in express-rate-limit v6+ means "no limit" (was "block all" in earlier versions). Check your version's behavior.
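The multi-process pitfall is easy to reproduce outside Express. Here is a minimal sketch of a fixed-window counter in the spirit of the default memory store (the class name and structure are illustrative, not the library's internals):

```javascript
// Simplified fixed-window counter, roughly what an in-memory store does.
class FixedWindowCounter {
  constructor(windowMs, max) {
    this.windowMs = windowMs;
    this.max = max;
    this.hits = new Map(); // key -> { count, resetAt }
  }

  // Returns true if the request is allowed, false once the limit is hit
  hit(key, now = Date.now()) {
    const entry = this.hits.get(key);
    if (!entry || now >= entry.resetAt) {
      this.hits.set(key, { count: 1, resetAt: now + this.windowMs });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.max;
  }
}

// Two processes each hold their own Map, so together they let the same
// client through 2 * max times per window:
const instanceA = new FixedWindowCounter(60_000, 2);
const instanceB = new FixedWindowCounter(60_000, 2);
const allowed = [];
for (let i = 0; i < 3; i++) allowed.push(instanceA.hit('client-1'));
for (let i = 0; i < 3; i++) allowed.push(instanceB.hit('client-1'));
console.log(allowed); // [ true, true, false, true, true, false ]
```

With a shared store (Fix 3 below), both instances increment the same counter and the combined limit holds.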
Fix 1: Configure Trust Proxy Correctly
This is the most common production issue. Behind a reverse proxy, tell Express to trust the X-Forwarded-For header:
const express = require('express');
const rateLimit = require('express-rate-limit');
const app = express();
// CRITICAL for deployments behind nginx, ALB, Cloudflare, etc.
// This tells Express to use X-Forwarded-For as the client IP
app.set('trust proxy', 1);
// '1' = trust first proxy in the chain (most common)
// 'loopback' = trust loopback addresses (127.0.0.1, ::1)
// true = trust ALL proxies (not recommended — spoofable)
// number = trust N hops of proxies
// Apply rate limiter AFTER setting trust proxy
const limiter = rateLimit({
windowMs: 15 * 60 * 1000,
max: 100,
standardHeaders: true, // Include RateLimit-* headers in responses
legacyHeaders: false, // Disable X-RateLimit-* deprecated headers
});
app.use(limiter);

Verify the correct IP is being detected:
// Temporary debug route — add before limiter in development
app.use((req, res, next) => {
console.log('Client IP:', req.ip);
console.log('X-Forwarded-For:', req.headers['x-forwarded-for']);
console.log('Remote address:', req.socket.remoteAddress);
next();
});

If clients still share an IP — your proxy may not be setting X-Forwarded-For. Add it in nginx:
location / {
proxy_pass http://localhost:3000;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
}

Fix 2: Apply Middleware in the Correct Order
Express applies middleware in registration order. Rate limiters must be registered before routes:
// WRONG — rate limiter registered after the route
app.get('/api/data', (req, res) => {
res.json({ data: 'response' });
});
app.use(limiter); // Never runs for /api/data — route matched first
// CORRECT — rate limiter before routes
app.use(limiter); // Applies to all routes below
app.get('/api/data', (req, res) => {
res.json({ data: 'response' });
});

Apply different limits to different route groups:
const rateLimit = require('express-rate-limit');
// Strict limit for auth endpoints (prevent brute force)
const authLimiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 10, // 10 attempts per window
message: { error: 'Too many login attempts. Try again in 15 minutes.' },
skipSuccessfulRequests: true, // Don't count successful logins
});
// General API limit
const apiLimiter = rateLimit({
windowMs: 60 * 1000, // 1 minute
max: 60, // 60 requests per minute
});
// Expensive operations
const heavyLimiter = rateLimit({
windowMs: 60 * 1000,
max: 5,
message: { error: 'Rate limit exceeded for this endpoint.' },
});
// Apply per-route
app.use('/api/', apiLimiter); // All /api/* routes
app.use('/auth/login', authLimiter); // Login endpoint
app.use('/auth/register', authLimiter); // Registration
app.use('/api/export', heavyLimiter); // CSV/report export

Note that /api/export matches both path prefixes, so requests to it pass through apiLimiter first and then heavyLimiter.

Fix 3: Use a Shared Store for Multi-Instance Deployments
The default in-memory store doesn’t work across multiple processes or servers:
# Redis store for distributed rate limiting
npm install rate-limit-redis ioredis

const rateLimit = require('express-rate-limit');
const RedisStore = require('rate-limit-redis');
const Redis = require('ioredis');
const redis = new Redis({
host: process.env.REDIS_HOST || 'localhost',
port: parseInt(process.env.REDIS_PORT || '6379'),
// Connection pooling for high-traffic apps
maxRetriesPerRequest: 3,
});
const limiter = rateLimit({
windowMs: 15 * 60 * 1000,
max: 100,
standardHeaders: true,
legacyHeaders: false,
// Redis store — shared across all instances
store: new RedisStore({
sendCommand: (...args) => redis.call(...args),
prefix: 'rate_limit:', // Redis key prefix
}),
});
app.use(limiter);

Memcached store alternative:
npm install rate-limit-memcached

const MemcachedStore = require('rate-limit-memcached');
const limiter = rateLimit({
store: new MemcachedStore({
locations: ['localhost:11211'],
prefix: 'rl:',
}),
});

Fix 4: Custom Key Generators
Rate limit by user ID, API key, or a combination instead of raw IP:
const limiter = rateLimit({
windowMs: 60 * 1000,
max: 100,
// Rate limit authenticated users by their user ID
// Rate limit unauthenticated requests by IP
keyGenerator: (req) => {
if (req.user?.id) {
return `user:${req.user.id}`; // Authenticated — by user ID
}
return `ip:${req.ip}`; // Anonymous — by IP
},
// Skip rate limiting for internal services
skip: (req) => {
const apiKey = req.headers['x-api-key'];
return apiKey === process.env.INTERNAL_API_KEY;
},
});
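A quick way to see why keyGenerator matters: if it collapses every request to one key, all clients share a single bucket. A small sketch with plain counters (illustrative only, not express-rate-limit internals):

```javascript
// Illustrative per-key counter standing in for the limiter's store.
const counts = new Map();
const hit = (key, max) => {
  const n = (counts.get(key) || 0) + 1;
  counts.set(key, n);
  return n <= max;
};

const badKey = (req) => 'global';        // every client maps to one bucket
const goodKey = (req) => `ip:${req.ip}`; // per-client buckets

const reqA = { ip: '203.0.113.1' };
const reqB = { ip: '203.0.113.2' };

console.log(hit(badKey(reqA), 2), hit(badKey(reqB), 2), hit(badKey(reqA), 2));
// true true false: client B burned client A's quota
counts.clear();
console.log(hit(goodKey(reqA), 2), hit(goodKey(reqB), 2), hit(goodKey(reqA), 2));
// true true true: independent buckets
```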
// API key-based rate limiting
const apiKeyLimiter = rateLimit({
windowMs: 60 * 1000,
max: 1000,
keyGenerator: (req) => {
// Rate limit by API key — allows different limits per tier later
return req.headers['x-api-key'] || req.ip;
},
// Dynamic max based on the request context:
// in express-rate-limit v6+, max (renamed to limit in v7) can also be
// a function (req, res) => number, enabling per-tier limits
});

Rate limit by endpoint + IP combination:
const limiter = rateLimit({
windowMs: 60 * 1000,
max: 10,
// Different buckets for different endpoints
keyGenerator: (req) => {
return `${req.ip}:${req.path}`;
// e.g., "203.0.113.1:/api/login" and "203.0.113.1:/api/data" are separate buckets
},
});

Fix 5: Handle Rate Limit Responses
Customize the response when the limit is exceeded:
const limiter = rateLimit({
windowMs: 15 * 60 * 1000,
max: 100,
standardHeaders: true, // Sends: RateLimit-Limit, RateLimit-Remaining, RateLimit-Reset
legacyHeaders: false,
// Custom response when limit exceeded
handler: (req, res, next, options) => {
const retryAfter = Math.ceil(options.windowMs / 1000);
res.status(options.statusCode).json({
error: 'Rate limit exceeded',
message: `Too many requests. Try again in ${retryAfter} seconds.`,
retryAfter,
});
},
// Or just set the message
message: {
status: 429,
error: 'Too many requests',
retryAfter: 900, // Seconds until window resets
},
statusCode: 429, // Default is 429
});

Client-side — read and respect rate limit headers:
// Frontend code — check rate limit headers
async function apiRequest(url) {
const response = await fetch(url);
if (response.status === 429) {
const retryAfter = response.headers.get('Retry-After');
const resetTime = response.headers.get('RateLimit-Reset');
throw new RateLimitError(
`Rate limited. Retry after ${retryAfter} seconds.`,
parseInt(retryAfter || '60')
);
}
return response.json();
}

Fix 6: Whitelist Trusted IPs
Skip rate limiting for monitoring services, health checks, or internal IPs:
const limiter = rateLimit({
windowMs: 60 * 1000,
max: 100,
skip: (req) => {
const trustedIPs = [
'127.0.0.1', // Localhost
'10.0.0.0/8', // Internal network (needs IP range check)
'::1', // IPv6 localhost
];
// Simple IP check (use a library like 'ip-range-check' for CIDR)
return trustedIPs.includes(req.ip);
},
});
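The '10.0.0.0/8' entry above never matches with includes(), which compares exact strings. A minimal IPv4-only CIDR check looks like this (a sketch; for IPv6 and edge cases, use a library such as ip-range-check or ipaddr.js):

```javascript
// Convert dotted-quad IPv4 to an unsigned 32-bit integer.
function ipToLong(ip) {
  return ip.split('.').reduce((acc, oct) => (acc << 8) + parseInt(oct, 10), 0) >>> 0;
}

// True if `ip` falls inside the CIDR range (e.g. '10.0.0.0/8').
function inCidr(ip, cidr) {
  const [range, bitsStr] = cidr.split('/');
  const bits = parseInt(bitsStr ?? '32', 10);
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return (ipToLong(ip) & mask) === (ipToLong(range) & mask);
}

console.log(inCidr('10.4.5.6', '10.0.0.0/8'));    // true
console.log(inCidr('11.0.0.1', '10.0.0.0/8'));    // false
console.log(inCidr('127.0.0.1', '127.0.0.1/32')); // true
```

Inside the skip function, replace the includes() call with a check like trustedRanges.some((cidr) => inCidr(req.ip, cidr)).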
// Skip for health check endpoints
const appLimiter = rateLimit({
windowMs: 60 * 1000,
max: 100,
skip: (req) => req.path === '/health' || req.path === '/ready',
});

Fix 7: Debug Rate Limiting Issues
When rate limiting isn’t working as expected:
const limiter = rateLimit({
windowMs: 60 * 1000,
max: 5, // Low number for testing
standardHeaders: true,
// Log every request for debugging
keyGenerator: (req) => {
const key = req.ip;
console.log(`Rate limit key: ${key}, Path: ${req.path}`);
return key;
},
handler: (req, res, next, options) => {
console.log(`Rate limit exceeded: ${req.ip} on ${req.path}`);
res.status(429).json({ error: 'Rate limit exceeded' });
},
// Log skip decisions
skip: (req) => {
const skipped = req.path === '/health';
if (skipped) console.log(`Skipping rate limit for: ${req.path}`);
return skipped;
},
});
// Test rate limiting manually
// curl -v http://localhost:3000/api/data
// Look for headers:
// RateLimit-Limit: 5
// RateLimit-Remaining: 4
// RateLimit-Reset: 60   (seconds until the window resets, per draft-6)

Check the response headers to verify the limiter is active:
# Send 6 requests — 6th should return 429
for i in {1..6}; do
echo "Request $i:"
curl -s -o /dev/null -w "%{http_code}\n" \
-H "X-Forwarded-For: 192.168.1.100" \
http://localhost:3000/api/endpoint
done
# Expected: 200 200 200 200 200 429

Still Not Working?
express-rate-limit version differences — v6 changed the default max behavior (0 now means unlimited instead of block-all). v7 changed header names. Check the changelog for your version.
Multiple limiter instances sharing state — if you create two rateLimit() instances without specifying different Redis key prefixes, they share the same counters. Use unique prefix values for each limiter.
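The collision is mechanical: two limiters writing under the same key prefix increment the same counter. A sketch with a Map standing in for Redis (illustrative only, not the store's real API):

```javascript
// Each limiter builds its keys as prefix + client key.
function makeLimiter(store, prefix, max) {
  return (clientKey) => {
    const key = prefix + clientKey;
    const count = (store.get(key) || 0) + 1;
    store.set(key, count);
    return count <= max; // false once over the limit
  };
}

const sharedRedis = new Map();
// Both limiters accidentally use the same 'rl:' prefix:
const limiterA = makeLimiter(sharedRedis, 'rl:', 3);
const limiterB = makeLimiter(sharedRedis, 'rl:', 3);

limiterA('1.2.3.4'); // rl:1.2.3.4 -> 1
limiterA('1.2.3.4'); // rl:1.2.3.4 -> 2
console.log(limiterB('1.2.3.4')); // true  (shared counter now at 3)
console.log(limiterB('1.2.3.4')); // false (blocked by the other limiter's traffic)

// With distinct prefixes the counters are independent:
const freshRedis = new Map();
const authL = makeLimiter(freshRedis, 'rl:auth:', 3);
const apiL = makeLimiter(freshRedis, 'rl:api:', 3);
authL('1.2.3.4'); authL('1.2.3.4'); authL('1.2.3.4');
console.log(apiL('1.2.3.4')); // true, unaffected
```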
Reverse proxy headers not being forwarded — AWS ALB, Cloudflare, and other proxies may strip or rename forwarded headers. Verify with a debug endpoint that logs all headers and check that X-Forwarded-For contains the actual client IP.
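For that debug endpoint, a small helper that extracts just the forwarding-related headers keeps logs readable. A sketch (the header list covers common conventions; cf-connecting-ip is Cloudflare-specific, and your proxy may use others):

```javascript
// Pull out the headers that matter for client-IP detection.
function forwardedHeaders(headers) {
  const interesting = [
    'x-forwarded-for', 'x-real-ip', 'x-forwarded-proto',
    'forwarded', 'cf-connecting-ip',
  ];
  return Object.fromEntries(
    interesting.filter((name) => name in headers).map((name) => [name, headers[name]])
  );
}

// In Express, something like:
//   app.get('/debug-ip', (req, res) => res.json(forwardedHeaders(req.headers)));
console.log(forwardedHeaders({
  'x-forwarded-for': '203.0.113.9, 10.0.0.1',
  'x-real-ip': '203.0.113.9',
  host: 'example.com',
  accept: '*/*',
}));
// { 'x-forwarded-for': '203.0.113.9, 10.0.0.1', 'x-real-ip': '203.0.113.9' }
```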
Rate limiting and CORS preflight — browser CORS preflight requests (OPTIONS) count toward rate limits. Consider skipping rate limiting for OPTIONS requests if this causes issues:
skip: (req) => req.method === 'OPTIONS',

For related security issues, see Fix: Express CORS Error and Fix: Node.js Uncaught Exception.