Fix: PostgreSQL "sorry, too many clients already"
Quick Answer
How to fix PostgreSQL 'sorry, too many clients already' error — checking active connections, using connection pooling with PgBouncer, tuning max_connections, fixing ORM pool settings, and finding connection leaks.
The Error
PostgreSQL refuses new connections with:
FATAL: sorry, too many clients already

Or in your application:
Error: sorry, too many clients already
at Connection.parseE (/app/node_modules/pg/lib/connection.js:604:11)
# Python/psycopg2
psycopg2.OperationalError: FATAL: sorry, too many clients already
# Java/JDBC
org.postgresql.util.PSQLException: FATAL: sorry, too many clients already

The application can’t open new database connections because PostgreSQL has reached its max_connections limit.
Why This Happens
PostgreSQL has a hard limit on concurrent connections set by the max_connections configuration parameter (default: 100). Each connection is a separate backend process that consumes roughly 5-10 MB of memory whether it’s idle or active.
- No connection pooling — each application instance opens its own pool of connections. With 5 servers × 20 connections each = 100 connections, you hit the limit before accounting for migrations, admin tools, or monitoring.
- Connection leaks — the application opens connections but doesn’t close them properly (missing finally blocks, forgotten client.release() calls).
- ORM pool misconfigured — Sequelize, TypeORM, or Prisma is configured with too many connections per process.
- Serverless/edge functions — each function invocation opens a new connection. With hundreds of concurrent invocations, connections explode.
- Long-running transactions — transactions that aren’t committed or rolled back hold connections open indefinitely.
- max_connections too low — the default of 100 is appropriate for small setups but insufficient for production with multiple services.
Step 1: Check Current Connection Count
Before changing anything, see the current state:
-- Total connections by state
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state
ORDER BY count DESC;
-- Who's connected and what they're running
SELECT
pid,
usename,
application_name,
client_addr,
state,
query_start,
state_change,
left(query, 80) AS query
FROM pg_stat_activity
WHERE datname = 'your_database_name'
ORDER BY query_start;
-- Connection count vs limit
SELECT count(*) AS current,
(SELECT setting::int FROM pg_settings WHERE name = 'max_connections') AS max
FROM pg_stat_activity;

If most connections are idle, you have a pooling problem — connections are open but not doing work. If many are idle in transaction, you have uncommitted transaction leaks.
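The same breakdown can be computed in application code for monitoring or alerting. A minimal TypeScript sketch, assuming you feed it rows fetched with any client library from `SELECT state FROM pg_stat_activity` (the function name is my own, not from any library):

```typescript
// Tally pg_stat_activity rows by state. Rows with a NULL state
// (background workers, walwriter, etc.) are grouped under "other".
function summarizeStates(rows: { state: string | null }[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const row of rows) {
    const key = row.state ?? 'other';
    counts[key] = (counts[key] ?? 0) + 1;
  }
  return counts;
}

// A mostly-idle result like this points at a pooling problem:
const sample = summarizeStates([
  { state: 'idle' },
  { state: 'idle' },
  { state: 'active' },
  { state: 'idle in transaction' },
]);
// sample → { idle: 2, active: 1, 'idle in transaction': 1 }
```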
Fix 1: Add a Connection Pooler (PgBouncer)
PgBouncer sits between your app and PostgreSQL, multiplexing many application connections into fewer actual database connections. This is the most scalable fix for production:
# Install on Ubuntu/Debian
sudo apt install pgbouncer
# Config file: /etc/pgbouncer/pgbouncer.ini

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb
[pgbouncer]
listen_port = 6432
listen_addr = 127.0.0.1
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
# Pool mode: transaction is most efficient for web apps
pool_mode = transaction

# Max clients that can connect to PgBouncer
max_client_conn = 1000

# Server connections PgBouncer opens per user+database pair
default_pool_size = 20

# userlist.txt — hashed passwords
"myuser" "md5<hash>"
# Generate hash
echo -n "passwordmyuser" | md5sum | awk '{print "md5" $1}'

Your application connects to PgBouncer on port 6432 instead of PostgreSQL on 5432. PgBouncer handles 1000 client connections using only 20 actual PostgreSQL connections.
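On the application side, the only change is the port. A hedged sketch in TypeScript (host, database, and user names are placeholders) that builds a config object you would pass to `new Pool(...)` from the `pg` package:

```typescript
interface DbConfig {
  host: string;
  port: number;
  database: string;
  user: string;
  max: number;
}

// Build a connection config that targets PgBouncer (6432), not PostgreSQL (5432).
function pgbouncerConfig(env: Record<string, string | undefined>): DbConfig {
  return {
    host: env.PGHOST ?? '127.0.0.1',
    port: Number(env.PGPORT ?? 6432), // PgBouncer's listen_port
    database: env.PGDATABASE ?? 'mydb',
    user: env.PGUSER ?? 'myuser',
    // Client-side cap per process; PgBouncer enforces the real server pool.
    max: 20,
  };
}
```

With this in place the application can open many more logical connections than PostgreSQL allows, because PgBouncer multiplexes them onto its small server pool.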
Pool modes:
| Mode | Description | Use case |
|---|---|---|
| session | One server connection per client session | Long-lived connections |
| transaction | Server connection held only during a transaction | Most web apps |
| statement | Released after each statement | Simple read workloads |
transaction mode is the most efficient for web applications — a server connection is held only for the duration of an active transaction.

Note: In transaction pool mode, PostgreSQL features that depend on session state (prepared statements, session-level SET, advisory locks, LISTEN/NOTIFY) don’t work reliably. Disable prepared statements in your ORM when using PgBouncer in transaction mode.
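With node-postgres, a query config that sets `name` creates a named server-side prepared statement, which is exactly what breaks under transaction pooling. One workaround sketch: strip `name` before executing so the driver falls back to unnamed statements (the helper below is hypothetical, not part of `pg`):

```typescript
interface QueryConfig {
  name?: string;
  text: string;
  values?: unknown[];
}

// Remove `name` so pg does not create a named prepared statement on the server.
function stripPreparedName(q: QueryConfig): QueryConfig {
  const { name, ...rest } = q;
  return rest;
}

const q = stripPreparedName({
  name: 'fetch-user',
  text: 'SELECT * FROM users WHERE id = $1',
  values: [1],
});
// q no longer carries `name`, so it is safe to run through PgBouncer
```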
Fix 2: Configure ORM Connection Pool Correctly
Many ORMs open too many connections by default. Configure the pool size based on your PostgreSQL max_connections:
Rule of thumb: pool_size per process = (max_connections - reserved) / num_app_instances
For example, with max_connections = 100, 10 reserved for admin tools, and 4 app instances: (100 - 10) / 4 = 22 connections per instance
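The rule of thumb above is simple enough to encode. A small TypeScript helper (the function name is my own, not from any library):

```typescript
// pool size per process = floor((max_connections - reserved) / app instances)
function poolSizePerInstance(
  maxConnections: number,
  reserved: number,
  instances: number,
): number {
  if (instances < 1) throw new Error('need at least one app instance');
  return Math.floor((maxConnections - reserved) / instances);
}

// The example from the text: (100 - 10) / 4 → 22 connections per instance
poolSizePerInstance(100, 10, 4); // → 22
```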
Prisma:
// schema.prisma
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
// Connection pool URL format:
// ?connection_limit=10&pool_timeout=30
}

// Or via DATABASE_URL
// DATABASE_URL="postgresql://user:pass@host/db?connection_limit=10&pool_timeout=30"
const prisma = new PrismaClient({
datasources: {
db: {
url: process.env.DATABASE_URL,
},
},
log: ['warn', 'error'],
});

Sequelize:
const sequelize = new Sequelize(DATABASE_URL, {
dialect: 'postgres',
pool: {
max: 10, // Maximum connections
min: 2, // Minimum connections kept open
acquire: 30000, // Max time to wait for a connection (ms)
idle: 10000, // Time before idle connection is closed (ms)
},
});

TypeORM:
createConnection({
type: 'postgres',
url: process.env.DATABASE_URL,
extra: {
max: 10, // Pool size
idleTimeoutMillis: 30000,
connectionTimeoutMillis: 5000,
},
});

node-postgres (pg) directly:
import { Pool } from 'pg';
const pool = new Pool({
connectionString: process.env.DATABASE_URL,
max: 10, // Max connections in pool
idleTimeoutMillis: 30000, // Close idle connections after 30s
connectionTimeoutMillis: 5000, // Fail if no connection available in 5s
});
// ALWAYS release connections back to the pool
async function query(sql: string, params?: any[]) {
const client = await pool.connect();
try {
return await client.query(sql, params);
} finally {
client.release(); // ← Critical: missing this causes connection leaks
}
}

Fix 3: Fix Connection Leaks
If connections are accumulating even with a pool configured, there’s a connection leak — code that acquires connections but doesn’t release them.
Find long-lived idle connections:
SELECT pid, usename, application_name, state,
now() - state_change AS idle_duration,
left(query, 80) AS last_query
FROM pg_stat_activity
WHERE state = 'idle'
AND state_change < now() - interval '10 minutes'
ORDER BY idle_duration DESC;

Kill leaked connections:
-- Kill a specific connection by PID
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE pid = 12345;
-- Kill all idle connections older than 30 minutes
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
AND state_change < now() - interval '30 minutes'
AND pid <> pg_backend_pid();

Common leak patterns in Node.js:
// WRONG — no finally block; connection leaked on error
async function badQuery() {
const client = await pool.connect();
const result = await client.query('SELECT ...'); // If this throws, release() is never called
client.release();
return result;
}
// CORRECT — release in finally
async function goodQuery() {
const client = await pool.connect();
try {
return await client.query('SELECT ...');
} finally {
client.release();
}
}
// SIMPLEST — use pool.query() directly (auto-releases)
async function simpleQuery() {
return pool.query('SELECT ...'); // Pool manages connect/release automatically
}

Fix 4: Handle Serverless Functions
In serverless environments (AWS Lambda, Vercel Functions, Cloudflare Workers), each invocation may create a new connection. With concurrent requests, this quickly exhausts max_connections.
Use a managed connection pooler — AWS RDS Proxy, Supabase Pooler (PgBouncer), or Neon’s built-in pooling.
Or reuse the pool across invocations (warm Lambda instances):
// db.ts — module-level pool, shared across invocations in the same Lambda instance
import { Pool } from 'pg';
let pool: Pool | null = null;
export function getPool(): Pool {
if (!pool) {
pool = new Pool({
connectionString: process.env.DATABASE_URL,
max: 2, // Keep small — multiple Lambdas run concurrently
idleTimeoutMillis: 30000,
});
}
return pool;
}

Use Prisma’s Accelerate or Data Proxy for serverless Prisma deployments — it acts as a connection pooler at the edge.
Fix 5: Increase max_connections
If you’ve implemented pooling and still hit the limit, increase max_connections in postgresql.conf:
# Find postgresql.conf
psql -c "SHOW config_file;"
# Edit the file
sudo nano /etc/postgresql/16/main/postgresql.conf

# Default is 100
max_connections = 200
# Increase shared_buffers as well if it is still at the default
# Roughly: shared_buffers = total_ram / 4
shared_buffers = 256MB

# Restart PostgreSQL (requires restart, not just reload)
sudo systemctl restart postgresql

Caveats: Each connection requires roughly 5-10 MB of memory. Increasing max_connections to 500 requires ~2.5-5 GB of RAM just for connection overhead. More connections also mean more lock contention and context switching. Pooling is almost always preferable to simply raising the limit.
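The memory caveat is easy to sanity-check with arithmetic. A throwaway TypeScript sketch, assuming the rough 5-10 MB per-connection figure used in this article:

```typescript
// Estimate RAM consumed by connection overhead alone, in MB.
function connectionOverheadMB(maxConnections: number, perConnectionMB: number): number {
  return maxConnections * perConnectionMB;
}

// max_connections = 500 at 5-10 MB each → 2500-5000 MB (~2.5-5 GB)
const low = connectionOverheadMB(500, 5);   // 2500
const high = connectionOverheadMB(500, 10); // 5000
```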
Still Not Working?
Check for long-running idle transactions:
SELECT pid, now() - xact_start AS duration, query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY duration DESC;

idle in transaction connections hold locks and prevent vacuuming. Set an idle-in-transaction timeout so PostgreSQL terminates stuck sessions automatically:
-- Set in postgresql.conf or per-session
SET idle_in_transaction_session_timeout = '5min';

Check reserved connections — PostgreSQL reserves superuser_reserved_connections (default: 3) of the max_connections slots for superuser access. With max_connections = 100, regular users can occupy at most 97 slots; the reserved slots exist so a superuser can still connect and diagnose the problem even when the server is otherwise full.
Monitor connection count continuously:
-- Create a view for easy monitoring
CREATE VIEW connection_stats AS
SELECT
max_conn,
used,
res_for_super,
max_conn - used - res_for_super AS available
FROM (
SELECT count(*) used FROM pg_stat_activity
) t1,
(SELECT setting::int AS max_conn FROM pg_settings WHERE name = 'max_connections') t2,
(SELECT setting::int AS res_for_super FROM pg_settings WHERE name = 'superuser_reserved_connections') t3;
SELECT * FROM connection_stats;

For related database issues, see Fix: Postgres Connection Refused and Fix: MySQL Too Many Connections.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
Related Articles
Fix: MySQL Replication Not Working — Replica Lag, Stopped Replication, or GTID Errors
How to fix MySQL replication issues — SHOW REPLICA STATUS errors, relay log corruption, GTID configuration, replication lag, skipping errors, and replica promotion.
Fix: PostgreSQL JSONB Query Not Working — Operator Errors, Wrong Results, or Slow Queries
How to fix PostgreSQL JSONB query issues — -> vs ->> operators, @> containment, GIN indexes, type casting, array queries, and indexing strategies for JSONB columns.
Fix: PostgreSQL Row Level Security Not Working — Policy Not Applied, All Rows Visible, or Permission Denied
How to fix PostgreSQL Row Level Security (RLS) issues — enabling RLS, policy expressions, BYPASSRLS role, SET ROLE, current_user vs session_user, and Supabase auth.uid() patterns.
Fix: Redis Cluster Not Working — MOVED, CROSSSLOT, or Connection Errors
How to fix Redis Cluster errors — MOVED redirects, CROSSSLOT multi-key operations, cluster-aware client setup, hash tags for key grouping, and failover handling.