Fix: Prisma Connection Pool Exhausted — Can't Acquire Connection from Pool
Quick Answer
How to fix Prisma connection pool errors — pool size configuration, connection leaks, serverless deployments, singleton pattern, query timeout, and pgBouncer integration.
The Problem
Prisma throws a connection pool error under load:
PrismaClientKnownRequestError:
Timed out fetching a new connection from the connection pool.
More info: http://pris.ly/d/connection-pool
(The pool timeout of 10s expired. Either increase the pool timeout or
increase the `connection_limit`)

Or a serverless function runs out of database connections:
Error: Can't reach database server at `db.example.com:5432`
Please make sure your database server is running at `db.example.com:5432`.

Or in development, connections accumulate and the database hits its max limit:
FATAL: sorry, too many clients already
(PostgreSQL max_connections: 100, current: 100)

Or after deploying to Vercel/AWS Lambda, each function invocation opens new connections:
# 50 concurrent Lambda invocations × 10 connections each = 500 connections
# PostgreSQL max_connections = 100 → immediate exhaustion

Why This Happens
Prisma maintains a connection pool per PrismaClient instance. Each instance holds open database connections ready for queries. Connection pool exhaustion happens when:
- Too many PrismaClient instances — each `new PrismaClient()` creates its own pool. In serverless environments, every cold start creates a new instance with a new pool of connections.
- Pool too small for concurrency — the default pool size (`connection_limit`) may be too small for the application’s concurrency level.
- Connection leaks — queries that throw errors inside transactions may not release connections back to the pool if not handled correctly.
- Long-running transactions — transactions hold connections for their entire duration. Slow queries block connection release.
- Serverless cold starts — each Lambda/Vercel function invocation is a separate process. Without a connection pooler, each invocation opens a fresh pool regardless of other invocations.
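The serverless failure mode is simple arithmetic: total connections = concurrent invocations × pool size per invocation. A minimal sketch of that check (the function name and numbers are illustrative, not part of any Prisma API):

```typescript
// Rough check: will concurrent serverless invocations exhaust the database?
// Each invocation runs its own PrismaClient, so pools do not share connections.
function wouldExhaustConnections(
  concurrentInvocations: number,
  connectionLimitPerInvocation: number,
  maxDbConnections: number,
): boolean {
  const totalConnections = concurrentInvocations * connectionLimitPerInvocation;
  return totalConnections > maxDbConnections;
}

// The scenario above: 50 invocations × 10 connections = 500 > 100
console.log(wouldExhaustConnections(50, 10, 100)); // true — pool exhausted
// With connection_limit=1 per invocation: 50 × 1 = 50 ≤ 100
console.log(wouldExhaustConnections(50, 1, 100)); // false
```

This is why Fix 3 below pushes pooling out of the function (PgBouncer, Accelerate) or shrinks `connection_limit` per invocation.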
Fix 1: Use a Singleton PrismaClient
The most important fix — create one PrismaClient instance shared across the application:
// lib/prisma.ts — singleton pattern
import { PrismaClient } from '@prisma/client';
const globalForPrisma = global as unknown as { prisma: PrismaClient };
export const prisma =
globalForPrisma.prisma ??
new PrismaClient({
log: process.env.NODE_ENV === 'development'
? ['query', 'error', 'warn']
: ['error'],
});
if (process.env.NODE_ENV !== 'production') {
// In development, Next.js hot reloads clear module cache
// Store on global to survive hot reload without creating new instances
globalForPrisma.prisma = prisma;
}
export default prisma;

// usage — always import the singleton
import { prisma } from '@/lib/prisma';
export async function getUsers() {
return prisma.user.findMany();
}

Verify only one instance exists:
// Add this to diagnose multiple instances during development
// Store the counter on globalThis — a module-level counter resets to 0
// each time the module is re-evaluated and would always log #1
const g = globalThis as unknown as { prismaInstanceCount?: number };
export const prisma = new PrismaClient({
  log: [{ emit: 'event', level: 'query' }],
});
g.prismaInstanceCount = (g.prismaInstanceCount ?? 0) + 1;
console.log(`PrismaClient instance #${g.prismaInstanceCount} created`);
// If you see a number above 1, you have multiple instances

Fix 2: Configure Pool Size
Tune the connection pool size to match your workload and database limits:
// Pool size is configured through the datasource URL —
// PrismaClient has no separate pool-size constructor option
export const prisma = new PrismaClient({
  datasources: {
    db: {
      url: process.env.DATABASE_URL,
    },
  },
});

# Set pool size via DATABASE_URL connection string parameters:
# ?connection_limit=5&pool_timeout=20
DATABASE_URL="postgresql://user:pass@host:5432/db?connection_limit=5&pool_timeout=20"

# Connection string parameters:
# connection_limit — max connections in the pool (default: num_physical_cpus * 2 + 1)
# pool_timeout — seconds to wait for a connection before timing out (default: 10)
# connect_timeout — seconds to wait for initial connection (default: 5)
# socket_timeout — seconds to wait for query result (default: none)

Calculate the right pool size:
Recommended formula for shared databases:
connection_limit = (max_db_connections / num_app_instances) - reserved_for_migrations
Example:
PostgreSQL max_connections = 100
App instances = 4
Reserved = 5 (for migrations, admin, monitoring)
Per instance: (100 - 5) / 4 = ~23 connections

// Dynamically set based on environment
const connectionLimit = process.env.NODE_ENV === 'production'
? parseInt(process.env.DB_POOL_SIZE ?? '10')
: 5; // Smaller pool in development
const databaseUrl = `${process.env.DATABASE_URL}?connection_limit=${connectionLimit}&pool_timeout=30`;
export const prisma = new PrismaClient({
datasources: { db: { url: databaseUrl } },
});

Fix 3: Fix Serverless Deployments
Serverless functions create a new process per invocation, each with its own connection pool. Use an external connection pooler:
Option 1 — PgBouncer (for PostgreSQL):
# Connect Prisma to PgBouncer instead of PostgreSQL directly
# PgBouncer maintains a persistent pool, functions connect to it
DATABASE_URL="postgresql://user:pass@pgbouncer-host:6432/db?pgbouncer=true&connection_limit=1"
# connection_limit=1 because PgBouncer handles pooling externally
# pgbouncer=true disables Prisma features incompatible with transaction pooling mode

// With pgbouncer=true, Prisma disables:
// - Prepared statements (incompatible with transaction pooling mode)
// - Some advisory lock features
// This is expected behavior

Option 2 — Prisma Accelerate (Prisma’s hosted connection pooler):
# Replace DATABASE_URL with Prisma Accelerate URL
DATABASE_URL="prisma://accelerate.prisma-data.net/?api_key=YOUR_KEY"
# No code changes needed — Prisma Accelerate handles connection pooling
# Supports all Prisma features

Option 3 — Reduce pool size per function:
// serverless function (Lambda, Vercel, Cloudflare Workers)
// Set connection_limit=1 or 2 — fewer connections per invocation
// The database handles more concurrent connections with lower per-invocation usage
const prisma = new PrismaClient({
datasources: {
db: {
url: `${process.env.DATABASE_URL}?connection_limit=1&connect_timeout=15`,
},
},
});

Option 4 — Reuse PrismaClient across warm invocations:
// Lambda — keep PrismaClient alive between warm invocations
import { PrismaClient } from '@prisma/client';
import type { APIGatewayEvent } from 'aws-lambda'; // from @types/aws-lambda
let prismaInstance: PrismaClient | null = null;
function getPrismaClient(): PrismaClient {
if (!prismaInstance) {
prismaInstance = new PrismaClient();
}
return prismaInstance;
}
export async function handler(event: APIGatewayEvent) {
const prisma = getPrismaClient(); // Reuses existing instance on warm start
// ...
}

Fix 4: Prevent Connection Leaks
Connections leak when they’re not returned to the pool after use:
// Transactions — errors thrown inside $transaction are handled safely:
// Prisma rolls back automatically and returns the connection to the pool
async function createUserWithProfile(data: CreateUserData) {
  await prisma.$transaction(async (tx) => {
    const user = await tx.user.create({ data: data.user });
    throw new Error('Oops'); // Transaction rolled back, connection released — OK
  });
}
// Long-running transactions — avoid holding connections too long
// WRONG — fetching data inside transaction when it could be outside
async function processLargeDataset() {
await prisma.$transaction(async (tx) => {
const items = await tx.item.findMany(); // Holds connection while processing
for (const item of items) {
await expensiveExternalApiCall(item); // External call inside transaction!
await tx.item.update({ where: { id: item.id }, data: { processed: true } });
}
});
}
// CORRECT — do external work outside the transaction
async function processLargeDataset() {
const items = await prisma.item.findMany({ where: { processed: false } });
for (const item of items) {
const result = await expensiveExternalApiCall(item); // Outside transaction
// Short transaction just for the update
await prisma.item.update({
where: { id: item.id },
data: { processed: true, result },
});
}
}

Explicit disconnect in scripts and tests:
// CLI scripts and one-off jobs — disconnect when done
async function main() {
try {
await runMigration();
} finally {
await prisma.$disconnect(); // Always disconnect when script ends
}
}
main().catch(console.error);
// Jest — disconnect after all tests
afterAll(async () => {
await prisma.$disconnect();
});

Fix 5: Monitor Connection Pool Usage
Track active connections to diagnose pool exhaustion:
// Enable event-based logging on PrismaClient
export const prisma = new PrismaClient({
log: [
{ emit: 'event', level: 'query' },
{ emit: 'event', level: 'error' },
],
});
// Log slow queries (> 1 second)
prisma.$on('query', (e) => {
if (e.duration > 1000) {
console.warn(`Slow query (${e.duration}ms): ${e.query}`);
}
});

Check active connections at the database level (PostgreSQL):
-- See all active connections to your database
SELECT
pid,
usename,
application_name,
client_addr,
state,
wait_event_type,
wait_event,
query_start,
query
FROM pg_stat_activity
WHERE datname = 'your_database_name'
ORDER BY query_start;
-- Count connections by state
SELECT state, count(*)
FROM pg_stat_activity
WHERE datname = 'your_database_name'
GROUP BY state;
-- Idle connections holding pool slots
SELECT count(*) AS idle_connections
FROM pg_stat_activity
WHERE datname = 'your_database_name'
AND state = 'idle';

Set statement timeout to prevent long-running queries from holding connections:
-- Prisma's PostgreSQL connection string has no statement_timeout parameter;
-- set the timeout at the database role level instead:
ALTER ROLE app_user SET statement_timeout = '30s';
-- Queries taking more than 30 seconds are automatically cancelled

Fix 6: Handle Pool Timeout Errors Gracefully
Implement retry logic for pool timeout errors:
import { Prisma } from '@prisma/client';

async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  delayMs = 500,
): Promise<T> {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      const isPoolTimeout =
        error instanceof Prisma.PrismaClientKnownRequestError &&
        error.code === 'P2024'; // P2024 — timed out fetching a connection from the pool
if (isPoolTimeout && attempt < maxRetries) {
const delay = delayMs * Math.pow(2, attempt - 1); // Exponential backoff
console.warn(`Pool timeout, retrying in ${delay}ms (attempt ${attempt}/${maxRetries})`);
await new Promise(resolve => setTimeout(resolve, delay));
continue;
}
throw error;
}
}
throw new Error('Max retries exceeded');
}
// Usage
const users = await withRetry(() => prisma.user.findMany());

Still Not Working?
Prisma in Next.js App Router — each server component and API route in Next.js App Router runs in a separate module scope in development. Use the global singleton pattern (Fix 1) to prevent creating a new PrismaClient on every request in development hot reloads.
connection_limit vs database max_connections — connection_limit in Prisma’s URL sets the pool size per PrismaClient instance. The database’s max_connections is the total across all connections. If connection_limit * instances > max_connections, you’ll see “too many clients” errors.
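The sizing rule from Fix 2 can be captured as a small helper — a sketch only, with an illustrative function name that is not part of Prisma's API:

```typescript
// Per-instance pool size so that all app instances together stay under
// the database's max_connections, leaving headroom for migrations/admin.
function poolSizePerInstance(
  maxDbConnections: number,
  appInstances: number,
  reserved = 5,
): number {
  const usable = maxDbConnections - reserved;
  return Math.max(1, Math.floor(usable / appInstances)); // never return 0
}

// Example from Fix 2: floor((100 - 5) / 4) → 23 connections per instance
const limit = poolSizePerInstance(100, 4); // 23
// Append it to the datasource URL:
const databaseUrl = `${process.env.DATABASE_URL}?connection_limit=${limit}&pool_timeout=30`;
```

`Math.floor` keeps the total under the budget; rounding up could push `connection_limit * instances` past `max_connections`.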
Read replicas — if using read replicas, each connection string needs its own pool configuration. The `@prisma/extension-read-replicas` extension can route read queries to replicas while writes stay on the primary.
For related issues, see Fix: Prisma Transaction Error and Fix: Prisma N+1 Query Problem.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.