
Fix: MySQL Too Many Connections – Error 1040

FixDevs

Quick Answer

Fix MySQL error 1040 ('Too many connections') by raising max_connections, fixing connection leaks, and adding connection pooling.

The Error

You try to connect to MySQL and get:

ERROR 1040 (HY000): Too many connections

Or one of these variations in your application logs:

SQLSTATE[HY000] [1040] Too many connections
OperationalError: (1040, 'Too many connections')
Error: ER_CON_COUNT_ERROR: Too many connections
com.mysql.cj.jdbc.exceptions.CommunicationsException: Too many connections

All of these mean the same thing: every available connection slot on the MySQL server is occupied. The server is running and healthy, but it refuses to accept any new connections because it has already reached the configured maximum.

Why This Happens

MySQL has a hard limit on the number of simultaneous client connections it will accept. This limit is controlled by the max_connections system variable, which defaults to 151 on most installations. Once all 151 slots are in use, every new connection attempt — from your app, from your ORM, from a monitoring tool, from you running mysql on the command line — gets rejected with ERROR 1040.

Common causes:

  • The default max_connections is too low for your workload. A moderately busy web application can easily exhaust 151 connections, especially with multiple app server instances each maintaining their own connection pool.
  • Connection leaks in your application. Your code opens database connections but never closes them. Over time, leaked connections accumulate until the limit is hit.
  • No connection pooling. Your application opens a new MySQL connection for every single request or query, instead of reusing connections from a pool. Under load, this quickly saturates the limit.
  • Long-running queries or transactions. Slow queries hold their connections open for extended periods, reducing the number of connections available for other clients.
  • Idle connections that are never released. Clients connect and then sit idle without disconnecting. MySQL keeps those connections alive until the wait_timeout expires, which defaults to 28800 seconds (8 hours).
  • Too many application instances. You scaled your app horizontally — more pods, more containers, more servers — and each instance opens its own pool of connections. Ten instances with 20 connections each consume 200 slots.
  • Monitoring and administrative tools. Tools like Prometheus exporters, health checks, backup scripts, and cron jobs each use connections that count toward the limit.
  • Docker or container resource limits. Your container orchestrator may be spawning more replicas than MySQL can handle, or a restart loop causes connections to pile up before old ones time out.

Fix 1: Increase max_connections

The fastest fix is to raise the connection limit. You can do this live without restarting MySQL.

Check the current limit

If you can still connect (use the reserved +1 super-user slot):

mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connections';"
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 151   |
+-----------------+-------+

Check how many connections are currently in use

SHOW STATUS LIKE 'Threads_connected';
SHOW STATUS LIKE 'Max_used_connections';

Max_used_connections tells you the peak number of simultaneous connections since the server started. If this number equals or nearly equals max_connections, you have been hitting the limit.

Increase it live

SET GLOBAL max_connections = 500;

This takes effect immediately but does not persist across restarts.

Make it permanent

Edit your MySQL configuration file (my.cnf or my.ini):

[mysqld]
max_connections = 500

The location of this file varies by distribution:

  • Debian/Ubuntu: /etc/mysql/mysql.conf.d/mysqld.cnf
  • RHEL/CentOS: /etc/my.cnf
  • macOS (Homebrew): /opt/homebrew/etc/my.cnf
  • Windows: C:\ProgramData\MySQL\MySQL Server 8.0\my.ini

Restart MySQL after editing:

sudo systemctl restart mysql

How high should you set it?

There is no universal answer. Each connection consumes memory (roughly 1-10 MB depending on buffers, prepared statements, and sort operations). Setting max_connections = 10000 on a server with 4 GB of RAM can cause MySQL to run out of memory and crash once enough clients actually connect — which is worse than rejecting connections. A common approach:

  • Small apps (single server): 200-300
  • Medium workloads: 500-1000
  • Large production systems: 1000-5000 with connection pooling at the application or proxy layer
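As a back-of-the-envelope check, you can estimate a ceiling from available RAM and an assumed per-connection cost. The sketch below is illustrative only: the 5 MB per-connection figure and the 20% reserve are assumptions to tune for your workload, not MySQL defaults.

```python
def max_safe_connections(available_ram_mb, per_conn_mb=5.0, reserve_fraction=0.2):
    """Rough ceiling for max_connections given available RAM.

    per_conn_mb and reserve_fraction are assumptions: each connection
    typically costs 1-10 MB, and the reserve leaves room for the InnoDB
    buffer pool, OS cache, and admin overhead.
    """
    usable_mb = available_ram_mb * (1 - reserve_fraction)
    return int(usable_mb // per_conn_mb)

# A 4 GB server at ~5 MB per connection supports roughly 655 connections
print(max_safe_connections(4096))  # 655
```

Under these assumptions, max_connections = 10000 on a 4 GB box is far beyond what memory can support, which matches the warning above.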

Check your available memory and calculate based on your per-connection overhead. You can watch the current connection count with:

SHOW GLOBAL STATUS LIKE 'Threads_connected';

Note that the Memory_used status variable exists only in MariaDB. On MySQL 5.7+ you can get the server's total instrumented memory from the sys schema instead:

SELECT * FROM sys.memory_global_total;

Check OS-level file descriptor limits

Each MySQL connection uses a file descriptor. If the OS limits the number of file descriptors MySQL can open, increasing max_connections in MySQL has no effect.

Check the current limit for the MySQL process:

cat /proc/$(pgrep mysqld)/limits | grep "open files"

If this is too low, increase it in the systemd unit override:

sudo systemctl edit mysql

Add:

[Service]
LimitNOFILE=65535

Then reload and restart:

sudo systemctl daemon-reload
sudo systemctl restart mysql

Fix 2: Use Connection Pooling

Opening a new MySQL connection for every request is expensive and wasteful. A connection pool maintains a set of reusable connections that your application checks out and returns, keeping the total number of connections predictable and bounded.

Node.js (mysql2)

const mysql = require('mysql2/promise');

const pool = mysql.createPool({
  host: '127.0.0.1',
  user: 'app_user',
  password: 'password',
  database: 'mydb',
  waitForConnections: true,
  connectionLimit: 10,     // max connections in the pool
  queueLimit: 0,           // unlimited queue (requests wait when pool is full)
  idleTimeout: 60000,      // close idle connections after 60 seconds
  enableKeepAlive: true,
  keepAliveInitialDelay: 10000
});

// Use the pool -- connections are automatically returned
const [rows] = await pool.query('SELECT * FROM users WHERE id = ?', [userId]);

The key parameter is connectionLimit. With 10 connections per pool and 5 app instances, you use 50 MySQL connections total — well within a max_connections of 200.

A common mistake is creating a new pool (or a new connection) inside a request handler. The pool must be created once at application startup and shared across all requests. If your environment variables are not loading properly, each failed pool initialization might trigger a retry that creates duplicate pools.

Python (SQLAlchemy)

from sqlalchemy import create_engine

engine = create_engine(
    'mysql+pymysql://app_user:[email protected]/mydb',
    pool_size=10,           # number of persistent connections
    max_overflow=5,         # additional connections allowed during spikes
    pool_timeout=30,        # seconds to wait for a connection from the pool
    pool_recycle=3600,      # recycle connections after 1 hour
    pool_pre_ping=True      # test connections before using them
)

pool_size + max_overflow defines the maximum number of connections this engine will open. Plan accordingly when running multiple workers (e.g., Gunicorn with 4 workers each creating an engine means up to 4 * 15 = 60 connections).
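The worker math can be made explicit. This helper is an illustrative sketch (not part of SQLAlchemy) computing the worst case, where every worker fills both its pool and its overflow at once:

```python
def worst_case_connections(workers, pool_size, max_overflow=0):
    """Maximum MySQL connections a deployment can open: every worker
    saturates its pool and its overflow simultaneously."""
    return workers * (pool_size + max_overflow)

# 4 Gunicorn workers, pool_size=10, max_overflow=5 -> up to 60 connections
print(worst_case_connections(4, 10, 5))  # 60
```

Compare the result against max_connections minus the slots you want to keep free for admin tools and monitoring.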

Python (Django)

In settings.py:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',
        'USER': 'app_user',
        'PASSWORD': 'password',
        'HOST': '127.0.0.1',
        'PORT': '3306',
        'CONN_MAX_AGE': 600,       # reuse connections for 10 minutes
        'CONN_HEALTH_CHECKS': True, # verify connections are alive (Django 4.1+)
    }
}

CONN_MAX_AGE=0 (the default) closes the connection after every request. Setting it to None keeps connections alive indefinitely. A value like 600 is a good balance.

Java (HikariCP)

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:mysql://127.0.0.1:3306/mydb");
config.setUsername("app_user");
config.setPassword("password");
config.setMaximumPoolSize(10);      // max connections in the pool
config.setMinimumIdle(5);           // keep at least 5 idle connections
config.setIdleTimeout(300000);      // close idle connections after 5 minutes
config.setConnectionTimeout(30000); // wait up to 30 seconds for a connection
config.setMaxLifetime(1800000);     // retire connections after 30 minutes

HikariDataSource dataSource = new HikariDataSource(config);

HikariCP is the default pool in Spring Boot. If you are using Spring Boot, configure these in application.properties:

spring.datasource.hikari.maximum-pool-size=10
spring.datasource.hikari.minimum-idle=5
spring.datasource.hikari.idle-timeout=300000
spring.datasource.hikari.max-lifetime=1800000

Why this matters: Blindly increasing max_connections without adding memory is like adding more lanes to a highway without laying more pavement. Each connection reserves memory for buffers, and setting the limit too high can cause MySQL to crash from OOM, which is far worse than refusing connections.

Fix 3: Find and Fix Connection Leaks

A connection leak happens when your application opens a database connection but never closes it. Over time, leaked connections pile up until MySQL hits the limit. This is the most common cause of “too many connections” in production applications.

Detect leaks with SHOW PROCESSLIST

SHOW FULL PROCESSLIST;

Look for connections in the Sleep state with high Time values:

+----+----------+-----------+------+---------+------+-------+------------------+
| Id | User     | Host      | db   | Command | Time | State | Info             |
+----+----------+-----------+------+---------+------+-------+------------------+
| 42 | app_user | app:49312 | mydb | Sleep   | 7200 | ""    | NULL             |
| 43 | app_user | app:49313 | mydb | Sleep   | 7150 | ""    | NULL             |
| 44 | app_user | app:49314 | mydb | Sleep   | 6800 | ""    | NULL             |
| 45 | app_user | app:49315 | mydb | Query   |    0 | ""    | SHOW PROCESSLIST |
+----+----------+-----------+------+---------+------+-------+------------------+

Dozens or hundreds of Sleep connections with Time values in the thousands indicate leaked connections. Healthy pooled connections also show as Sleep, but their Time values reset frequently as the pool reuses them.
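If you capture the processlist programmatically, flagging suspects is easy to script. The sketch below assumes rows shaped like the table above; the dictionary keys are hypothetical, so adapt them to however your driver returns processlist rows:

```python
def suspected_leaks(processlist_rows, idle_threshold_s=3600):
    """Return the ids of connections sleeping longer than the threshold.
    If this list keeps growing between snapshots, you likely have a leak."""
    return [
        row["id"]
        for row in processlist_rows
        if row["command"] == "Sleep" and row["time"] > idle_threshold_s
    ]

rows = [
    {"id": 42, "command": "Sleep", "time": 7200},
    {"id": 43, "command": "Sleep", "time": 7150},
    {"id": 45, "command": "Query", "time": 0},
]
print(suspected_leaks(rows))  # [42, 43]
```

Run it against two snapshots taken a few minutes apart: healthy pooled connections churn, leaked ones only accumulate.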

Fix the leak in your code

The pattern is always the same: ensure every connection is closed in a finally block, a context manager, or equivalent cleanup mechanism.

Python:

# BAD -- connection is never closed if an exception occurs
conn = pymysql.connect(host='127.0.0.1', user='app_user', database='mydb')
cursor = conn.cursor()
cursor.execute('SELECT * FROM users')
results = cursor.fetchall()
conn.close()

# GOOD -- connection is always closed
conn = pymysql.connect(host='127.0.0.1', user='app_user', database='mydb')
try:
    cursor = conn.cursor()
    cursor.execute('SELECT * FROM users')
    results = cursor.fetchall()
finally:
    conn.close()

# BEST -- use a context manager
with pymysql.connect(host='127.0.0.1', user='app_user', database='mydb') as conn:
    with conn.cursor() as cursor:
        cursor.execute('SELECT * FROM users')
        results = cursor.fetchall()

Node.js:

// BAD -- connection is never released back to the pool
const conn = await pool.getConnection();
const [rows] = await conn.query('SELECT * FROM users');
// forgot conn.release()

// GOOD -- always release in a finally block
const conn = await pool.getConnection();
try {
  const [rows] = await conn.query('SELECT * FROM users');
  return rows;
} finally {
  conn.release();
}

// BEST -- use pool.query() which handles acquire/release automatically
const [rows] = await pool.query('SELECT * FROM users');

Java:

// GOOD -- try-with-resources guarantees close
try (Connection conn = dataSource.getConnection();
     PreparedStatement stmt = conn.prepareStatement("SELECT * FROM users")) {
    ResultSet rs = stmt.executeQuery();
    // process results
}
// conn is automatically returned to the pool here

Fix 4: Reduce wait_timeout and interactive_timeout

MySQL keeps idle connections alive based on two timeout values. Lowering them causes MySQL to close abandoned connections sooner, freeing up slots.

Check current values

SHOW VARIABLES LIKE 'wait_timeout';
SHOW VARIABLES LIKE 'interactive_timeout';

The default for both is 28800 seconds (8 hours). That means a leaked or abandoned connection occupies a slot for up to 8 hours.

Lower the timeouts

SET GLOBAL wait_timeout = 300;
SET GLOBAL interactive_timeout = 300;

This closes idle connections after 5 minutes. Make it permanent in my.cnf:

[mysqld]
wait_timeout = 300
interactive_timeout = 300

Be careful not to set these too low. If your connection pool keeps connections idle for longer than wait_timeout, MySQL closes them behind the pool’s back, and the next query on a stale connection fails with “MySQL server has gone away.” Your connection pool should have a pool_recycle or keepAlive interval shorter than wait_timeout to prevent this.
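A quick sanity check at startup can catch this misconfiguration before it bites. This is a sketch with hypothetical names; feed it the value from SHOW VARIABLES and your pool's recycle setting:

```python
def pool_recycle_is_safe(pool_recycle_s, wait_timeout_s, margin_s=30):
    """True if the pool recycles connections comfortably before MySQL's
    wait_timeout can close them behind the pool's back."""
    return pool_recycle_s + margin_s <= wait_timeout_s

print(pool_recycle_is_safe(pool_recycle_s=3600, wait_timeout_s=300))  # False: unsafe
print(pool_recycle_is_safe(pool_recycle_s=240, wait_timeout_s=300))   # True: safe
```

The first combination is exactly the failure mode described above: a pool_recycle of one hour against a five-minute wait_timeout guarantees "MySQL server has gone away" errors.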

Fix 5: Kill Idle Connections

When you are locked out and need to free up connections immediately, you can kill idle connections manually.

Find idle connections

SELECT id, user, host, db, command, time, state
FROM information_schema.processlist
WHERE command = 'Sleep'
AND time > 300
ORDER BY time DESC;

This lists all connections that have been idle for more than 5 minutes.

Kill them one at a time

KILL 42;
KILL 43;
KILL 44;

Kill them in bulk

Generate kill statements for all connections idle for more than 5 minutes:

SELECT CONCAT('KILL ', id, ';')
FROM information_schema.processlist
WHERE command = 'Sleep'
AND time > 300;

Copy the output and run it. Or use a script:

mysql -u root -p -e "SELECT id FROM information_schema.processlist WHERE command='Sleep' AND time > 300" --skip-column-names | while read id; do
  mysql -u root -p -e "KILL $id"
done

Emergency access when completely locked out

MySQL reserves one extra connection beyond max_connections for users with the SUPER privilege (or CONNECTION_ADMIN in MySQL 8.0+). If you are completely locked out, connect as root:

mysql -u root -p

This uses the reserved slot. From there, kill idle connections or increase max_connections.

Fix 6: Use a Connection Proxy (ProxySQL)

When you have many application instances and cannot easily coordinate their connection pool sizes, a connection proxy sits between your apps and MySQL. It multiplexes hundreds of application connections onto a smaller number of actual MySQL connections.

ProxySQL

ProxySQL is the most widely used MySQL connection proxy. Install it:

# Debian/Ubuntu
sudo apt-get install proxysql

# RHEL/CentOS
sudo yum install proxysql

Configure the backend MySQL server:

-- Connect to ProxySQL admin interface (default port 6032)
mysql -u admin -padmin -h 127.0.0.1 -P 6032

-- Add your MySQL server
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (1, '127.0.0.1', 3306);

-- Configure connection multiplexing
UPDATE mysql_query_rules SET multiplex = 1 WHERE active = 1;

-- Set the max connections ProxySQL will open to MySQL
UPDATE mysql_servers SET max_connections = 100 WHERE hostname = '127.0.0.1';

-- Load and save
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;

Point your applications to ProxySQL (default port 6033) instead of MySQL directly:

mysql://app_user:[email protected]:6033/mydb

ProxySQL accepts thousands of client connections while maintaining only 100 actual MySQL connections. It handles the multiplexing transparently.

Why this matters for Docker and Kubernetes

In containerized environments, the number of application instances can change rapidly. If your Docker Compose setup spins up multiple replicas, or Kubernetes auto-scales your pods, each new instance opens its own connection pool. A proxy layer absorbs this variability and protects MySQL from being overwhelmed.

Fix 7: Handle Docker and Container Connection Limits

Running MySQL in containers introduces specific challenges around connection management.

Container restart loops cause connection buildup

If an application container keeps crashing and restarting (for example, due to a missing environment variable or a bad configuration), each restart opens new connections before the old ones time out. Lower wait_timeout to clean up stale connections faster.

Multiple replicas exceeding the limit

If you run 10 replicas of your app, each with a pool of 20 connections, that is 200 connections. Check how many replicas are running:

docker compose ps

Or in Kubernetes:

kubectl get pods -l app=your-app

Coordinate the pool size across replicas. A simple formula:

pool_size_per_instance = (max_connections / number_of_instances) * 0.8

The 0.8 factor reserves 20% of connections for admin access, monitoring, and migrations.
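In code, the formula looks like this (a sketch, with integer flooring so the combined pools never overshoot the limit):

```python
def pool_size_per_instance(max_connections, instances, reserve_fraction=0.2):
    """Per-instance pool size that keeps all instances combined within
    max_connections, reserving a fraction for admin and monitoring."""
    budget = max_connections * (1 - reserve_fraction)
    return int(budget // instances)

# max_connections=200 shared by 10 replicas -> 16 connections per pool
print(pool_size_per_instance(200, 10))  # 16
```

Recompute and redeploy this value whenever you change the replica count, or the math silently breaks the next time you scale up.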

Docker Compose health checks consuming connections

Health check scripts that connect to MySQL consume connections. If the health check runs every 10 seconds and each check opens a new connection, that is wasted capacity. Use a lightweight health check:

services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: secret
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 30s
      timeout: 10s
      retries: 3

mysqladmin ping is lighter than running a full SELECT 1 query through a new connection.

Common Mistake: Creating a new database connection pool inside each request handler instead of once at application startup. In a Node.js Express app, this means 100 requests/second can open 100 new pools, each with its own set of connections, exhausting MySQL within seconds.
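The fix is to make pool creation happen exactly once. The pattern is the same in every language; here is a minimal Python sketch using a cached factory, where _create_pool is a placeholder for whatever your driver provides (mysql.createPool, create_engine, and so on):

```python
import functools

def _create_pool():
    # Placeholder: in a real app this would call your driver's
    # pool constructor with host, credentials, and pool size.
    return object()

@functools.lru_cache(maxsize=None)
def get_pool():
    """Create the pool on first call; every later call returns the
    same cached instance, so request handlers can call this freely."""
    return _create_pool()

# Every caller gets the same pool object
assert get_pool() is get_pool()
```

Request handlers then call get_pool() instead of constructing anything themselves, so the connection count stays bounded no matter the request rate.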

Fix 8: Monitor and Prevent Future Issues

Once you have resolved the immediate error, set up monitoring to catch connection saturation before it causes an outage.

Key metrics to watch

-- Current connections vs. limit
SHOW STATUS LIKE 'Threads_connected';
SHOW VARIABLES LIKE 'max_connections';

-- Peak connections since last restart
SHOW STATUS LIKE 'Max_used_connections';

-- How many connections were rejected
SHOW STATUS LIKE 'Connection_errors_max_connections';

-- Aborted connections (may indicate leaks or network issues)
SHOW STATUS LIKE 'Aborted_connects';
SHOW STATUS LIKE 'Aborted_clients';

Set up alerts

Alert when Threads_connected exceeds 80% of max_connections. This gives you time to respond before users see errors.
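The check itself is trivial to script against the two values above; a sketch:

```python
def connection_saturation(threads_connected, max_connections):
    """Fraction of connection slots currently in use (0.0 to 1.0)."""
    return threads_connected / max_connections

def should_alert(threads_connected, max_connections, threshold=0.8):
    """True when usage crosses the alert threshold (default 80%)."""
    return connection_saturation(threads_connected, max_connections) >= threshold

print(should_alert(130, 151))  # True: ~86% of slots in use
print(should_alert(100, 151))  # False: ~66% of slots in use
```

Wire this into whatever scrapes Threads_connected and max_connections (a cron script, a Prometheus alert rule, a health endpoint) so you hear about saturation before MySQL starts returning 1040.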

Per-user connection limits

Prevent a single user or application from consuming all connections:

ALTER USER 'app_user'@'%' WITH MAX_USER_CONNECTIONS 50;

This limits app_user to 50 simultaneous connections. Other users and admin accounts retain access even if one application misbehaves.

Slow query log

Long-running queries hold connections open longer than necessary. Enable the slow query log to find them:

[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 2

This logs every query taking longer than 2 seconds. Review the log and optimize the worst offenders with indexes or query rewrites.

Still Not Working?

max_connections is high but connections are still refused

If max_connections is set to 1000 but MySQL still refuses connections at 200, the OS file descriptor limit is the bottleneck. Check the MySQL error log for messages like:

[Warning] Could not increase number of max_open_files to more than 1024

Follow the file descriptor steps in Fix 1 to raise the limit.

Connections spike during deployments

During a rolling deployment, old and new application instances run simultaneously, doubling the total number of connections. Handle this by:

  • Using a connection proxy that absorbs the spike.
  • Lowering the pool size so old + new instances combined stay within the limit.
  • Draining connections from old instances before new ones start.

“Too many connections” appears intermittently

Intermittent spikes often come from cron jobs, batch processes, or report generation that opens many connections simultaneously. Identify the culprit by checking SHOW PROCESSLIST during a spike — look at the User and Host columns to see what is connecting. Schedule heavy batch jobs during off-peak hours or give them a separate, limited MySQL user.

You are running out of memory after increasing max_connections

Each connection reserves memory for thread stack, sort buffer, join buffer, and read buffers. If you set max_connections = 5000 and all 5000 connect, MySQL may need several gigabytes just for connection buffers. Reduce per-connection memory by tuning:

[mysqld]
sort_buffer_size = 256K
read_buffer_size = 128K
read_rnd_buffer_size = 256K
join_buffer_size = 256K
thread_stack = 256K

These are per-connection allocations. Smaller values reduce memory usage at the cost of slightly slower complex queries.

The application reconnects successfully but data seems stale

If your connection pool hands out a connection that MySQL closed due to wait_timeout, the pool may silently reconnect but lose transaction state or session variables. Enable connection validation in your pool (pool_pre_ping in SQLAlchemy, enableKeepAlive in mysql2) so stale connections are detected and replaced before queries run on them. HikariCP validates connections on checkout automatically; just keep its maxLifetime shorter than MySQL's wait_timeout.


Related: If MySQL is rejecting your credentials entirely rather than your connection count, see Fix: MySQL Access Denied for User. If the MySQL socket file is missing or inaccessible, see Fix: MySQL ERROR 2002 Can’t Connect Through Socket. For PostgreSQL unique constraint violations during high-concurrency inserts, see Fix: PostgreSQL Duplicate Key Violates Unique Constraint. If your application’s database credentials are not loading from environment variables, see Fix: Environment Variable Is Undefined.

FixDevs

Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
