Fix: AWS RDS Connection Timed Out from Lambda or EC2

FixDevs

Quick Answer

An RDS connection timeout from Lambda or EC2 is almost always a network problem, not a database one. Fix it by allowing the caller's security group on the database port in the RDS security group, placing Lambda in the same VPC as RDS, and using connection reuse or RDS Proxy to avoid exhausting the database's connection limit.

The Error

A Lambda function or EC2 instance fails to connect to RDS with:

Error: connect ETIMEDOUT 10.0.1.45:5432

Or:

SequelizeConnectionError: connect ETIMEDOUT
Error: Connection timeout expired (MySQL: ETIME)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Connection timed out

Or the Lambda function times out without a database error — it just hangs until the function’s own timeout is reached:

Task timed out after 30.00 seconds

Why This Happens

An RDS connection timeout almost always means the connection attempt is being dropped at the network layer — not a database authentication issue. The request never reaches RDS. Common causes:

  • Security group misconfiguration — the RDS security group does not allow inbound traffic from the Lambda or EC2 security group on the database port (5432 for PostgreSQL, 3306 for MySQL).
  • Lambda not in the same VPC — Lambda functions outside the VPC cannot reach RDS in a private subnet. Lambda must be configured to run inside the VPC.
  • RDS is not publicly accessible — an RDS instance in a private subnet is reachable only from inside the VPC. Connections from outside it (your laptop, or a Lambda not attached to the VPC) never arrive and time out.
  • Lambda in a public subnet — a VPC-attached Lambda never receives a public IP, so a public subnet gives it no internet access and no advantage. Put Lambda in private subnets, as RDS typically is.
  • Too many connections — Lambda scales horizontally; hundreds of concurrent invocations each opening a database connection can exhaust RDS’s connection limit, causing new connection attempts to hang.
  • RDS is stopped or in an unavailable state — check the RDS console.

Fix 1: Configure Security Groups Correctly

The most common cause. The RDS security group must explicitly allow inbound traffic from your Lambda or EC2:

Check the RDS security group:

  1. Go to AWS Console → RDS → Databases → click your DB instance.
  2. Under “Connectivity & security”, find the VPC security group.
  3. Click the security group → Inbound rules.
  4. Verify there is a rule allowing the database port from your compute resource.

Correct inbound rule on the RDS security group:

| Type | Protocol | Port | Source |
| --- | --- | --- | --- |
| Custom TCP | TCP | 5432 | sg-xxxxxxxxx (Lambda's security group) |
| Custom TCP | TCP | 3306 | sg-xxxxxxxxx (EC2's security group) |

Using AWS CLI to add the rule:

# Allow Lambda's security group (sg-lambda-id) to reach RDS on port 5432
aws ec2 authorize-security-group-ingress \
  --group-id sg-rds-id \
  --protocol tcp \
  --port 5432 \
  --source-group sg-lambda-id \
  --region us-east-1

Pro Tip: Reference security groups as the source (not IP ranges) for compute resources in the same VPC. Security group references automatically update when instances are added or replaced — no IP management needed.

Do not use 0.0.0.0/0 as the source for RDS inbound rules. This opens your database to the entire internet. Always restrict to specific security groups or CIDR ranges.
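If you manage infrastructure as code, the same ingress rule can be expressed in Terraform. This is a sketch — resource names like aws_security_group.rds and aws_security_group.lambda are placeholders for your own:

```hcl
# Hypothetical resource names — adapt to your configuration
resource "aws_security_group_rule" "lambda_to_rds" {
  type                     = "ingress"
  from_port                = 5432
  to_port                  = 5432
  protocol                 = "tcp"
  security_group_id        = aws_security_group.rds.id    # attached to the RDS instance
  source_security_group_id = aws_security_group.lambda.id # attached to the Lambda function
}
```

Referencing the Lambda security group as the source keeps the rule valid as execution environments come and go, exactly as the Pro Tip above describes.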

Fix 2: Place Lambda in the Same VPC as RDS

Lambda functions are not in your VPC by default. To reach RDS in a private subnet, Lambda must be configured to run inside the VPC:

Configure VPC in the Lambda console:

  1. Lambda → Functions → your function → Configuration → VPC.
  2. Click Edit.
  3. Select the same VPC as your RDS instance.
  4. Select private subnets (same AZs as RDS — use at least 2 for availability).
  5. Select or create a security group for the Lambda function.
  6. Save.

Using AWS CLI:

aws lambda update-function-configuration \
  --function-name my-function \
  --vpc-config SubnetIds=subnet-private-1a,subnet-private-1b,SecurityGroupIds=sg-lambda-id

Using AWS CDK (TypeScript):

import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as ec2 from 'aws-cdk-lib/aws-ec2';

const vpc = ec2.Vpc.fromLookup(this, 'VPC', { vpcId: 'vpc-xxxxxxxx' });

const lambdaFn = new lambda.Function(this, 'MyFunction', {
  runtime: lambda.Runtime.NODEJS_20_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset('src'),
  vpc,
  vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
  securityGroups: [lambdaSecurityGroup],
});

Using Terraform:

resource "aws_lambda_function" "my_function" {
  function_name = "my-function"
  handler       = "index.handler"
  runtime       = "nodejs20.x"

  vpc_config {
    subnet_ids         = [aws_subnet.private_1a.id, aws_subnet.private_1b.id]
    security_group_ids = [aws_security_group.lambda.id]
  }
}

Note: Placing Lambda in a VPC used to add up to 10 seconds to cold starts while an ENI was created. Since AWS moved to shared Hyperplane ENIs in 2019, the added cold-start latency is typically well under a second, and it only affects the first invocation in a new execution environment.


Fix 3: Fix Lambda Internet Access After VPC Placement

When Lambda is placed in private subnets inside a VPC, it loses internet access. If your Lambda also needs to reach AWS service APIs (S3, DynamoDB, Secrets Manager) or third-party services, you need one of:

Option A — NAT Gateway (for internet access):

Lambda (private subnet) → NAT Gateway (public subnet) → Internet Gateway → Internet
# Create NAT Gateway in a public subnet
aws ec2 create-nat-gateway \
  --subnet-id subnet-public-1a \
  --allocation-id eipalloc-xxxxxxxxx

# Update private subnet route table to use NAT Gateway for 0.0.0.0/0
aws ec2 create-route \
  --route-table-id rtb-private \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-xxxxxxxxx
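The same NAT setup in Terraform, as a sketch — subnet and route-table names are assumptions to adapt:

```hcl
resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "this" {
  subnet_id     = aws_subnet.public_1a.id # must be a public subnet
  allocation_id = aws_eip.nat.id
}

# Send the private subnets' internet-bound traffic through the NAT Gateway
resource "aws_route" "private_to_nat" {
  route_table_id         = aws_route_table.private.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.this.id
}
```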

Option B — VPC Endpoints (for AWS services, no NAT needed):

For accessing AWS services (S3, DynamoDB, Secrets Manager, SSM) without internet:

# Create VPC endpoint for S3 (Gateway endpoint — free)
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-xxxxxxxxx \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-private

# Create VPC endpoint for Secrets Manager (Interface endpoint — costs money)
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-xxxxxxxxx \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.secretsmanager \
  --subnet-ids subnet-private-1a subnet-private-1b \
  --security-group-ids sg-lambda-id

Fix 4: Fix Connection Pool Exhaustion

Lambda scales to hundreds of concurrent invocations. Each invocation opening its own database connection quickly exhausts RDS’s connection limit:
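A quick back-of-envelope check shows how fast this happens. The numbers below are illustrative — small RDS instance classes default to a max_connections on the order of 100:

```javascript
// Rough capacity check: each concurrent Lambda execution environment holds
// its own pool, so total connections = environments × pool max per environment.
function connectionsNeeded(concurrentEnvs, poolMaxPerEnv) {
  return concurrentEnvs * poolMaxPerEnv;
}

// 500 concurrent invocations, each with a pool max of 5:
console.log(connectionsNeeded(500, 5)); // 2500 — far beyond a small instance's limit
```

Even with max: 1 per environment, 500 concurrent invocations means 500 connections — which is why RDS Proxy below is the recommended fix rather than just shrinking pools.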

Check RDS connection limits:

-- PostgreSQL
SHOW max_connections;
SELECT count(*) FROM pg_stat_activity;

-- MySQL
SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Threads_connected';

Fix — use RDS Proxy (recommended for Lambda):

RDS Proxy pools and reuses database connections across Lambda invocations:

# Create RDS Proxy via CLI
aws rds create-db-proxy \
  --db-proxy-name my-rds-proxy \
  --engine-family POSTGRESQL \
  --auth '[{"AuthScheme":"SECRETS","SecretArn":"arn:aws:secretsmanager:us-east-1:123456789:secret:rds-credentials","IAMAuth":"DISABLED"}]' \
  --role-arn arn:aws:iam::123456789:role/rds-proxy-role \
  --vpc-subnet-ids subnet-private-1a subnet-private-1b \
  --vpc-security-group-ids sg-rds-proxy-id
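If the rest of your stack is in Terraform, the equivalent proxy resource looks roughly like this (resource names are placeholders; you also need an aws_db_proxy_default_target_group and aws_db_proxy_target to attach the proxy to your DB instance):

```hcl
resource "aws_db_proxy" "this" {
  name                   = "my-rds-proxy"
  engine_family          = "POSTGRESQL"
  role_arn               = aws_iam_role.rds_proxy.arn # role allowed to read the DB secret
  vpc_subnet_ids         = [aws_subnet.private_1a.id, aws_subnet.private_1b.id]
  vpc_security_group_ids = [aws_security_group.rds_proxy.id]

  auth {
    auth_scheme = "SECRETS"
    secret_arn  = aws_secretsmanager_secret.rds_credentials.arn
    iam_auth    = "DISABLED"
  }
}
```

Remember that the RDS security group must now allow inbound traffic from the proxy's security group, and the proxy's security group must allow inbound traffic from Lambda's.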

After creating the proxy, update Lambda to connect to the proxy endpoint instead of the RDS endpoint:

// Lambda — connect to RDS Proxy endpoint
const { Pool } = require('pg');

const pool = new Pool({
  host: process.env.DB_PROXY_ENDPOINT, // e.g., my-rds-proxy.proxy-xxxx.us-east-1.rds.amazonaws.com
  port: 5432,
  database: process.env.DB_NAME,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  max: 1,      // One connection per execution environment — RDS Proxy handles pooling
  ssl: { rejectUnauthorized: true },
});

exports.handler = async (event) => {
  const client = await pool.connect();
  try {
    const result = await client.query('SELECT * FROM users LIMIT 10');
    return { statusCode: 200, body: JSON.stringify(result.rows) };
  } finally {
    client.release();
  }
};

Fix 5: Reuse Database Connections Across Lambda Invocations

Lambda reuses execution environments for warm invocations. Initialize the database connection outside the handler to reuse it:

// index.js — connection initialized once, reused across warm invocations
const { Pool } = require('pg');

// Outside the handler — initialized once per execution environment
let pool;

function getPool() {
  if (!pool) {
    pool = new Pool({
      host: process.env.DB_HOST,
      port: 5432,
      database: process.env.DB_NAME,
      user: process.env.DB_USER,
      password: process.env.DB_PASSWORD,
      max: 1,
      idleTimeoutMillis: 30000,
      connectionTimeoutMillis: 5000, // Fail fast instead of hanging
      ssl: { rejectUnauthorized: false }, // testing only — use rejectUnauthorized: true with the RDS CA bundle in production
    });
  }
  return pool;
}

exports.handler = async (event) => {
  const client = await getPool().connect();
  try {
    const result = await client.query('SELECT NOW()');
    return { statusCode: 200, body: JSON.stringify(result.rows) };
  } finally {
    client.release(); // Release back to pool — not close()
  }
};

Common Mistake: Calling pool.end() or client.end() inside the handler. This closes the connection after every invocation — the next invocation must reconnect. Call client.release() to return the connection to the pool.

Fix 6: Store and Retrieve RDS Credentials Securely

Using hardcoded credentials or environment variables in plaintext is a security risk. Use AWS Secrets Manager:

const { SecretsManagerClient, GetSecretValueCommand } = require('@aws-sdk/client-secrets-manager');
const { Pool } = require('pg');

const secretsClient = new SecretsManagerClient({ region: 'us-east-1' });
let pool;

async function getPool() {
  if (pool) return pool;

  const response = await secretsClient.send(
    new GetSecretValueCommand({ SecretId: process.env.DB_SECRET_ARN })
  );

  const { username, password, host, port, dbname } = JSON.parse(response.SecretString);

  pool = new Pool({ host, port, database: dbname, user: username, password, max: 1 });
  return pool;
}

exports.handler = async (event) => {
  const p = await getPool();
  const client = await p.connect();
  try {
    const result = await client.query('SELECT * FROM orders LIMIT 5');
    return { statusCode: 200, body: JSON.stringify(result.rows) };
  } finally {
    client.release();
  }
};

Still Not Working?

Test connectivity from within the VPC. SSH into an EC2 instance in the same VPC, subnet, and security group as the Lambda function, then test the RDS connection:

# Test TCP connectivity (no database client needed)
nc -zv your-rds-endpoint.rds.amazonaws.com 5432

# Or use telnet
telnet your-rds-endpoint.rds.amazonaws.com 5432

# If this times out, the issue is network/security group — not the application

Check RDS status. A stopped or failing RDS instance rejects all connections:

aws rds describe-db-instances \
  --db-instance-identifier my-db \
  --query 'DBInstances[0].DBInstanceStatus'
# Should return "available"

Check the RDS subnet group. The DB subnet group defines which subnets (and Availability Zones) the instance can occupy — confirm those subnets belong to the same VPC as your Lambda subnets. Routing between AZs within a VPC works automatically.

Increase the Lambda timeout. If RDS is under heavy load, connections can take several seconds to establish. The Lambda default timeout is 3 seconds — increase it to at least 30 seconds for database workloads:

aws lambda update-function-configuration \
  --function-name my-function \
  --timeout 30

Set a connection timeout in your client. Without a connection timeout, a blocked connection causes Lambda to hang until its function timeout:

// Always set a connection timeout shorter than the Lambda timeout
const pool = new Pool({
  connectionTimeoutMillis: 5000, // Fail after 5 seconds instead of hanging
  // ...
});
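Beyond connect timeouts, a stuck query can still hang a warm connection. As a generic guard — a sketch, not pg-specific; PostgreSQL users can also set statement_timeout server-side — you can race any database promise against a deadline:

```javascript
// Reject if a promise doesn't settle within `ms`, so one stuck query
// can't consume the entire Lambda timeout.
function withTimeout(promise, ms, label = 'operation') {
  let timer;
  const deadline = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`${label} timed out after ${ms}ms`)), ms);
  });
  // Whichever settles first wins; always clear the timer to avoid keeping
  // the event loop alive after the promise resolves.
  return Promise.race([promise, deadline]).finally(() => clearTimeout(timer));
}

// Usage sketch: await withTimeout(client.query('SELECT ...'), 5000, 'user query');
```

Keep the deadline comfortably shorter than the Lambda timeout so the function can return a useful error instead of being killed mid-flight.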

For related AWS issues, see Fix: AWS Lambda Timeout and Fix: AWS EC2 SSH Connection Refused.


FixDevs

Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
