
Fix: AWS EC2 SSH Connection Refused or Timed Out

FixDevs

Quick Answer

How to fix AWS EC2 SSH connection refused or timed out errors — security group rules, key pair issues, sshd not running, wrong username, and network ACL misconfigurations.

The Error

You try to SSH into an EC2 instance and get:

ssh: connect to host ec2-12-34-56-78.compute-1.amazonaws.com port 22: Connection refused

Or:

ssh: connect to host 12.34.56.78 port 22: Operation timed out

Or:

ec2-user@12.34.56.78: Permission denied (publickey).

Or after connecting briefly, the connection drops:

packet_write_wait: Connection to 12.34.56.78 port 22: Broken pipe

Why This Happens

SSH to EC2 fails for several distinct reasons with different symptoms:

  • Connection refused (immediate): port 22 is reachable but nothing is listening — sshd is not running, or the instance is still booting.
  • Connection timed out (after ~75 seconds): the packet never reaches the instance — security group, network ACL, or routing issue.
  • Permission denied (publickey): connected successfully but authentication failed — wrong key, wrong username, or authorized_keys corrupted.
  • No route to host: the instance has no public IP, is in a private subnet without a NAT/bastion, or the VPC routing is broken.

Fix 1: Check and Fix Security Group Rules

The most common cause of a timeout is a security group that blocks inbound SSH traffic on port 22.

Check security group rules in AWS Console:

  1. EC2 Console → Instances → select your instance.
  2. Security tab → click the Security Group link.
  3. Inbound rules tab → look for a rule allowing port 22.

Required rule:

Type | Protocol | Port | Source
SSH  | TCP      | 22   | Your IP (x.x.x.x/32) or 0.0.0.0/0
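The same check works from the CLI by listing the group's current inbound rules. A sketch (the group ID is a placeholder; substitute your own):

```shell
# List inbound rules for the security group that cover port 22
# (sg-xxxxxxxxxxxxxxxxx is a placeholder, not a real group ID)
aws ec2 describe-security-groups \
  --group-ids sg-xxxxxxxxxxxxxxxxx \
  --query 'SecurityGroups[0].IpPermissions[?ToPort==`22`]' \
  --output json
```

If the output is an empty list, no rule allows SSH and the timeout is explained.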

Add the rule via AWS CLI:

# Get your current public IP
MY_IP=$(curl -s https://checkip.amazonaws.com)

# Add SSH rule for your IP only
aws ec2 authorize-security-group-ingress \
  --group-id sg-xxxxxxxxxxxxxxxxx \
  --protocol tcp \
  --port 22 \
  --cidr "${MY_IP}/32"

Find the security group ID:

aws ec2 describe-instances \
  --instance-ids i-xxxxxxxxxxxxxxxxx \
  --query "Reservations[0].Instances[0].SecurityGroups[*].GroupId" \
  --output text

Warning: Using 0.0.0.0/0 (all IPs) as the SSH source is a security risk — it exposes your instance to brute-force attacks from the internet. Always restrict SSH to your specific IP or a bastion host IP. Use x.x.x.x/32 (single IP) rather than a wide range.
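If you find an existing 0.0.0.0/0 SSH rule, you can swap it for a single-IP rule from the CLI. A sketch, again with a placeholder group ID:

```shell
# Remove the wide-open SSH rule
aws ec2 revoke-security-group-ingress \
  --group-id sg-xxxxxxxxxxxxxxxxx \
  --protocol tcp \
  --port 22 \
  --cidr 0.0.0.0/0

# Re-add SSH restricted to your current public IP only
MY_IP=$(curl -s https://checkip.amazonaws.com)
aws ec2 authorize-security-group-ingress \
  --group-id sg-xxxxxxxxxxxxxxxxx \
  --protocol tcp \
  --port 22 \
  --cidr "${MY_IP}/32"
```

Note that revoke-security-group-ingress must match the existing rule exactly (protocol, port, CIDR) or it will fail with InvalidPermission.NotFound.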

Fix 2: Verify the Instance Has a Public IP and Is Running

# Check instance state and public IP
aws ec2 describe-instances \
  --instance-ids i-xxxxxxxxxxxxxxxxx \
  --query "Reservations[0].Instances[0].{State:State.Name,PublicIP:PublicIpAddress,PublicDNS:PublicDnsName}" \
  --output table

  • State must be “running” — a stopped or stopping instance cannot accept SSH.
  • PublicIP must not be null — instances in private subnets without an Elastic IP or public IP assignment cannot be reached directly from the internet.

If there is no public IP:

Option A: Associate an Elastic IP:

# Allocate an Elastic IP
aws ec2 allocate-address --domain vpc

# Associate it with the instance
aws ec2 associate-address \
  --instance-id i-xxxxxxxxxxxxxxxxx \
  --allocation-id eipalloc-xxxxxxxxxxxxxxxxx

Option B: Connect via a bastion host or AWS Systems Manager Session Manager (no public IP needed).
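For the bastion route, OpenSSH's ProxyJump makes the two-hop connection transparent. A minimal sketch for ~/.ssh/config (host names and IPs here are placeholders):

```
# ~/.ssh/config
Host bastion
    HostName 203.0.113.10            # bastion public IP (placeholder)
    User ec2-user
    IdentityFile ~/.ssh/my-key.pem

Host private-ec2
    HostName 10.0.2.15               # instance private IP (placeholder)
    User ec2-user
    IdentityFile ~/.ssh/my-key.pem
    ProxyJump bastion
```

Then ssh private-ec2 connects through the bastion in one step, equivalent to ssh -J ec2-user@203.0.113.10 ec2-user@10.0.2.15.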

Fix 3: Use the Correct Username

Each AMI has a default SSH username. Using the wrong one gives Permission denied (publickey) even with the correct key:

AMI / Distribution            | Default Username
Amazon Linux / Amazon Linux 2 | ec2-user
Amazon Linux 2023             | ec2-user
Ubuntu                        | ubuntu
Debian                        | admin or debian
CentOS                        | centos
RHEL                          | ec2-user or root
Fedora                        | fedora
SUSE                          | ec2-user or root

# Amazon Linux
ssh -i ~/.ssh/my-key.pem ec2-user@12.34.56.78

# Ubuntu
ssh -i ~/.ssh/my-key.pem ubuntu@12.34.56.78

# Debian
ssh -i ~/.ssh/my-key.pem admin@12.34.56.78

Check the AMI details to confirm:

aws ec2 describe-images \
  --image-ids ami-xxxxxxxxxxxxxxxxx \
  --query "Images[0].{Name:Name,Description:Description}" \
  --output table

Fix 4: Fix Key Pair Issues

Wrong key file:

# Specify the key explicitly
ssh -i /path/to/correct-key.pem ec2-user@12.34.56.78

# Or add to SSH config
# ~/.ssh/config
Host my-ec2
    HostName 12.34.56.78
    User ec2-user
    IdentityFile ~/.ssh/my-key.pem

Wrong permissions on the key file:

SSH refuses keys with insecure permissions:

chmod 400 ~/.ssh/my-key.pem
# Or: chmod 600 ~/.ssh/my-key.pem

If permissions are wrong, SSH shows:

WARNING: UNPROTECTED PRIVATE KEY FILE!
Permissions 0644 for 'my-key.pem' are too open.

Verify key pair matches the instance:

# Get the key pair name associated with the instance
aws ec2 describe-instances \
  --instance-ids i-xxxxxxxxxxxxxxxxx \
  --query "Reservations[0].Instances[0].KeyName" \
  --output text

Make sure this matches the key file you are using.
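To confirm the local .pem actually corresponds to that key pair, compare fingerprints. One caveat: AWS computes the fingerprint differently depending on origin (for pairs created by AWS it is the SHA-1 of the DER-encoded private key; for imported pairs it is an MD5 of the public key), so the local command below is a sketch for an AWS-created pair only:

```shell
# Fingerprint AWS stores for the key pair
aws ec2 describe-key-pairs \
  --key-names my-key \
  --query "KeyPairs[0].KeyFingerprint" \
  --output text

# Fingerprint of the local private key (AWS-created pairs)
openssl pkcs8 -in ~/.ssh/my-key.pem -inform PEM -outform DER -topk8 -nocrypt \
  | openssl sha1 -c
```

If the two fingerprints differ, you are holding the wrong .pem file.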

Fix 5: Fix Connection Timeouts — Check Network ACLs and Routing

Security groups are stateful (allow return traffic automatically). Network ACLs are stateless — they need both inbound and outbound rules. If your VPC has a custom Network ACL, it may be blocking SSH:

Check Network ACL rules in AWS Console:

VPC Console → Network ACLs → select the ACL associated with your subnet → Inbound/Outbound rules.

Required Network ACL rules for SSH:

Direction | Rule # | Type       | Protocol | Port       | Source/Dest | Action
Inbound   | 100    | SSH        | TCP      | 22         | 0.0.0.0/0   | ALLOW
Outbound  | 100    | Custom TCP | TCP      | 1024-65535 | 0.0.0.0/0   | ALLOW

The outbound rule for ephemeral ports (1024–65535) is required because SSH responses go back on a random high port — Network ACLs do not track connection state.
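Both rules can be added from the CLI. A sketch (the ACL ID is a placeholder, and rule numbers must not collide with existing entries):

```shell
# Inbound: allow SSH on port 22
aws ec2 create-network-acl-entry \
  --network-acl-id acl-xxxxxxxxxxxxxxxxx \
  --ingress \
  --rule-number 100 \
  --protocol tcp \
  --port-range From=22,To=22 \
  --cidr-block 0.0.0.0/0 \
  --rule-action allow

# Outbound: allow ephemeral ports for SSH return traffic
aws ec2 create-network-acl-entry \
  --network-acl-id acl-xxxxxxxxxxxxxxxxx \
  --egress \
  --rule-number 100 \
  --protocol tcp \
  --port-range From=1024,To=65535 \
  --cidr-block 0.0.0.0/0 \
  --rule-action allow
```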

Check route table:

VPC Console → Route Tables → select the route table for your subnet → Routes tab. There should be an entry:

0.0.0.0/0 → igw-xxxxxxxxxxxxxxxxx  (Internet Gateway)

Without an internet gateway route, traffic cannot leave or enter the subnet from the internet.
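The route check can also be scripted; a sketch with a placeholder subnet ID. One caveat: a subnet with no explicit route table association uses the VPC's main route table, which this association filter will not find.

```shell
# List routes for the route table explicitly associated with the subnet
aws ec2 describe-route-tables \
  --filters "Name=association.subnet-id,Values=subnet-xxxxxxxxxxxxxxxxx" \
  --query "RouteTables[0].Routes[*].{Dest:DestinationCidrBlock,Target:GatewayId}" \
  --output table
# Look for a 0.0.0.0/0 destination whose target is an igw- ID
```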

Fix 6: Fix sshd Not Running on the Instance

If the connection is refused immediately (not timed out), sshd is likely not running. Use EC2 Instance Connect or AWS Systems Manager Session Manager to get a shell on the instance and fix it:

Using EC2 Instance Connect (browser-based terminal):

  1. EC2 Console → select instance → Connect button → EC2 Instance Connect tab → Connect.

Note: EC2 Instance Connect still connects over SSH (it pushes a temporary public key to the instance), so it requires sshd to be running and a security group rule allowing port 22 from the EC2 Instance Connect service IP range. If sshd itself is down, use Session Manager instead, which needs no inbound ports and no sshd.

Using AWS Systems Manager Session Manager:

# Install Session Manager plugin locally
# Then:
aws ssm start-session --target i-xxxxxxxxxxxxxxxxx

Session Manager works without any open inbound ports — it connects via the AWS network.
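Session Manager does require the instance's SSM Agent to be running and registered, which in turn needs an instance profile with SSM permissions and a network path to the SSM endpoints. To check whether the instance is registered (instance ID is a placeholder):

```shell
# The instance must appear here for start-session to work
aws ssm describe-instance-information \
  --filters "Key=InstanceIds,Values=i-xxxxxxxxxxxxxxxxx" \
  --query "InstanceInformationList[0].PingStatus" \
  --output text
# A healthy, registered instance reports: Online
```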

Once connected, fix sshd:

# Check sshd status
sudo systemctl status sshd

# Start sshd if stopped
sudo systemctl start sshd
sudo systemctl enable sshd

# Check sshd config for errors
sudo sshd -t

# Restart after config changes
sudo systemctl restart sshd

Fix 7: Fix Intermittent Disconnections (Broken Pipe)

SSH connections that drop after a period of inactivity are usually killed by idle-timeout settings on the instance, or by firewalls and NAT devices between you and it:

Fix on the client side — keep-alive in ~/.ssh/config:

Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3

This sends a keep-alive packet every 60 seconds, preventing the connection from being dropped by firewalls or NAT gateways that expire idle connections.

Fix on the server side — edit /etc/ssh/sshd_config:

sudo nano /etc/ssh/sshd_config

Add or update:

ClientAliveInterval 60
ClientAliveCountMax 3
TCPKeepAlive yes

Then restart sshd to apply the change:

sudo systemctl restart sshd

AWS NAT Gateway idle timeout: NAT Gateways drop idle TCP connections after 350 seconds. If you SSH through a NAT Gateway (e.g., from a bastion in a public subnet to instances in a private subnet), the keep-alive settings above prevent this.

Debug Checklist

Run through these in order:

# 1. Can you reach the instance at all?
# (security groups block ICMP by default, so a failed ping is not conclusive)
ping 12.34.56.78

# 2. Is port 22 open and responding?
nc -zv 12.34.56.78 22
# Expected: Connection to 12.34.56.78 22 port [tcp/ssh] succeeded!

# 3. Verbose SSH output
ssh -vvv -i ~/.ssh/my-key.pem ec2-user@12.34.56.78
# Shows exactly where the connection fails

# 4. Try a different key explicitly
ssh -i ~/.ssh/other-key.pem -o IdentitiesOnly=yes ec2-user@12.34.56.78

The verbose output from ssh -vvv shows each step: DNS resolution, TCP connection, SSH handshake, authentication. The step where it fails points to the root cause.

Still Not Working?

Check instance system log for boot errors. If the instance is running but sshd crashed during boot, the system log shows the error:

aws ec2 get-console-output --instance-id i-xxxxxxxxxxxxxxxxx --output text

Check if the disk is full. A completely full root disk can prevent sshd from starting and logins from completing. Connect via Session Manager and run df -h.
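A quick sketch of checking and freeing space from a Session Manager shell (the paths below are common culprits, not guaranteed ones):

```shell
# Check free space on the root filesystem
df -h /

# Find the biggest directories under /var (logs are a frequent offender)
sudo du -xh /var 2>/dev/null | sort -rh | head -20

# Trim the systemd journal if it is the offender
sudo journalctl --vacuum-size=100M
```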

Check if authorized_keys was corrupted. If you can connect via Session Manager, verify:

cat ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

For EBS-backed instances where you lost the key, detach the root volume, attach it to another instance, mount it, and fix authorized_keys — then reattach it to the original instance.
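A hedged CLI sketch of that rescue procedure (all IDs are placeholders, and the device names vary by AMI and instance type; check the instance's root device name before reattaching):

```shell
# 1. Stop the broken instance (do NOT terminate it)
aws ec2 stop-instances --instance-ids i-broken11111111111

# 2. Detach its root volume
aws ec2 detach-volume --volume-id vol-xxxxxxxxxxxxxxxxx

# 3. Attach the volume to a healthy rescue instance as a secondary disk
aws ec2 attach-volume \
  --volume-id vol-xxxxxxxxxxxxxxxxx \
  --instance-id i-rescue22222222222 \
  --device /dev/sdf

# 4. On the rescue instance: mount it and repair authorized_keys, e.g.
#    sudo mkdir -p /mnt/rescue && sudo mount /dev/xvdf1 /mnt/rescue
#    then edit /mnt/rescue/home/ec2-user/.ssh/authorized_keys

# 5. Unmount, detach, and reattach to the original instance as its root device
aws ec2 detach-volume --volume-id vol-xxxxxxxxxxxxxxxxx
aws ec2 attach-volume \
  --volume-id vol-xxxxxxxxxxxxxxxxx \
  --instance-id i-broken11111111111 \
  --device /dev/xvda
```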

For general SSH issues not specific to AWS, see Fix: SSH Connection Timed Out and Fix: SSH Permission Denied (publickey).


FixDevs

Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
