
Fix: Python requests.exceptions.ConnectionError: Max retries exceeded

FixDevs

Quick Answer

“Max retries exceeded” means the requests library (via urllib3) tried to connect several times and failed every time. The real error is in the “Caused by” clause of the traceback — typically a wrong URL, a DNS failure, a server that is down, an SSL error, connection pool exhaustion, or a firewall block.

The Error

You run a Python script using the requests library and get:

requests.exceptions.ConnectionError: HTTPSConnectionPool(host='api.example.com', port=443):
Max retries exceeded with url: /endpoint
(Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x...>:
Failed to establish a new connection: [Errno -2] Name or service not known'))

Or variations:

requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8080):
Max retries exceeded with url: /api/data
(Caused by NewConnectionError('... [Errno 111] Connection refused'))
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='api.example.com', port=443):
Max retries exceeded with url: /endpoint
(Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED]')))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='...', port=443):
Max retries exceeded with url: /path

The requests library tried to connect to a server multiple times and failed every time. The underlying cause is in the parenthetical message after “Caused by.”

Why This Happens

The requests library (via urllib3) automatically retries failed connections. When all retries are exhausted, it raises ConnectionError with “Max retries exceeded.” The real error is the nested cause.

Common causes (look at the “Caused by” message):

  • Name or service not known / getaddrinfo failed — DNS resolution failed. The hostname does not exist or DNS is unreachable.
  • Connection refused — The server is not running or not listening on that port.
  • Connection timed out — The server is unreachable (firewall, wrong IP, network issue).
  • SSLError / CERTIFICATE_VERIFY_FAILED — SSL/TLS certificate verification failed.
  • Too many open files — Connection pool or file descriptor exhaustion.
  • Network is unreachable — No network connectivity at all.
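You can also pull that root cause out programmatically, which is handy for logging or for branching on the failure type. A minimal sketch that walks Python's exception chain — the helper name `root_cause` is our own, not part of requests:

```python
import requests

def root_cause(exc):
    """Follow __cause__/__context__ links to the innermost exception."""
    while exc.__cause__ is not None or exc.__context__ is not None:
        exc = exc.__cause__ or exc.__context__
    return exc

try:
    # Nothing normally listens on port 9 of localhost,
    # so this fails fast with "Connection refused".
    requests.get("http://localhost:9/", timeout=2)
except requests.exceptions.ConnectionError as e:
    cause = root_cause(e)
    print(f"{type(cause).__name__}: {cause}")
```

Here the printed root is the underlying OS-level error (for example `ConnectionRefusedError`) rather than the generic “Max retries exceeded” wrapper.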

Fix 1: Check the URL

The most common cause is a wrong URL:

import requests

# Wrong — typo in hostname
response = requests.get("https://api.exmple.com/data")  # "exmple" not "example"

# Wrong — HTTP vs HTTPS
response = requests.get("https://localhost:8080/api")  # Server only supports HTTP
response = requests.get("http://localhost:8080/api")   # Fixed

# Wrong — missing port
response = requests.get("http://localhost/api")   # Tries port 80, server is on 8080
response = requests.get("http://localhost:8080/api")  # Fixed

# Wrong — trailing slash matters for some APIs
response = requests.get("https://api.example.com/users")
response = requests.get("https://api.example.com/users/")  # Try with/without

Verify the URL is reachable:

# Test from the command line
curl -v https://api.example.com/endpoint
ping api.example.com
nslookup api.example.com

Pro Tip: Always print or log the full URL before making the request during development. Many “Max retries exceeded” errors come from a simple typo, an empty variable in an f-string, or a misconfigured base URL.

Fix 2: Check if the Server is Running

If the error says “Connection refused”:

# The server at localhost:8080 is not running
requests.get("http://localhost:8080/api")
# ConnectionError: ... Connection refused

Check the server:

# Is the process running?
ps aux | grep my_server

# Is something listening on the port?
ss -tlnp | grep 8080
# or
netstat -tlnp | grep 8080

# Start the server (Django shown as an example)
python manage.py runserver 0.0.0.0:8080

For Docker services:

docker ps  # Check if the container is running
docker logs my-container  # Check for startup errors

Common in development: You started your client script before the server finished starting up. Add a startup delay or retry logic.
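One way to handle that race is to poll the port until the server accepts connections before sending the first request. A small helper along these lines — the name `wait_for_port` is ours, not part of requests:

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=0.5):
    """Return True once a TCP connection to host:port succeeds, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True  # Something is listening
        except OSError:
            time.sleep(interval)  # Refused or unreachable — wait and retry
    return False

# Usage: block until the dev server is up, then start making requests
# if wait_for_port("localhost", 8080, timeout=30):
#     response = requests.get("http://localhost:8080/api")
```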

Fix 3: Add Retry Logic with Backoff

For transient network issues, add proper retry handling:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()

retries = Retry(
    total=5,              # Total number of retries
    backoff_factor=1,     # Exponential backoff: ~1, 2, 4, 8 s between retries (exact timing varies by urllib3 version)
    status_forcelist=[500, 502, 503, 504],  # Retry on these HTTP status codes
    allowed_methods=["GET", "POST"],         # Which methods to retry — beware retrying non-idempotent POSTs
)

adapter = HTTPAdapter(max_retries=retries)
session.mount("http://", adapter)
session.mount("https://", adapter)

response = session.get("https://api.example.com/data", timeout=10)

Simple retry with exponential backoff:

import time
import requests

def fetch_with_retry(url, max_retries=3, timeout=10):
    for attempt in range(max_retries):
        try:
            response = requests.get(url, timeout=timeout)
            response.raise_for_status()
            return response
        except requests.exceptions.ConnectionError as e:
            if attempt < max_retries - 1:
                wait = 2 ** attempt  # 1, 2, 4 seconds
                print(f"Connection failed, retrying in {wait}s...")
                time.sleep(wait)
            else:
                raise

Common Mistake: Not setting a timeout on requests. Without a timeout, requests.get() can hang indefinitely waiting for a response. Always set both connect and read timeouts: requests.get(url, timeout=(5, 30)) — 5 seconds to connect, 30 seconds to read.

Fix 4: Fix SSL Certificate Errors

If the error contains SSLError or CERTIFICATE_VERIFY_FAILED:

# Quick fix for development — disable SSL verification
response = requests.get("https://api.example.com/data", verify=False)
# Warning: This disables ALL certificate checks — never use in production!

Proper fix — specify the CA bundle:

response = requests.get("https://api.example.com/data", verify="/path/to/ca-bundle.crt")

Fix — update certifi (Python’s CA bundle):

pip install --upgrade certifi

Fix — install system certificates:

# macOS
/Applications/Python\ 3.x/Install\ Certificates.command

# Linux
sudo apt install ca-certificates
sudo update-ca-certificates

# pip behind corporate proxy with custom CA
pip install --cert /path/to/corporate-ca.pem requests

For self-signed certificates:

# Add the self-signed cert to the trusted bundle
import certifi
import os

# Option 1: Point to your certificate
response = requests.get("https://internal-api.company.com", verify="/path/to/self-signed.crt")

# Option 2: Set environment variable
os.environ["REQUESTS_CA_BUNDLE"] = "/path/to/custom-ca-bundle.crt"

For general SSL certificate issues, see Fix: SSL certificate problem: unable to get local issuer certificate.

Fix 5: Fix DNS Resolution Issues

If the error says “Name or service not known” or “getaddrinfo failed”:

# Test DNS resolution
import socket

try:
    ip = socket.gethostbyname("api.example.com")
    print(f"Resolved to: {ip}")
except socket.gaierror as e:
    print(f"DNS resolution failed: {e}")

Common DNS fixes:

# Check DNS resolution
nslookup api.example.com
dig api.example.com

# Flush DNS cache
# Linux (systemd)
sudo resolvectl flush-caches   # older systemd: sudo systemd-resolve --flush-caches
# macOS
sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
# Windows
ipconfig /flushdns

For Docker containers — DNS might not work:

# docker-compose.yml
services:
  app:
    dns:
      - 8.8.8.8
      - 8.8.4.4

Use the IP address instead of the hostname as a temporary workaround:

# If DNS is the issue, connect directly to the IP.
# Note: this only works cleanly over plain HTTP — over HTTPS, certificate
# verification fails because the certificate names the hostname, not the IP.
response = requests.get("http://93.184.216.34/data",
                        headers={"Host": "api.example.com"})

Fix 6: Fix Connection Pool Exhaustion

If you make many requests in rapid succession, the connection pool can run out:

# Wrong — creates a new session for every request
for url in thousands_of_urls:
    response = requests.get(url)  # Each creates a new connection

# Fixed — reuse a session
session = requests.Session()
for url in thousands_of_urls:
    response = session.get(url)  # Reuses connections via keep-alive

Increase the pool size for concurrent requests:

from requests.adapters import HTTPAdapter

session = requests.Session()
adapter = HTTPAdapter(
    pool_connections=20,   # Number of connection pools
    pool_maxsize=20,       # Connections per pool
)
session.mount("http://", adapter)
session.mount("https://", adapter)
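With the larger pool in place, you can fan requests out across threads and let the shared session reuse connections. A sketch under the assumption that you fill in your own URL list (left empty here) and tune the worker count:

```python
from concurrent.futures import ThreadPoolExecutor

import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
adapter = HTTPAdapter(pool_connections=20, pool_maxsize=20)
session.mount("http://", adapter)
session.mount("https://", adapter)

def fetch(url):
    # Every worker thread shares the session's connection pool
    return session.get(url, timeout=10).status_code

urls = []  # e.g. thousands of API endpoints
with ThreadPoolExecutor(max_workers=20) as pool:
    status_codes = list(pool.map(fetch, urls))
```

Keeping pool_maxsize at least as large as max_workers avoids urllib3's “Connection pool is full, discarding connection” warning.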

Close connections properly:

# Use context manager
with requests.Session() as session:
    response = session.get("https://api.example.com/data")
    # Session is closed when the block exits

Fix 7: Fix Firewall and Proxy Issues

Check if a firewall is blocking the connection:

# Test TCP connectivity
nc -zv api.example.com 443
# or
telnet api.example.com 443

Configure proxy settings:

proxies = {
    "http": "http://proxy.company.com:8080",
    "https": "http://proxy.company.com:8080",
}

response = requests.get("https://api.example.com/data", proxies=proxies)

Or set environment variables:

export HTTP_PROXY="http://proxy.company.com:8080"
export HTTPS_PROXY="http://proxy.company.com:8080"
export NO_PROXY="localhost,127.0.0.1,.internal.company.com"

Bypass proxy for local connections:

response = requests.get("http://localhost:8080/api", proxies={"http": None, "https": None})

Fix 8: Fix Timeout Issues

If the error mentions “timed out”, the server is too slow or unreachable:

# Set explicit timeouts (connect_timeout, read_timeout)
response = requests.get("https://slow-api.example.com/data", timeout=(5, 30))

# 5 seconds to establish the connection
# 30 seconds to receive the response

For very slow APIs:

response = requests.get("https://slow-api.example.com/large-export", timeout=(10, 300))
# 5 minutes read timeout for large responses

With streaming for large responses:

with requests.get("https://example.com/large-file.zip", stream=True, timeout=10) as r:
    r.raise_for_status()
    with open("large-file.zip", "wb") as f:
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)

Still Not Working?

Check for rate limiting. Some APIs block you after too many requests:

import time
import requests

response = requests.get("https://api.example.com/data")
if response.status_code == 429:
    retry_after = int(response.headers.get("Retry-After", 60))
    time.sleep(retry_after)

Check for IPv6 issues. If the hostname resolves to both IPv4 and IPv6, and IPv6 is not configured properly:

# Force IPv4 by monkey-patching urllib3's address-family hook
import socket
import urllib3.util.connection

urllib3.util.connection.allowed_gai_family = lambda: socket.AF_INET

Check system resource limits:

# Check file descriptor limit
ulimit -n

# Increase if needed
ulimit -n 65536

Debug the exact connection failure:

import logging
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("urllib3").setLevel(logging.DEBUG)

response = requests.get("https://api.example.com/data")
# Shows detailed connection attempts and failures

For Python import errors when installing requests, see Fix: Python ModuleNotFoundError: No module named. For general connection refused errors, see Fix: ERR_CONNECTION_REFUSED localhost.


FixDevs

Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
