Fix: Python requests.get() Hanging — Timeout Not Working
Quick Answer
How to fix Python requests hanging forever — why requests.get() ignores timeout, how to set connect and read timeouts correctly, use session-level timeouts, and handle timeout exceptions properly.
The Error
A requests.get() call hangs indefinitely and never returns:
import requests
# This hangs forever if the server is slow or unresponsive
response = requests.get('https://api.example.com/data')

Or you set a timeout but it still hangs:
response = requests.get('https://api.example.com/data', timeout=5)
# Still hangs for minutes...

Or the timeout raises an exception you did not catch, crashing your program:
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='api.example.com', port=443):
Read timed out. (read timeout=5)

Or in a web server context, a single hanging request blocks a worker thread and eventually causes the entire server to stop responding.
Why This Happens
By default, requests has no timeout — it will wait forever for a response. This is almost never what you want in production code.
When you do set a timeout, there are two distinct phases where a timeout can occur:
Connect timeout — how long to wait to establish the TCP connection to the server. If the server is unreachable or behind a firewall that drops packets (rather than refusing the connection), the connect phase hangs.
Read timeout — how long to wait for the server to send data once the connection is established. It applies to the gap between bytes, not the total response time; in practice it usually fires while waiting for the first byte. A server that accepts the connection but is slow to respond causes a read timeout.
timeout=5 sets both to 5 seconds. A tuple sets them separately: timeout=(30, 5) means 30 seconds for connect and 5 seconds for read. Note the order is (connect, read), and a firewall that silently drops packets can still stall each request for the full connect window before the timeout fires.
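The two phases can be seen with a throwaway local server that accepts the TCP connection but never sends a byte: the connect succeeds, so only the read timeout can fire. A minimal local sketch (the server and port here are created on the fly, purely for illustration):

```python
import socket
import threading

import requests

# A throwaway local server that accepts TCP connections but never replies,
# so the connect phase succeeds and only the read phase can time out.
srv = socket.socket()
srv.bind(('127.0.0.1', 0))
srv.listen(1)
port = srv.getsockname()[1]
conns = []

def serve():
    while True:
        conn, _ = srv.accept()
        conns.append(conn)  # keep the socket open so the client just waits

threading.Thread(target=serve, daemon=True).start()

try:
    requests.get(f'http://127.0.0.1:{port}/', timeout=(5, 0.5))
    outcome = 'responded'
except requests.exceptions.ConnectTimeout:
    outcome = 'connect timed out'
except requests.exceptions.ReadTimeout:
    outcome = 'read timed out'
print(outcome)
```

Because the connection is accepted immediately, the half-second read timeout is what triggers here, not the connect timeout.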
Other causes:
- Large response body — the read timeout applies to the time between bytes, not the total transfer time. A slow server that trickles data may never trigger the read timeout.
- Retry logic retrying on timeout — if your code retries indefinitely on ReadTimeout, the overall call still hangs.
- DNS resolution hanging — DNS resolution happens before the connect phase and is not covered by requests’s timeout.
Fix 1: Always Set a Timeout
The most important rule: never call requests without a timeout in production code.
import requests
# Bad — no timeout
response = requests.get('https://api.example.com/data')
# Good — 10 second timeout for both connect and read
response = requests.get('https://api.example.com/data', timeout=10)
# Best — separate connect and read timeouts
# connect: 5s to establish TCP connection
# read: 30s to wait for response bytes
response = requests.get('https://api.example.com/data', timeout=(5, 30))

What timeout=(connect, read) means:
# (5, 30) means:
# - Wait up to 5 seconds to establish the TCP connection
# - Wait up to 30 seconds between bytes received from the server
# - Does NOT mean the total request must complete in 35 seconds
response = requests.get(url, timeout=(5, 30))

For large file downloads, the read timeout applies per-chunk — a large download can take longer than the read timeout as long as data arrives continuously.
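Because the read timeout is per-chunk, a hard cap on total download time has to be enforced in your own loop. A sketch under that assumption (download_with_deadline is a hypothetical helper, not part of requests):

```python
import time

import requests

def download_with_deadline(url: str, dest: str, total_deadline: float = 60.0) -> int:
    """Stream a download while enforcing a wall-clock cap across all chunks."""
    start = time.monotonic()
    written = 0
    # stream=True defers the body; the read timeout then applies per chunk
    with requests.get(url, stream=True, timeout=(5, 30)) as response:
        response.raise_for_status()
        with open(dest, 'wb') as f:
            for chunk in response.iter_content(chunk_size=8192):
                if time.monotonic() - start > total_deadline:
                    raise TimeoutError(f"Download exceeded {total_deadline}s")
                f.write(chunk)
                written += len(chunk)
    return written
```

The deadline check runs between chunks, so a server trickling data can no longer hold the download open indefinitely.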
Fix 2: Handle Timeout Exceptions
Setting a timeout without catching the exception causes your program to crash:
import requests
from requests.exceptions import Timeout, ConnectionError, RequestException
def fetch_data(url: str) -> dict | None:
    try:
        response = requests.get(url, timeout=(5, 30))
        response.raise_for_status()  # Raises HTTPError for 4xx/5xx
        return response.json()
    except Timeout:
        print(f"Request timed out: {url}")
        return None
    except ConnectionError as e:
        print(f"Connection failed: {e}")
        return None
    except requests.HTTPError as e:
        print(f"HTTP error {e.response.status_code}: {url}")
        return None
    except RequestException as e:
        # Catch-all for any other requests error
        print(f"Request failed: {e}")
        return None

Distinguish connect timeout from read timeout:
from requests.exceptions import ConnectTimeout, ReadTimeout
try:
    response = requests.get(url, timeout=(5, 30))
except ConnectTimeout:
    # Server is unreachable — do not retry immediately
    print("Could not connect to server")
except ReadTimeout:
    # Connected but server is slow — might retry with longer timeout
    print("Server connected but timed out sending response")

Fix 3: Use a Session with a Default Timeout
If you make many requests, configure the timeout once on a Session object:
import requests
from requests.adapters import HTTPAdapter
class TimeoutHTTPAdapter(HTTPAdapter):
    def __init__(self, *args, timeout=(5, 30), **kwargs):
        self.timeout = timeout
        super().__init__(*args, **kwargs)

    def send(self, request, **kwargs):
        kwargs.setdefault('timeout', self.timeout)
        return super().send(request, **kwargs)
# Create a session with a default timeout on all requests
session = requests.Session()
adapter = TimeoutHTTPAdapter(timeout=(5, 30))
session.mount('http://', adapter)
session.mount('https://', adapter)
# All requests through this session respect the timeout
response = session.get('https://api.example.com/data')
response = session.post('https://api.example.com/items', json={'name': 'test'})

Simpler approach — monkey-patch the default timeout:
import requests
# Override the default send method to always include a timeout
original_send = requests.Session.send
def patched_send(self, *args, **kwargs):
    kwargs.setdefault('timeout', (5, 30))
    return original_send(self, *args, **kwargs)
requests.Session.send = patched_send
# Now all requests have the default timeout
requests.get('https://api.example.com/data')  # Uses (5, 30) timeout

Pro Tip: Use the TimeoutHTTPAdapter pattern in library code where you cannot guarantee callers will pass a timeout. In application code, always pass timeout explicitly to make the behavior clear.
Fix 4: Set a Total Request Timeout with a Thread or Signal
requests’s timeout parameter does not guarantee the total time. For a hard time limit on the entire operation (including retries, redirects, and large response reading):
Using a thread with concurrent.futures:
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import requests
def fetch_with_hard_timeout(url: str, total_timeout: float = 10.0) -> dict:
    # Do not use `with ThreadPoolExecutor(...)` here: exiting the context
    # manager calls shutdown(wait=True), which blocks until the hung
    # request finishes — defeating the hard timeout.
    executor = ThreadPoolExecutor(max_workers=1)
    future = executor.submit(requests.get, url, timeout=(5, 30))
    try:
        response = future.result(timeout=total_timeout)
        return response.json()
    except TimeoutError:
        raise TimeoutError(f"Total request time exceeded {total_timeout}s")
    finally:
        # wait=False lets the worker thread finish (or hang) in the background
        executor.shutdown(wait=False)

Note that the worker thread keeps running until the underlying request finishes; the hard timeout bounds your caller, not the socket.

Using signal (Unix only — not available on Windows):
import signal
import requests
class HardTimeout(Exception):
    pass

def timeout_handler(signum, frame):
    raise HardTimeout("Request exceeded hard timeout")

def fetch_with_signal_timeout(url: str, hard_limit: int = 15) -> dict:
    # signal.alarm only works in the main thread of the main interpreter
    signal.signal(signal.SIGALRM, timeout_handler)
    signal.alarm(hard_limit)  # SIGALRM fires after hard_limit seconds
    try:
        response = requests.get(url, timeout=(5, 30))
        response.raise_for_status()
        return response.json()
    finally:
        signal.alarm(0)  # Cancel the alarm

Fix 5: Use Retry Logic with Backoff
For transient timeouts, retry with exponential backoff instead of crashing:
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
def create_session_with_retry(
    retries: int = 3,
    backoff_factor: float = 0.5,
    timeout: tuple = (5, 30),
) -> requests.Session:
    session = requests.Session()
    retry = Retry(
        total=retries,
        read=retries,
        connect=retries,
        backoff_factor=backoff_factor,
        status_forcelist=[429, 500, 502, 503, 504],  # Retry on these HTTP codes
        allowed_methods=['GET', 'POST'],  # Careful: a retried POST may execute twice
    )
    adapter = HTTPAdapter(max_retries=retry)
    session.mount('http://', adapter)
    session.mount('https://', adapter)

    # Patch for default timeout
    original_send = session.send

    def send_with_timeout(*args, **kwargs):
        kwargs.setdefault('timeout', timeout)
        return original_send(*args, **kwargs)

    session.send = send_with_timeout
    return session
# Usage
session = create_session_with_retry()
response = session.get('https://api.example.com/data')

Warning: Retry from urllib3 retries read timeouts only for methods listed in allowed_methods (by default the idempotent methods, so POST is excluded). Connect failures are retried for any method, since no request reached the server. If you add POST to allowed_methods, be aware the server may already have processed a request whose response timed out:

retry = Retry(total=3, read=3, connect=3, backoff_factor=0.5)
# total=3 caps the combined number of retries across connect and read failures
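If you do need to retry on read timeouts, including for POST, a small explicit loop keeps the behavior obvious. A sketch (get_with_timeout_retry is a hypothetical helper, not part of requests):

```python
import time

import requests
from requests.exceptions import ReadTimeout

def get_with_timeout_retry(url, attempts=3, backoff=0.5, timeout=(5, 30)):
    """Retry a GET on read timeout with exponential backoff."""
    for attempt in range(attempts):
        try:
            return requests.get(url, timeout=timeout)
        except ReadTimeout:
            if attempt == attempts - 1:
                raise  # Out of attempts — let the caller handle it
            time.sleep(backoff * (2 ** attempt))  # 0.5s, 1s, 2s, ...
```

The explicit loop also makes the total worst-case wait easy to reason about: attempts × read timeout, plus the backoff sleeps in between.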
Fix 6: Fix DNS Hanging (Timeout Doesn’t Help)
DNS resolution in requests uses Python’s standard socket.getaddrinfo() which is not covered by the requests timeout parameter. A DNS resolution that hangs will block indefinitely regardless of your timeout setting:
import socket

# Check if DNS resolution is the problem.
# Note: socket.getaddrinfo() has no timeout parameter — if it hangs,
# only resolver configuration (or running it in a thread) can bound it.
try:
    info = socket.getaddrinfo('api.example.com', 443)
    print(f"DNS resolved to: {info[0][4][0]}")
except socket.gaierror as e:
    print(f"DNS lookup failed: {e}")

Workaround — use dnspython with a timeout:
import dns.resolver

def resolve_with_timeout(hostname: str, timeout: float = 5.0) -> str:
    resolver = dns.resolver.Resolver()
    resolver.lifetime = timeout  # Total time allowed for the query, including retries
    answers = resolver.resolve(hostname, 'A')
    return str(answers[0])

# Or use httpx, which applies its timeout to more of the request lifecycle
import httpx

async def fetch():
    async with httpx.AsyncClient(timeout=httpx.Timeout(5.0)) as client:
        response = await client.get('https://api.example.com/data')
        return response.json()

Consider switching to httpx for async or better timeout control:
import httpx

# Synchronous with full timeout control
with httpx.Client(timeout=httpx.Timeout(5.0, connect=3.0, read=30.0)) as client:
    response = client.get('https://api.example.com/data')
    data = response.json()

# Async
async def fetch():
    async with httpx.AsyncClient(timeout=5.0) as client:
        response = await client.get('https://api.example.com/data')
        return response.json()

Still Not Working?
Confirm the hang is in requests and not your code. Add debug logging:
import logging
import requests
logging.basicConfig(level=logging.DEBUG)
# urllib3's debug logs show connection setup and each request, which tells
# you whether the hang is during connect or while waiting to read
response = requests.get('https://api.example.com/data', timeout=10)

Test with curl to isolate the issue:
# If curl also hangs, the problem is server-side or network-level
curl -v --max-time 10 https://api.example.com/data
# Check DNS resolution time
time curl -o /dev/null -s -w "%{time_namelookup}\n" https://api.example.com/data

Check for a proxy that is blocking or hanging the connection:
# Disable proxies explicitly
response = requests.get(url, timeout=10, proxies={'http': None, 'https': None})
# Or check what proxies are configured
import urllib.request
print(urllib.request.getproxies())

For related Python networking issues, see Fix: Python ConnectionError Max Retries Exceeded and Fix: Python SSL Certificate Verify Failed.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
Related Articles
Fix: Flask Route Returns 404 Not Found
How to fix Flask routes returning 404 — trailing slash redirect, Blueprint prefix issues, route not registered, debug mode, and common URL rule mistakes.
Fix: Python dataclass Mutable Default Value Error (ValueError / TypeError)
How to fix Python dataclass mutable default errors — why lists, dicts, and sets cannot be default field values, how to use field(default_factory=...), and common dataclass pitfalls with inheritance and ClassVar.
Fix: Python SSL: CERTIFICATE_VERIFY_FAILED
How to fix Python SSL CERTIFICATE_VERIFY_FAILED error caused by missing root certificates on macOS, expired system certs, corporate proxies, and self-signed certificates in requests, urllib, and httpx.
Fix: AWS ECS Task Failed to Start
How to fix ECS tasks that fail to start — port binding errors, missing IAM permissions, Secrets Manager access, essential container exit codes, and health check failures.