Fix: httpx Not Working — Async Client, Timeout, and Connection Pool Errors
Quick Answer
How to fix httpx errors — RuntimeError event loop is closed, ReadTimeout exception, ConnectionResetError, async client not closing properly, HTTP/2 not enabled, SSL verify failed, and proxy not working.
The Error
You await an async httpx call and get a runtime error:
RuntimeError: Event loop is closed
RuntimeError: There is no current event loop in thread
Or requests time out under load:
httpx.ReadTimeout: The read operation timed out
httpx.ConnectError: All connection attempts failed
Or the client warns about resource leaks:
ResourceWarning: unclosed transport <_SelectorSocketTransport ...>
Or you try to enable HTTP/2 and get an import error:
ImportError: Using http2=True, but the 'h2' package is not installed.
Make sure to install httpx using `pip install httpx[http2]`.
httpx is the modern Python HTTP client — supports both sync and async, HTTP/1.1 and HTTP/2, with a requests-compatible API. The async support and connection pooling are powerful but easy to misuse: forgetting to close clients, mixing sync and async patterns, or disabling timeouts entirely with timeout=None.
Why This Happens
httpx’s design separates the request from the client — httpx.get() creates a one-shot client per call, while httpx.Client() and httpx.AsyncClient() reuse connections across multiple requests. The connection pool stays open until you close the client. Forgetting to close clients leaks file descriptors and TCP connections.
Async usage requires being inside an event loop. Calling await outside an async function or inside a closed loop raises confusing runtime errors. The default timeout is 5 seconds — short for slow APIs and easy to hit unexpectedly.
Fix 1: Sync vs Async — Pick One Pattern
httpx supports both. Mixing them causes the most common errors.
import httpx
import asyncio

# SYNC — use httpx.Client or httpx.get()
with httpx.Client() as client:
    response = client.get("https://api.example.com/data")
    print(response.json())

# Or for simple one-off requests
response = httpx.get("https://api.example.com/data")

# ASYNC — use httpx.AsyncClient inside an async function
async def fetch():
    async with httpx.AsyncClient() as client:
        response = await client.get("https://api.example.com/data")
        return response.json()

result = asyncio.run(fetch())
WRONG — calling async client outside an event loop:
import httpx

# This doesn't work — AsyncClient methods return coroutines
client = httpx.AsyncClient()
response = client.get("https://api.example.com")  # Returns coroutine, not response
print(response.json())  # AttributeError: coroutine has no attribute 'json'
CORRECT — use sync client for sync code:
import httpx

with httpx.Client() as client:
    response = client.get("https://api.example.com")
    print(response.json())
Common Mistake: Creating an AsyncClient thinking you’ll get speed for free. Async only helps when you’re making multiple concurrent requests OR when you’re inside an existing async framework (FastAPI, asyncio). For sequential requests in a sync script, Client is simpler and just as fast.
Fix 2: Always Close Clients — Connection Leaks
ResourceWarning: unclosed transport
ResourceWarning: unclosed <httpx.AsyncClient object>
Clients hold open connections in their pool. Forgetting to close them leaks resources.
WRONG — client never closed:
client = httpx.Client()
response = client.get("https://api.example.com")
# Script ends — connection leaked
CORRECT — context manager closes automatically:
with httpx.Client() as client:
    response = client.get("https://api.example.com")
# Client closed when leaving the with-block
For long-lived clients (e.g., singleton in a web app), close explicitly on shutdown:
import httpx
import atexit

client = httpx.Client(timeout=30.0)

def cleanup():
    client.close()

atexit.register(cleanup)
Async cleanup pattern:
import httpx
import asyncio

async def main():
    async with httpx.AsyncClient() as client:
        # All requests inside this block share the connection pool
        responses = await asyncio.gather(
            client.get("https://api.example.com/a"),
            client.get("https://api.example.com/b"),
            client.get("https://api.example.com/c"),
        )
        return [r.json() for r in responses]

results = asyncio.run(main())
results = asyncio.run(main())For asyncio event loop and gather patterns, see Python asyncio gather error.
Fix 3: Timeout Configuration
httpx.ReadTimeout: The read operation timed out
httpx.ConnectTimeout: The connection attempt timed out
httpx.WriteTimeout: The write operation timed out
httpx.PoolTimeout: Pool timeout
httpx’s default timeout is 5 seconds. APIs slower than that (file uploads, long-running queries) hit this default constantly.
Set timeout per request:
import httpx

with httpx.Client() as client:
    # Single timeout value — applies to all phases
    response = client.get("https://slow-api.com/data", timeout=30.0)

    # Or fine-grained timeout control
    timeout = httpx.Timeout(
        connect=5.0,  # Time to establish connection
        read=30.0,    # Time to receive response after request sent
        write=10.0,   # Time to send request
        pool=5.0,     # Time to acquire connection from pool
    )
    response = client.get("https://api.com", timeout=timeout)
Set default timeout on the client:
client = httpx.Client(timeout=30.0)  # All requests use 30s
Disable timeout entirely (only for trusted, controlled APIs):
response = httpx.get("https://api.example.com", timeout=None)
Pro Tip: Always set an explicit timeout. The default 5 seconds is a footgun — it works for fast APIs in development but causes mysterious failures under load when API response times spike. Setting timeout=30.0 on the client matches the behavior most developers expect from requests.
Fix 4: Retries and Transport Errors
httpx.ConnectError: [Errno 111] Connection refused
httpx.RemoteProtocolError: Server disconnected without sending a response
httpx doesn’t retry by default. Network errors propagate immediately. For resilient code, configure retries via the transport:
import httpx
transport = httpx.HTTPTransport(retries=3) # Sync
async_transport = httpx.AsyncHTTPTransport(retries=3) # Async
client = httpx.Client(transport=transport)
async_client = httpx.AsyncClient(transport=async_transport)
response = client.get("https://flaky-api.com")
retries only retries connection-level errors (DNS failures, connection refused) — not HTTP-level errors like 500. For HTTP retries, use a wrapper:
import httpx
import time
from typing import Callable
def retry_request(fn: Callable[[], httpx.Response], max_retries: int = 3, backoff: float = 1.0) -> httpx.Response:
    for attempt in range(max_retries):
        try:
            response = fn()
            if response.status_code < 500:
                return response  # Success or client error — don't retry
            # 5xx — fall through and retry
        except (httpx.ConnectError, httpx.TimeoutException):
            if attempt == max_retries - 1:
                raise
        time.sleep(backoff * (2 ** attempt))  # Exponential backoff
    return response  # Last 5xx response after exhausting retries

with httpx.Client(timeout=30.0) as client:
    response = retry_request(lambda: client.get("https://api.example.com"))
For more comprehensive retry logic, use the tenacity library or the httpx-retries plugin.
Fix 5: HTTP/2 Support
ImportError: Using http2=True, but the 'h2' package is not installed.
httpx supports HTTP/2 but the dependency isn’t included by default:
pip install httpx[http2]
# Or
pip install h2
import httpx

with httpx.Client(http2=True) as client:
    response = client.get("https://www.example.com")
    print(response.http_version)  # 'HTTP/2' if the server supports it
HTTP/2 benefits:
- Multiplexing — multiple requests over a single connection (huge for many concurrent requests)
- Header compression
- Server push (rarely used in practice)
HTTP/2 caveats:
- Only works with HTTPS (HTTP/2 over plaintext is rarely deployed)
- The server must support HTTP/2 — response.http_version shows the negotiated version
- For single sequential requests, HTTP/2 isn’t faster than HTTP/1.1 — its benefit is concurrent request multiplexing
import httpx
import asyncio

async def fetch_many():
    async with httpx.AsyncClient(http2=True) as client:
        # All 100 requests share the same TCP connection (HTTP/2 multiplexing)
        responses = await asyncio.gather(*[
            client.get(f"https://api.example.com/items/{i}")
            for i in range(100)
        ])
        return responses
Fix 6: SSL Certificate Verification
httpx.ConnectError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED]
The server’s SSL certificate isn’t trusted. Common with self-signed certs, expired certs, or corporate networks with MITM proxies.
Disable verification (NOT recommended for production):
import httpx
# Disable for one request
response = httpx.get("https://self-signed.example.com", verify=False)
# Disable for all requests on a client
client = httpx.Client(verify=False)
Use a custom CA bundle (corporate networks):
client = httpx.Client(verify="/path/to/corporate-ca-bundle.pem")
Set the bundle via environment variable:
export SSL_CERT_FILE=/path/to/ca-bundle.pem
# httpx honors this (requests uses REQUESTS_CA_BUNDLE instead)
Update your system certificates (Linux):
# Debian/Ubuntu
sudo apt install ca-certificates
sudo update-ca-certificates
# macOS — install certifi's bundle
pip install --upgrade certifi
For Python requests library SSL errors with similar fixes, see Python SSL certificate verify failed.
Fix 7: Proxies and Authentication
import httpx
# HTTP proxy
client = httpx.Client(proxy="http://proxy.example.com:8080")
# HTTPS proxy with auth
client = httpx.Client(proxy="http://user:password@proxy.example.com:8080")
# Different proxy per scheme
client = httpx.Client(
    mounts={
        "http://": httpx.HTTPTransport(proxy="http://proxy:8080"),
        "https://": httpx.HTTPTransport(proxy="https://proxy:8443"),
    }
)
# SOCKS5 proxy
# pip install httpx[socks]
client = httpx.Client(proxy="socks5://localhost:1080")
Authentication:
import httpx
# Basic auth
client = httpx.Client(auth=("username", "password"))
response = client.get("https://api.example.com")
# Or per-request
response = httpx.get("https://api.example.com", auth=("user", "pass"))
# Bearer token
response = httpx.get(
    "https://api.example.com",
    headers={"Authorization": "Bearer YOUR_TOKEN"},
)
# Custom auth class — for OAuth, signed requests, etc.
class TokenAuth(httpx.Auth):
    def __init__(self, token):
        self.token = token

    def auth_flow(self, request):
        request.headers["Authorization"] = f"Bearer {self.token}"
        yield request

client = httpx.Client(auth=TokenAuth("my-token-123"))
Fix 8: Streaming Responses for Large Files
Loading a 5GB response into memory crashes the process. Stream it instead:
import httpx
# WRONG — loads entire response into memory
response = httpx.get("https://example.com/huge-file.zip")
with open("file.zip", "wb") as f:
    f.write(response.content)  # OOM for large files

# CORRECT — stream the response
with httpx.stream("GET", "https://example.com/huge-file.zip") as response:
    response.raise_for_status()
    with open("file.zip", "wb") as f:
        for chunk in response.iter_bytes(chunk_size=8192):
            f.write(chunk)
Streaming with progress:
import httpx
from tqdm import tqdm
with httpx.stream("GET", "https://example.com/large-file.tar.gz") as response:
    response.raise_for_status()
    total = int(response.headers.get("content-length", 0))
    with open("file.tar.gz", "wb") as f, tqdm(total=total, unit='B', unit_scale=True) as pbar:
        for chunk in response.iter_bytes(chunk_size=8192):
            f.write(chunk)
            pbar.update(len(chunk))
Async streaming:
import httpx
async def download_async(url, output):
    async with httpx.AsyncClient() as client:
        async with client.stream("GET", url) as response:
            response.raise_for_status()
            with open(output, "wb") as f:
                async for chunk in response.aiter_bytes(chunk_size=8192):
                    f.write(chunk)
Streaming JSON line-by-line (for ndjson or LLM streaming responses):
import httpx
import json
with httpx.stream("GET", "https://api.example.com/events") as response:
    for line in response.iter_lines():
        if line:
            data = json.loads(line)
            print(data)
Still Not Working?
httpx vs requests — When to Switch
- requests — synchronous, mature, simple. Use for scripts, sync code, when you don’t need async.
- httpx — sync OR async, HTTP/2 support, modern API. Use when you need async, when integrating with async frameworks (FastAPI), or when HTTP/2 multiplexing matters.
For requests-specific timeout and connection patterns, see Python requests timeout.
Mocking httpx in Tests
pip install pytest-httpx
import pytest
from pytest_httpx import HTTPXMock
import httpx

def test_api_call(httpx_mock: HTTPXMock):
    httpx_mock.add_response(
        url="https://api.example.com/data",
        json={"result": "success"},
    )
    with httpx.Client() as client:
        response = client.get("https://api.example.com/data")
        assert response.json() == {"result": "success"}
Working with FastAPI and TestClient
FastAPI’s TestClient is built on httpx. If your tests fail with async issues:
from fastapi.testclient import TestClient
from myapp import app
# Sync TestClient — uses httpx.Client internally
client = TestClient(app)

def test_endpoint():
    response = client.get("/api/items")
    assert response.status_code == 200
For async FastAPI tests, use AsyncClient directly:
import pytest
import httpx
from myapp import app
@pytest.mark.asyncio
async def test_async_endpoint():
    # Newer httpx versions removed the app= shortcut; pass the ASGI app via ASGITransport
    transport = httpx.ASGITransport(app=app)
    async with httpx.AsyncClient(transport=transport, base_url="http://test") as client:
        response = await client.get("/api/items")
        assert response.status_code == 200
For FastAPI dependency injection patterns that often surface in HTTP client tests, see FastAPI dependency injection error.
Connection Pool Tuning
For high-throughput services making many concurrent requests:
import httpx
limits = httpx.Limits(
    max_keepalive_connections=20,  # Pool size for keep-alive
    max_connections=100,           # Total max connections
    keepalive_expiry=30.0,         # Seconds before idle conn is closed
)
client = httpx.Client(limits=limits, timeout=30.0)
The defaults (max_connections=100, max_keepalive_connections=20) work for most workloads. Increase them only if you see httpx.PoolTimeout errors under load.
Cookies and Session Persistence
Like requests.Session, httpx.Client persists cookies across requests automatically:
import httpx
with httpx.Client() as client:
    # First request — server sets cookies
    client.post("https://example.com/login", data={"user": "x", "pw": "y"})
    # Subsequent requests — client sends back cookies automatically
    response = client.get("https://example.com/profile")
    # Inspect cookies
    print(client.cookies.jar)
For one-off requests where you need to send specific cookies:
response = httpx.get(
    "https://example.com",
    cookies={"session_id": "abc123"},
)
File Uploads
import httpx
# Single file
with open("photo.jpg", "rb") as f:
    response = httpx.post(
        "https://api.example.com/upload",
        files={"file": f},
    )

# Multiple files
files = {
    "image": ("photo.jpg", open("photo.jpg", "rb"), "image/jpeg"),
    "doc": ("readme.txt", open("readme.txt", "rb"), "text/plain"),
}
response = httpx.post("https://api.example.com/upload", files=files)

# Multipart with form fields alongside files
response = httpx.post(
    "https://api.example.com/upload",
    files={"file": open("data.csv", "rb")},
    data={"description": "March data", "category": "sales"},
)
Inspecting and Modifying Requests
httpx provides hooks to inspect or modify requests and responses globally:
import httpx
def log_request(request):
    print(f">> {request.method} {request.url}")

def log_response(response):
    request = response.request
    print(f"<< {response.status_code} {request.url}")

client = httpx.Client(
    event_hooks={
        "request": [log_request],
        "response": [log_response],
    }
)
response = client.get("https://example.com")
# >> GET https://example.com
# << 200 https://example.com
This is useful for debugging in development or building structured logging in production.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
Related Articles
Fix: FastAPI BackgroundTasks Not Working — Task Not Running or Dependency Errors
How to fix FastAPI BackgroundTasks — task not executing, dependency injection in tasks, error handling, Celery for heavy tasks, and lifespan-managed background workers.
Fix: Pydantic ValidationError — Field Required, Value Not a Valid Type, or Extra Fields
How to fix Pydantic v2 validation errors — required fields, type coercion, model_validator, custom validators, extra fields config, and migrating from Pydantic v1.
Fix: FastAPI Dependency Injection Errors — Dependencies Not Working
How to fix FastAPI dependency injection errors — async dependencies, database sessions, sub-dependencies, dependency overrides in tests, and common DI mistakes.
Fix: Python asyncio Blocking the Event Loop — Mixing Sync and Async Code
How to fix Python asyncio event loop blocking — using run_in_executor for sync calls, asyncio.to_thread, avoiding blocking I/O in coroutines, and detecting event loop stalls.