Fix: Locust Not Working — User Class Errors, Distributed Mode, and Throughput Issues
Quick Answer
How to fix Locust errors — no locustfile found, User class not detected, worker connection refused, distributed mode throughput lower than single-node, StopUser exception, FastHttpUser vs HttpUser, and headless CSV reports.
The Error
You start Locust and it can’t find your test file:

```
$ locust
[2025-04-09 10:23:45,123] localhost/ERROR/locust.main: Could not find any locustfile!
Ensure file ends in '_test.py', '_locust.py', 'locustfile.py' or is named locustfile.py.
```

Or your test starts but no users actually make requests:

```
Users: 100    RPS: 0.0    Response time: -
```

Or workers can’t connect to the master in distributed mode:

```
[2025-04-09 10:24:12,456] worker-1/WARNING/locust.runners:
Failed to connect to master (tcp://localhost:5557)
```

Or distributed mode’s total throughput is lower than a single-node run:

```
Single node:         5000 RPS
Master + 4 workers:  3200 RPS   # Why?
```

Or a StopUser exception crashes the test instead of gracefully stopping one user:

```
RuntimeError: Cannot call from outside of a User.
```

Locust is a Python load-testing framework where you define user behavior as Python classes. The core ideas are simple, but the execution model (gevent-based coroutines, distributed mode via ZeroMQ, stats aggregation) produces specific failure modes that catch newcomers. This guide covers them.
Why This Happens
Locust runs each simulated user as a gevent greenlet, not an OS thread. Greenlets are lightweight but cooperative: a greenlet only gives up control when it performs an operation that yields to the event loop. Locust relies on gevent’s monkey-patching of the standard library, so socket I/O made through pure-Python libraries like requests yields correctly. Blocking calls that bypass the patched stdlib (C-extension database drivers, heavy CPU work) never yield, and one stuck greenlet stalls every simulated user in that worker process, which tanks throughput.
Distributed mode spawns a master (coordinator) and multiple workers (load generators). If workers can’t reach the master on port 5557, they silently retry and the test never starts generating load. If the master’s own machine becomes a bottleneck (most common: processing stats from many workers), throughput suffers.
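To see why one blocking call hurts everyone, here is a toy round-robin scheduler built on plain generators. It is a deliberately simplified stand-in for gevent’s hub, not Locust’s actual implementation: tasks that yield interleave freely, while a task that blocks before yielding holds up every other task in the process.

```python
import time

def cooperative_task(name, log):
    for i in range(2):
        log.append(f"{name}-{i}")
        yield  # cooperative yield, like patched I/O under gevent

def blocking_task(name, log):
    log.append(f"{name}-start")
    time.sleep(0.2)  # blocking call: nothing else can run meanwhile
    log.append(f"{name}-end")
    yield

def run(tasks):
    # Naive round-robin scheduler standing in for gevent's event loop
    queue = list(tasks)
    while queue:
        task = queue.pop(0)
        try:
            next(task)
            queue.append(task)
        except StopIteration:
            pass

log = []
run([cooperative_task("a", log), blocking_task("b", log), cooperative_task("c", log)])
print(log)  # ['a-0', 'b-start', 'b-end', 'c-0', 'a-1', 'c-1']
```

Note that "c" does not get to run at all until the blocking task finishes its sleep; in a real worker that sleep would be an unpatched database call or CPU-bound loop.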
Fix 1: Locustfile Structure and Discovery
```python
# locustfile.py
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    wait_time = between(1, 3)  # Random 1-3s between requests

    @task
    def index_page(self):
        self.client.get("/")

    @task(3)  # 3x more likely than other tasks
    def api_call(self):
        self.client.get("/api/products")

    def on_start(self):
        self.client.post("/login", json={"user": "test", "pw": "test"})
```

Run it:

```bash
locust                        # Looks for locustfile.py in CWD
locust -f path/to/mytest.py   # Explicit file
locust -f locustfiles/*.py    # Multiple files (Locust 2.5+)
```

File naming conventions:
- `locustfile.py` (standard)
- Any `*_locust.py`
- Any `*_test.py`
Web UI mode (default):
```bash
locust -f locustfile.py
# Open http://localhost:8089 in browser
# Set users, spawn rate, and host, then click Start
```

Headless mode (for CI):

```bash
locust -f locustfile.py \
  --host https://staging.example.com \
  --users 100 \
  --spawn-rate 10 \
  --run-time 5m \
  --headless \
  --csv results \
  --only-summary
```

Common Mistake: Writing a locustfile with just functions and no User class. Locust requires at least one class inheriting from User or HttpUser — otherwise it finds the file but reports “no User class detected” and does nothing. The @task decorator applies to methods of a User class, not standalone functions.
Fix 2: @task Behavior and Weighting
```python
from locust import HttpUser, task, between

class MyUser(HttpUser):
    wait_time = between(1, 5)

    @task(1)  # Weight 1 (least likely)
    def rare_task(self):
        self.client.get("/rare")

    @task(5)  # Weight 5 (5x more likely than rare_task)
    def common_task(self):
        self.client.get("/common")

    @task
    def default_weight(self):  # Default weight: 1
        self.client.get("/default")
```

At runtime, Locust picks each task at random, weighted by these values. With weights 1, 5, 1: rare_task fires ~14% of the time, common_task ~71%, default_weight ~14%.
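The weight math is an ordinary categorical distribution, so you can sanity-check the expected mix with a quick stdlib simulation (this reproduces the selection probabilities, it is not Locust’s internal code):

```python
import random
from collections import Counter

# Task weights as declared with @task(1), @task(5), @task
weights = {"rare_task": 1, "common_task": 5, "default_weight": 1}

random.seed(42)  # reproducible simulation
picks = random.choices(list(weights), weights=list(weights.values()), k=100_000)
counts = Counter(picks)

for name, weight in weights.items():
    expected = weight / sum(weights.values())  # 1/7, 5/7, 1/7
    observed = counts[name] / len(picks)
    print(f"{name}: expected {expected:.0%}, observed {observed:.1%}")
```

Each weight divided by the total (7 here) gives the selection probability, which is why the 5-weighted task fires roughly five times as often as each 1-weighted task.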
Task sequences — execute in order:
```python
from locust import HttpUser, task, SequentialTaskSet

class UserFlow(SequentialTaskSet):
    @task
    def step_1_login(self):
        self.client.post("/login", json={"user": "x", "pw": "y"})

    @task
    def step_2_browse(self):
        self.client.get("/products")

    @task
    def step_3_checkout(self):
        self.client.post("/checkout", json={"item_id": 42})

class MyUser(HttpUser):
    tasks = [UserFlow]
```

TaskSet vs SequentialTaskSet:

- TaskSet: random, weighted task selection (like regular `@task` on a User)
- SequentialTaskSet: always runs tasks in declaration order, then loops back to the first
on_start and on_stop:
```python
class MyUser(HttpUser):
    def on_start(self):
        # Runs once per simulated user when the user starts
        self.login()

    def on_stop(self):
        # Runs when the user is being stopped
        self.client.post("/logout")

    def login(self):
        response = self.client.post("/login", json={"user": "x", "pw": "y"})
        self.token = response.json()["token"]

    @task
    def protected_endpoint(self):
        self.client.get("/me", headers={"Authorization": f"Bearer {self.token}"})
```

Fix 3: HttpUser vs FastHttpUser
```python
from locust import HttpUser, FastHttpUser, task

class StandardUser(HttpUser):
    # Uses the requests library — compatible with most patterns
    @task
    def index(self):
        self.client.get("/")

class FastUser(FastHttpUser):
    # Uses geventhttpclient — 3–5x faster, fewer features
    @task
    def index(self):
        self.client.get("/")
```

Performance comparison:
| Feature | HttpUser | FastHttpUser |
|---|---|---|
| Library | requests | geventhttpclient |
| Throughput | Baseline | 3–5x faster |
| HTTP/2 | No | No |
| WebSockets | No (use separate lib) | No |
| Session support | Yes (requests.Session) | Yes |
| Custom hooks | Full requests API | Limited |
Use FastHttpUser when generating >1000 RPS per worker. The requests library’s overhead becomes the bottleneck.
Use HttpUser when you need requests-specific features: retries, complex authentication, file uploads, or middleware.
Pro Tip: Start with HttpUser for familiarity. Switch to FastHttpUser only when you can measure that requests is the bottleneck — which usually means one worker can’t generate enough load. Most tests never hit that limit.
Fix 4: Distributed Mode — Master and Workers
```bash
# On master machine
locust -f locustfile.py --master

# On each worker machine (can be same host for testing)
locust -f locustfile.py --worker --master-host=master.example.com

# Or local test with multiple workers on one machine
locust -f locustfile.py --master &
locust -f locustfile.py --worker &
locust -f locustfile.py --worker &
```

Workers must have identical locustfile code — they execute the tasks, while the master coordinates.

Master port — default is 5557, used for worker communication:

```bash
locust -f locustfile.py --master --master-bind-port 5557
locust -f locustfile.py --worker --master-host master.example.com --master-port 5557
```
Connection refused to master:

```
Failed to connect to master (tcp://master.example.com:5557)
```

Fixes:

- Master not started — run it first
- Firewall blocking port 5557 — open it on the master host
- Wrong `--master-host` — it must be reachable from the worker machines
Common Mistake: Running distributed Locust on one machine to “simulate” high load, and seeing total RPS lower than running without distributed mode. A single machine can’t generate more load as master+workers than as a single node — you’re just adding coordination overhead. Distributed mode only helps when workers run on different machines with their own CPU and network.
When distributed mode is slower than single-node:
- All workers on the master machine: CPU contention
- Network bottleneck on the target (not the load generators)
- Stats aggregation overhead: each worker reports stats to the master roughly every 3 seconds; 50+ workers hammer the master CPU
Rule of thumb: 1 worker per 2 CPU cores on the load generator machine. Scale by adding machines, not workers on one machine.
Fix 5: Throughput Tuning
Increase users and decrease wait time:
```python
from locust import HttpUser, task, constant

class HighThroughputUser(HttpUser):
    wait_time = constant(0)  # No wait — maximum load per user

    @task
    def request(self):
        self.client.get("/endpoint")
```

Or use `between(0.1, 0.5)` for some randomness.
constant_pacing — target a specific RPS per user:
```python
from locust import HttpUser, task, constant_pacing

class PacedUser(HttpUser):
    wait_time = constant_pacing(1)  # 1 request per second per user

    @task
    def request(self):
        self.client.get("/endpoint")
```

With 100 users, this generates ~100 RPS total as long as the endpoint responds in under a second: Locust subtracts the task’s runtime from the wait to hit the target spacing. If a task takes longer than the interval, there is no wait at all and the effective rate drops below the target.
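The arithmetic behind this pacing can be sketched as a pure function (a simplified model of the behavior, not Locust’s source):

```python
def pacing_wait(interval: float, task_duration: float) -> float:
    """Wait time so that task start-to-start spacing is `interval` seconds."""
    return max(0.0, interval - task_duration)

# Fast task: pad out the remainder of the 1s interval
print(pacing_wait(1.0, 0.3))  # 0.7
# Slow task: no wait, so the effective rate falls below 1/s
print(pacing_wait(1.0, 1.8))  # 0.0
```

So with N users and `constant_pacing(p)`, the ceiling is N/p requests per second; slower responses lower it but never raise it.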
Increase worker count with `--processes` (Locust 2.16+):

```bash
locust -f locustfile.py --processes 4 --headless --users 1000 --spawn-rate 50
```

This runs 4 worker processes on one machine — each can use a full CPU core.
Check worker CPU usage during a run. If a single worker hits 100% CPU, you’re bottlenecked — add more workers or machines.
Fix 6: Custom Metrics and Response Validation
```python
from locust import HttpUser, task

class ValidatingUser(HttpUser):
    @task
    def api_call(self):
        # catch_response=True is required for response.success()/failure()
        with self.client.get("/api/data", name="/api/data", catch_response=True) as response:
            # Fail the request if status or body is wrong
            if response.status_code != 200:
                response.failure(f"Expected 200, got {response.status_code}")
            elif "error" in response.text:
                response.failure("Response contains 'error'")
```

`name=` groups URLs with variable paths:
```python
import random

user_id = random.randint(1, 1000)
# Each URL looks different; group them in stats as /users/:id
self.client.get(f"/users/{user_id}", name="/users/:id")
```

Custom events and metrics:
```python
from locust import events

@events.request.add_listener
def on_request(request_type, name, response_time, response_length, exception, **kwargs):
    # Called for every request — log to an external system
    if exception:
        print(f"Request failed: {name}, {exception}")
```

Record custom non-HTTP metrics (DB latency, queue depth):
```python
from locust import User, events, task

class MyUser(User):
    @task
    def custom_check(self):
        try:
            # Your helper: runs the query and returns its latency in seconds
            db_latency = measure_db_query()
            events.request.fire(
                request_type="DB",
                name="select_users",
                response_time=db_latency * 1000,  # Locust expects milliseconds
                response_length=0,
                exception=None,
            )
        except Exception as e:
            events.request.fire(
                request_type="DB",
                name="select_users",
                response_time=0,
                response_length=0,
                exception=e,
            )
```

Fix 7: Parameterized and Data-Driven Tests
CSV test data:
```python
import csv
import random
from locust import HttpUser, task

class LoginUser(HttpUser):
    def on_start(self):
        with open("users.csv") as f:
            self.users = list(csv.DictReader(f))

    @task
    def login(self):
        user = random.choice(self.users)
        self.client.post("/login", json={
            "email": user["email"],
            "password": user["password"],
        })
```

Unique data per simulated user (each user uses a different account):
```python
import csv
from queue import Empty, Queue

from locust import HttpUser, task
from locust.exception import StopUser

user_queue = Queue()
with open("users.csv") as f:
    for row in csv.DictReader(f):
        user_queue.put(row)

class SingleUserPerAccount(HttpUser):
    def on_start(self):
        try:
            self.creds = user_queue.get_nowait()
        except Empty:
            raise StopUser()  # Stop this user if no accounts are left

    @task
    def login(self):
        self.client.post("/login", json=self.creds)
```

For distributed mode, share data via an external store (Redis, SQLite) rather than a shared Queue — workers are separate processes.
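A minimal sketch of the SQLite variant, assuming a database file all worker processes can reach (e.g. on a shared volume); the `accounts` table and its `claimed` column are made-up names for illustration. Each `on_start` atomically claims one unused account:

```python
import sqlite3

def claim_account(db_path: str):
    """Atomically claim one unused account; returns None when all are taken."""
    conn = sqlite3.connect(db_path, timeout=30, isolation_level=None)
    try:
        # BEGIN IMMEDIATE takes the write lock up front, so the
        # SELECT + UPDATE pair is atomic across worker processes.
        conn.execute("BEGIN IMMEDIATE")
        row = conn.execute(
            "SELECT id, email, password FROM accounts WHERE claimed = 0 LIMIT 1"
        ).fetchone()
        if row is None:
            conn.execute("COMMIT")
            return None
        conn.execute("UPDATE accounts SET claimed = 1 WHERE id = ?", (row[0],))
        conn.execute("COMMIT")
        return {"email": row[1], "password": row[2]}
    finally:
        conn.close()  # an uncommitted transaction is rolled back on close

# Demo: seed two accounts, then claim until the pool is exhausted
conn = sqlite3.connect("accounts.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS accounts "
    "(id INTEGER PRIMARY KEY, email TEXT, password TEXT, claimed INTEGER DEFAULT 0)"
)
conn.executemany(
    "INSERT INTO accounts (email, password) VALUES (?, ?)",
    [("a@example.com", "pw1"), ("b@example.com", "pw2")],
)
conn.commit()
conn.close()

print(claim_account("accounts.db"))  # one seeded account
print(claim_account("accounts.db"))  # the other account
print(claim_account("accounts.db"))  # None: pool exhausted
```

In a locustfile, `on_start` would call `claim_account(...)` and `raise StopUser()` when it returns None, mirroring the Queue version above.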
Fix 8: CI Integration and Reporting
```bash
# Run headless with CSV output
locust -f locustfile.py \
  --host https://staging.example.com \
  --users 500 \
  --spawn-rate 50 \
  --run-time 10m \
  --headless \
  --csv results \
  --html report.html \
  --exit-code-on-error 1
```

Output files:

- `results_stats.csv` — aggregate stats per endpoint
- `results_stats_history.csv` — time-series data
- `results_failures.csv` — failure details
- `results_exceptions.csv` — Python exceptions raised in tasks
- `report.html` — standalone interactive report
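If you prefer not to add a plugin dependency, you can enforce thresholds in CI by parsing `results_stats.csv` yourself. The sketch below assumes the column names used by recent Locust releases (`Name`, `Request Count`, `Failure Count`, `Average Response Time`); check your own CSV header, since these have changed across versions:

```python
import csv
import sys

def check_thresholds(stats_csv, max_fail_ratio=0.01, max_avg_ms=500.0):
    """Return a list of threshold violations from a Locust stats CSV."""
    problems = []
    with open(stats_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["Name"] != "Aggregated":
                continue  # only check the overall totals row
            requests = int(row["Request Count"])
            failures = int(row["Failure Count"])
            avg_ms = float(row["Average Response Time"])
            if requests and failures / requests > max_fail_ratio:
                problems.append(f"fail ratio {failures / requests:.2%}")
            if avg_ms > max_avg_ms:
                problems.append(f"avg response {avg_ms:.0f}ms")
    return problems

# Demo with a synthetic stats file (a real run produces results_stats.csv)
with open("demo_stats.csv", "w", newline="") as f:
    f.write("Name,Request Count,Failure Count,Average Response Time\n")
    f.write("/api/products,900,2,120.5\n")
    f.write("Aggregated,1000,30,640.0\n")

violations = check_thresholds("demo_stats.csv")
if violations:
    print("Thresholds breached:", ", ".join(violations))
    # In CI you would exit nonzero here: sys.exit(2)
```

The demo row breaches both limits (3% failures, 640ms average), so a CI step wrapping this in `sys.exit(2)` would fail the build.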
Fail CI on threshold breach. Note that the `--check-*` flags come from the locust-plugins package (`pip install locust-plugins`), not core Locust:

```bash
# --check-rps 100: fail if RPS drops below 100
# --check-fail-ratio 0.01: fail if error rate > 1%
# --check-avg-response-time 500: fail if avg response > 500ms
locust -f locustfile.py \
  --host ... \
  --users 100 \
  --run-time 5m \
  --headless \
  --csv results \
  --check-rps 100 \
  --check-fail-ratio 0.01 \
  --check-avg-response-time 500 \
  --exit-code-on-error 2
```

GitHub Actions example:
```yaml
# .github/workflows/load-test.yml
name: Load Test
on:
  pull_request:
  workflow_dispatch:
jobs:
  load-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install locust
      - run: |
          locust -f locustfile.py \
            --host https://staging.example.com \
            --users 50 --spawn-rate 10 --run-time 3m \
            --headless --csv results
      - uses: actions/upload-artifact@v4
        with:
          name: load-test-results
          path: results*.csv
```

Still Not Working?
Locust vs k6 vs JMeter
- Locust — Python-based, flexible, best for teams that already write Python.
- k6 — JavaScript-based, extremely fast (Go runtime), best for JavaScript-heavy teams.
- JMeter — GUI-focused, Java-based, huge plugin ecosystem, best for complex enterprise scenarios.
Choose Locust when you want to test APIs with complex Python logic (auth tokens, custom protocols). Choose k6 when raw throughput and a scripted JS approach matter more.
Debugging Low Throughput
- Check worker CPU — if < 100%, you have headroom. Increase users.
- Check target CPU/network — the bottleneck may be the system under test, not Locust.
- Switch to `FastHttpUser` if `HttpUser` is saturating the worker CPU.
- Add machines — single-machine distributed mode adds overhead without benefit.
Integration with Prometheus/Grafana
```python
from locust import events
from locust.runners import WorkerRunner
from prometheus_client import start_http_server, Counter, Histogram

REQUESTS = Counter("locust_requests", "Total requests", ["method", "name", "status"])
LATENCY = Histogram("locust_latency_ms", "Request latency", ["name"])

@events.init.add_listener
def on_init(environment, **kwargs):
    # Expose metrics from the master (or local) process only.
    # Note: request events fire in the process that makes the request,
    # so for distributed runs export from each worker (distinct ports) instead.
    if not isinstance(environment.runner, WorkerRunner):
        start_http_server(9090)

@events.request.add_listener
def on_request(request_type, name, response_time, exception, **kwargs):
    status = "ok" if exception is None else "fail"
    REQUESTS.labels(method=request_type, name=name, status=status).inc()
    LATENCY.labels(name=name).observe(response_time)
```

Testing WebSocket Endpoints
Locust doesn’t ship with native WebSocket support. One option is the `websockets` library; keep in mind that mixing asyncio with Locust’s gevent runtime is fragile — each `asyncio.run()` spins up a fresh event loop and occupies the greenlet while the flow runs:

```python
import asyncio

import websockets
from locust import User, task

class WebSocketUser(User):
    @task
    def websocket_interaction(self):
        asyncio.run(self._ws_flow())

    async def _ws_flow(self):
        async with websockets.connect("wss://example.com/ws") as ws:
            await ws.send("hello")
            msg = await ws.recv()
```

For general Python asyncio patterns that intersect with Locust’s gevent runtime, see Python asyncio not running.
Docker Deployment for Distributed Load
```dockerfile
FROM python:3.12-slim
RUN pip install locust
WORKDIR /app
COPY locustfile.py .
# Run as master or worker via command args
```

```yaml
# docker-compose.yml
services:
  master:
    build: .
    command: locust -f locustfile.py --master
    ports: ["8089:8089", "5557:5557"]
  worker:
    build: .
    command: locust -f locustfile.py --worker --master-host=master
    depends_on: [master]
    deploy:
      replicas: 4
```

Scale workers at run time:

```bash
docker compose up --scale worker=8
```

For Docker Compose dependency and healthcheck patterns that matter in distributed Locust setups, see docker-compose depends_on not working.
Testing Django or Flask Apps
Locust runs as a separate process — point it at your app’s HTTP endpoint:
```bash
# Terminal 1: start the app
python manage.py runserver 0.0.0.0:8000

# Terminal 2: load test
locust -f locustfile.py --host http://localhost:8000
```

For Django-specific load patterns around database connections and session management, see Django migration conflict for the migration testing surface. For pytest fixture patterns that complement load testing, see pytest fixture not found.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.