Fix: Loguru Not Working — Missing Logs, Rotation Errors, and Multiprocessing Issues
Quick Answer
How to fix Loguru errors — logs not appearing after logger.add, file rotation not working, enqueue required for multiprocessing, structured logging JSON, intercepting stdlib logging, and handler removal.
The Error
You configure Loguru and the logs don’t appear where you expect:
from loguru import logger
logger.add("app.log", level="INFO")
logger.debug("Debug message")
# Shows up in stderr (default handler) but not app.log — why?

Or file rotation silently stops working:
logger.add("app_{time}.log", rotation="100 MB")
# Logs grow past 100 MB without rotating

Or multiprocessing causes interleaved or lost log messages:
[Worker 1] process... [Worker 2] process... [Wo[Worker 2]
# Log lines get garbled

Or intercepted stdlib logging stops flowing:
import logging
import sys
from loguru import logger
logger.remove()
logger.add(sys.stderr, level="INFO")
logging.getLogger("requests").info("HTTP call")
# Nothing appears — requests logs to stdlib, not loguru

Or structured logs come out as plain text instead of JSON:
logger.info("user_created", user_id=123)
# Output: 2025-04-09 10:00:00 | INFO | __main__:<module>:5 - user_created
# But you wanted JSON

Loguru solves the complexity of Python's stdlib logging — no handlers, formatters, filters hierarchy to learn. One global logger object, simple add() for new destinations. But that simplicity has its own pitfalls around the default handler, intercepting stdlib logging, and multiprocess safety. This guide covers each.
Why This Happens
Loguru ships with a default handler on stderr at DEBUG level. When you call logger.add(...), you add a handler; the default one still runs too. If you only want file output, you have to logger.remove() first.
Intercepting stdlib logging requires explicit setup — other libraries (requests, urllib3, SQLAlchemy) use stdlib logging, and their messages don’t reach Loguru without a bridge.
Fix 1: Adding and Removing Handlers
from loguru import logger
import sys
# Default: logger writes DEBUG+ to stderr with color
# Add a file handler
logger.add("app.log", level="INFO", rotation="100 MB")
# Now logs go to BOTH stderr (default) AND app.log
logger.info("hello")
# stderr: colored output
# app.log: plain text

Remove the default handler:
logger.remove() # Removes ALL handlers
# Add only what you want
logger.add(sys.stderr, level="INFO", format="{time} | {level} | {message}")
logger.add("app.log", level="DEBUG", rotation="1 day")

logger.remove() with no args removes everything. Pass a handler ID to remove just one:
handler_id = logger.add("temp.log")
# ... later
logger.remove(handler_id)  # Only removes this one

Common Mistake: Calling logger.add("app.log") and expecting logs to ONLY go to the file. The default stderr handler stays active unless you remove() it first. Production apps usually remove the default and add explicit handlers with controlled formatting.
Fix 2: File Rotation Options
from loguru import logger
# Size-based rotation
logger.add("app_{time}.log", rotation="100 MB") # New file at 100 MB
# Time-based rotation
logger.add("app_{time}.log", rotation="1 day") # Rotate daily
logger.add("app_{time}.log", rotation="monday") # Rotate every Monday
logger.add("app_{time}.log", rotation="00:00") # Rotate at midnight
logger.add("app_{time}.log", rotation="12:00") # Rotate at noon
# Explicit function (custom rules)
def custom_rotation(message, file):
    return file.tell() > 100 * 1024 * 1024  # > 100 MB

logger.add("app_{time}.log", rotation=custom_rotation)

Retention — delete old files after rotation:
logger.add(
    "app_{time}.log",
    rotation="1 day",
    retention="10 days",  # Keep 10 days, delete older
    compression="zip",    # Compress rotated files
)

Retention options:
| Value | Meaning |
|---|---|
| "10 days" | Delete files older than 10 days |
| "1 week" | Same, 1 week |
| 10 (int) | Keep 10 most recent files |
| datetime.timedelta(days=7) | Same as "7 days" |
Common Mistake: Using rotation="1 day" without {time} in the filename. Rotation appends a timestamp to the previous file — but if the filename doesn’t have {time}, multiple rotations overwrite each other. Always include {time}:
# WRONG — rotation creates "app.log.2025-04-09" but "app.log" may overwrite
logger.add("app.log", rotation="1 day")
# CORRECT — each period gets its own file
logger.add("app_{time:YYYY-MM-DD}.log", rotation="1 day")

Fix 3: Multiprocessing with enqueue=True
from loguru import logger
from multiprocessing import Process
def worker():
    logger.info("from worker")
# WRONG — concurrent writes to stderr/file from multiple processes = corruption
for _ in range(4):
    Process(target=worker).start()

Fix — add enqueue=True:
import sys

logger.remove()
logger.add(sys.stderr, enqueue=True)
logger.add("app.log", enqueue=True, rotation="100 MB")
from multiprocessing import Process
def worker():
    logger.info("from worker")

if __name__ == "__main__":
    for _ in range(4):
        Process(target=worker).start()

enqueue=True serializes log records through a multiprocessing queue — only one process writes to the handler at a time. Essential for:
- Multi-worker servers (Gunicorn, Uvicorn workers)
- Ray workers
- Celery workers
- Any app using multiprocessing.Process
Pro Tip: Set enqueue=True by default on all handlers in production apps. The overhead is negligible, and it protects against the subtle log corruption that appears only under load. The small latency cost is worth preventing garbled logs during incidents.
enqueue=True has a gotcha — Loguru can’t serialize non-picklable objects in log messages. If you log a DB connection or a socket, you’ll get a pickling error:
logger.info("DB state: {conn}", conn=db_connection)  # Can't pickle

Stringify first:
logger.info("DB state: {conn}", conn=str(db_connection))

Fix 4: Structured Logging and JSON Output
from loguru import logger
# WRONG — just passes extra as format kwargs, doesn't make JSON
logger.info("user_created", user_id=123)
# Output: user_created
# (Loguru sees "user_id=123" as a format parameter, not structured data)

Correct structured logging with bind():
logger.info("user_created", user_id=123, email="user@example.com")
# With default format, still plain text

# To get JSON output, set serialize=True
import sys

logger.remove()
logger.add(sys.stdout, serialize=True)

logger.bind(user_id=123, email="user@example.com").info("user_created")
# Output (one line, pretty-printed here):
# {
#   "text": "user_created",
#   "record": {
#     "level": {"name": "INFO"},
#     "message": "user_created",
#     "extra": {"user_id": 123, "email": "user@example.com"},
#     ...
#   }
# }

Custom JSON format for controlled structure:
import json
from loguru import logger
import sys
def json_sink(message):
    record = message.record
    output = {
        "time": record["time"].isoformat(),
        "level": record["level"].name,
        "message": record["message"],
        "module": record["module"],
        "function": record["function"],
        "line": record["line"],
        **record["extra"],  # All bind()ed extras
    }
    print(json.dumps(output), file=sys.stderr)
logger.remove()
logger.add(json_sink, level="INFO")
logger.bind(user_id=123).info("user_created")
# {"time": "2025-04-09T10:00:00+00:00", "level": "INFO", "message": "user_created", "user_id": 123, ...}

Context binding across calls:
# Bind user_id for all subsequent logs in this context
context_logger = logger.bind(user_id=123, request_id="abc")
context_logger.info("start")
context_logger.info("middle")
context_logger.info("end")
# All three have user_id=123 and request_id="abc" in extras

Context-local binding (per thread or async task) with contextualize():
with logger.contextualize(request_id="abc123"):
    logger.info("inside context")  # Has request_id
    do_work()                      # Any logger.info() inside do_work() also has request_id

logger.info("outside")  # No request_id

Perfect for request correlation in web apps — add request_id at the middleware level, and every log during that request has it.
Fix 5: Intercepting stdlib Logging
Third-party libraries use logging from stdlib — their messages don’t reach Loguru by default.
import logging
from loguru import logger
import sys
class InterceptHandler(logging.Handler):
    def emit(self, record):
        try:
            level = logger.level(record.levelname).name
        except ValueError:
            level = record.levelno

        frame, depth = logging.currentframe(), 2
        while frame.f_code.co_filename == logging.__file__:
            frame = frame.f_back
            depth += 1

        logger.opt(depth=depth, exception=record.exc_info).log(
            level, record.getMessage()
        )
# Replace all stdlib handlers with the interceptor
logging.basicConfig(handlers=[InterceptHandler()], level=0, force=True)
# Now any library using logging goes through Loguru
import requests
requests.get("https://example.com")  # Its logging.DEBUG messages flow to Loguru

Catch only specific stdlib loggers:
for name in ["uvicorn", "sqlalchemy.engine", "requests"]:
    logging.getLogger(name).handlers = [InterceptHandler()]
    logging.getLogger(name).propagate = False

Common Mistake: Installing the InterceptHandler but forgetting force=True. Python's logging.basicConfig is idempotent — if any handler is already configured (and most frameworks configure one), your interceptor is ignored. force=True clears existing handlers first.
Fix 6: Format Strings and Level Customization
from loguru import logger
import sys
logger.remove()
logger.add(
    sys.stderr,
    format=(
        "<green>{time:YYYY-MM-DD HH:mm:ss.SSS}</green> | "
        "<level>{level: <8}</level> | "
        "<cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> - "
        "<level>{message}</level>"
    ),
    colorize=True,
)

Available format fields:
| Field | Description |
|---|---|
| {time} | Timestamp (customizable with {time:YYYY-MM-DD}) |
| {level} | Level name (DEBUG, INFO, WARNING, ERROR, CRITICAL) |
| {message} | The log message |
| {name} | Module name |
| {function} | Function name |
| {line} | Line number |
| {file} | File name |
| {process} | Process ID |
| {thread} | Thread ID |
| {elapsed} | Time since program start |
| {extra} | Extras from bind/contextualize |
Add custom levels:
from loguru import logger
logger.level("SECURITY", no=35, color="<red><bold>", icon="🔒")
logger.log("SECURITY", "Suspicious activity detected")
# Output with red bold SECURITY tag

Filter by level per handler:
logger.add("all.log", level="DEBUG")
logger.add("errors.log", level="ERROR")
logger.add("info_only.log", filter=lambda record: record["level"].name == "INFO")

Fix 7: Exception Handling
from loguru import logger
try:
    risky_operation()
except Exception:
    logger.exception("Failed during risky operation")
    # logs exception with full traceback

@logger.catch decorator — auto-log exceptions without try/except:
from loguru import logger
@logger.catch
def risky_function():
    return 1 / 0

risky_function()
# ZeroDivisionError is logged with full traceback, not raised

@logger.catch(reraise=True) — log AND re-raise:
@logger.catch(reraise=True)
def risky_function():
    return 1 / 0
# Log happens, then exception propagates

Better exception formatting with diagnose=True (default in Loguru):
logger.add(
    sys.stderr,
    backtrace=True,  # Show full call stack
    diagnose=True,   # Show variable values at each frame
)

diagnose=True prints local variables at each stack frame — great for debugging, but it can leak sensitive values into logs. Disable it in production:
logger.add(
    sys.stderr,
    backtrace=True,
    diagnose=False,  # Don't print variable values
)

Fix 8: Testing with Loguru
Loguru doesn’t use stdlib logging, so pytest.caplog fixture doesn’t capture its output. Use loguru.logger.add() for testing:
# conftest.py
import logging

import pytest
from loguru import logger

@pytest.fixture
def caplog(caplog):
    """Intercept loguru messages into pytest's caplog."""
    class PropagateHandler(logging.Handler):
        def emit(self, record):
            logging.getLogger(record.name).handle(record)

    handler_id = logger.add(PropagateHandler(), format="{message}")
    yield caplog
    logger.remove(handler_id)
# Now pytest's caplog captures loguru output
def test_logging(caplog):
    logger.info("test message")
    assert "test message" in caplog.text

Or collect messages directly:
def test_direct():
    messages = []
    handler_id = logger.add(
        lambda msg: messages.append(msg.strip()),
        format="{message}",
        level="INFO",
    )
    try:
        logger.info("hello")
        logger.warning("warn")
        assert "hello" in messages[0]
    finally:
        logger.remove(handler_id)
logger.remove(handler_id)For pytest fixture patterns that integrate with Loguru, see pytest fixture not found.
Still Not Working?
Loguru vs Structlog vs stdlib logging
- Loguru — Simplest API, sensible defaults, great for most apps. Drop-in replacement for logging.
- structlog — More customizable, processor-chain architecture. Best for JSON-first logging in production.
- stdlib logging — Widest ecosystem compatibility. Verbose but universal.
Use Loguru for clarity and simplicity. Switch to structlog if you need complex processor pipelines (redaction, format transformation, metrics emission).
FastAPI / Uvicorn Integration
Uvicorn logs through stdlib — use the InterceptHandler to route through Loguru:
import logging
from loguru import logger
import sys
class InterceptHandler(logging.Handler):
    def emit(self, record):
        try:
            level = logger.level(record.levelname).name
        except ValueError:
            level = record.levelno
        logger.opt(depth=6, exception=record.exc_info).log(level, record.getMessage())

# Replace uvicorn's loggers
for name in ["uvicorn", "uvicorn.access", "uvicorn.error"]:
    logging.getLogger(name).handlers = [InterceptHandler()]
logger.remove()
logger.add(sys.stderr, serialize=True)For Uvicorn-specific logging patterns, see Uvicorn not working. For FastAPI dependency patterns that pair with per-request log binding, see FastAPI dependency injection error.
Async Logging
Loguru’s sinks can be async functions:
import asyncio
from loguru import logger
async def async_sink(message):
    # Send to an async log aggregator
    await send_to_logging_service(str(message))

logger.remove()
logger.add(async_sink, enqueue=True)  # enqueue=True handles the async scheduling

For async runtime issues that interact with Loguru sinks, see Python asyncio not running.
Sending to External Services (Sentry, Datadog)
from loguru import logger
import sentry_sdk
import sys
sentry_sdk.init(dsn="https://<key>@sentry.io/...")

def sentry_sink(message):
    record = message.record
    if record["level"].name in ("ERROR", "CRITICAL"):
        sentry_sdk.capture_message(record["message"], level=record["level"].name.lower())

logger.add(sentry_sink, level="ERROR")

Datadog via HTTP sink:
import requests
from loguru import logger
def datadog_sink(message):
    try:
        requests.post(
            "https://http-intake.logs.datadoghq.com/api/v2/logs",
            json={"message": str(message), "ddsource": "python"},
            headers={"DD-API-KEY": "..."},
            timeout=5,
        )
    except requests.RequestException:
        pass  # Never let logging break the app

logger.add(datadog_sink, enqueue=True, level="INFO")

Always set enqueue=True for external sinks — network failures would otherwise stall the main thread.
Removing ANSI Color Codes from File Output
Color markup in the format string (with colorize=True) adds ANSI escape codes. For file output without them:
logger.add("app.log", colorize=False) # Explicit — no ANSI codes in file
# Or use different formats for different sinks
logger.add(sys.stderr, format="<green>{time}</green> | {message}", colorize=True)
logger.add("app.log", format="{time} | {message}", colorize=False)

Files with ANSI codes are unreadable in most tools (less, cat, log aggregators). Always set colorize=False for persistent file sinks.
Performance: When Not to Use Loguru
For extremely high-throughput logging (>100k messages/sec), Loguru’s per-record processing has overhead. In those cases:
- Write to a raw file handler directly and parse downstream
- Use structlog with a minimal processor chain
- Sample logs rather than logging every event
For most applications (even high-traffic web servers), Loguru’s overhead is negligible compared to I/O. Profile before optimizing.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.