Logging Best Practices in Python
If you have already learned the basics of logging in Python, you might be using it in your projects. But are you using it effectively in production? This guide covers patterns and techniques that make logging practical for real-world applications.
Logger Hierarchies
Python loggers form a hierarchy. When you call getLogger(__name__), you create a logger named after your module. This matters because loggers inherit settings from their parents.
```python
import logging

# Create parent logger
parent = logging.getLogger("myapp")
parent.setLevel(logging.INFO)
parent.addHandler(logging.FileHandler("app.log"))

# Create child logger - its records propagate up to the parent's handlers
child = logging.getLogger("myapp.processing")
child.info("This message goes to app.log too")
```
The dot in “myapp.processing” makes it a child of “myapp”. You can control entire sections of your application by configuring parent loggers.
Effective Log Levels
Setting the right log level is an art. Here is a practical approach:
- DEBUG — Trace every step during development. What function was called? What values changed?
- INFO — Normal operations. “Server started”, “User logged in”, “Task completed”
- WARNING — Something odd happened but the code can continue. “Config file missing, using defaults”
- ERROR — Something failed but the application can keep running. “Could not parse response”
- CRITICAL — The application cannot continue. “Database unavailable”
In development, run at DEBUG. In production, INFO or WARNING typically works best. Too much logging overwhelms; too little hides problems.
Rotating File Handlers
Single log files grow forever. Use rotating handlers to manage size:
```python
import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler(
    "app.log",
    maxBytes=5_000_000,  # 5 MB
    backupCount=5,       # Keep 5 old files
)

logging.basicConfig(
    level=logging.INFO,
    handlers=[handler],
)
```
When the log reaches 5 MB, it rotates to app.log.1, and a new app.log starts. The backupCount limits how many old files stay around.
For time-based rotation:
```python
from logging.handlers import TimedRotatingFileHandler

handler = TimedRotatingFileHandler(
    "app.log",
    when="midnight",  # Rotate at midnight
    interval=1,       # Every day
    backupCount=30,   # Keep 30 days
)
```
This creates a new log file each day and automatically cleans up old ones.
Custom Formatters
The default format is fine for development, but production benefits from more detail:
```python
formatter = logging.Formatter(
    fmt="%(asctime)s | %(levelname)-8s | %(name)s:%(lineno)d | %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
)

handler = logging.FileHandler("app.log")
handler.setFormatter(formatter)
```
Output looks like:
```
2026-03-13 10:30:45 | INFO     | myapp:42 | Server started on port 8000
```
You can create different formatters for different handlers:
```python
# Detailed format for the file
file_formatter = logging.Formatter(
    "%(asctime)s - %(name)s - %(levelname)s - %(funcName)s:%(lineno)d - %(message)s"
)

# Simple format for the console
console_formatter = logging.Formatter("%(levelname)s: %(message)s")

file_handler = logging.FileHandler("app.log")
file_handler.setFormatter(file_formatter)

console_handler = logging.StreamHandler()
console_handler.setFormatter(console_formatter)
```
Filtering Messages
Handlers can filter messages based on content or context:
```python
import re

class SensitiveDataFilter(logging.Filter):
    # Matches the value after password= or api_key=, up to the next whitespace
    PATTERN = re.compile(r"(password|api_key)=\S+")

    def filter(self, record):
        # Redact sensitive fields before the record is emitted
        if isinstance(record.msg, str):
            record.msg = self.PATTERN.sub(r"\1=***", record.msg)
        return True

handler = logging.FileHandler("app.log")
handler.addFilter(SensitiveDataFilter())
```
You can also filter by logger name:
```python
# Only log records from myapp.database (and its children)
db_handler = logging.FileHandler("db.log")
db_handler.addFilter(logging.Filter("myapp.database"))
```
Structured Logging
Plain text logs are hard to parse programmatically. Structured logging uses JSON:
```python
import json
import logging

class JSONFormatter(logging.Formatter):
    def format(self, record):
        log_obj = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "module": record.module,
            "function": record.funcName,
            "line": record.lineno,
        }
        if record.exc_info:
            log_obj["exception"] = self.formatException(record.exc_info)
        return json.dumps(log_obj)

handler = logging.FileHandler("app.json")
handler.setFormatter(JSONFormatter())
```
Output is machine-parseable:
```json
{"timestamp": "2026-03-13 10:30:45", "level": "INFO", "logger": "myapp", "message": "User created", "module": "handlers", "function": "create_user", "line": 42}
```
Log aggregation systems such as the ELK Stack (Elasticsearch, Logstash, Kibana) work far better with structured logs than with free-form text.
Using LoggerAdapters for Context
Add context to logs without changing every call:
```python
import logging

class RequestLoggerAdapter(logging.LoggerAdapter):
    def process(self, msg, kwargs):
        return f"[{self.extra['request_id']}] {msg}", kwargs

logger = logging.getLogger("myapp")
adapter = RequestLoggerAdapter(logger, {"request_id": "abc123"})
adapter.info("Processing request")  # Logs: [abc123] Processing request
```
This is useful for adding request IDs, user IDs, or other contextual information.
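As an alternative sketch, the `extra=` keyword attaches the same context directly to each log record, so the formatter (rather than the adapter) renders it. Here the handler writes to a StringIO so the result is easy to inspect; the `request_id` field name is just an illustration.

```python
import io
import logging

buf = io.StringIO()

# The format string expects a request_id attribute on every record
formatter = logging.Formatter("[%(request_id)s] %(levelname)s: %(message)s")
handler = logging.StreamHandler(buf)
handler.setFormatter(formatter)

logger = logging.getLogger("myapp.requests")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# extra= merges these keys into the LogRecord's attributes
logger.info("Processing request", extra={"request_id": "abc123"})
```

The trade-off: with `extra=`, every call through this handler must supply the field, which is why an adapter (or a contextvar) is usually more convenient for per-request context.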
Configuration from Files
Keep logging configuration separate from code using a config file:
```ini
# logging.conf
[loggers]
keys=root,myapp

[handlers]
keys=console,file

[formatters]
keys=detailed

[logger_root]
level=INFO
handlers=console

[logger_myapp]
level=DEBUG
handlers=file
qualname=myapp
propagate=0

[handler_console]
class=StreamHandler
level=INFO
formatter=detailed
args=(sys.stdout,)

[handler_file]
class=FileHandler
level=DEBUG
formatter=detailed
args=("app.log",)

[formatter_detailed]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
```
Load it with:
```python
import logging.config

logging.config.fileConfig("logging.conf")
```
This makes it easy to change logging behavior without modifying code.
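The same setup can also be expressed with `logging.config.dictConfig`, which is often easier to generate from YAML or environment-specific settings. This is a sketch of a dict roughly equivalent to the INI file above (the file handler is omitted for brevity; it follows the same pattern):

```python
import logging.config

config = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "detailed": {
            "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "level": "INFO",
            "formatter": "detailed",
            "stream": "ext://sys.stdout",
        },
    },
    "loggers": {
        "myapp": {"level": "DEBUG", "handlers": ["console"], "propagate": False},
    },
    "root": {"level": "INFO", "handlers": ["console"]},
}

logging.config.dictConfig(config)
```

`dictConfig` is the more flexible of the two APIs and is what most frameworks build on.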
Third-Party Tools
The standard library covers most needs, but third-party tools add power:
python-json-logger — Ready-made JSON formatting
```python
from pythonjsonlogger import jsonlogger

handler = logging.FileHandler("app.json")
handler.setFormatter(jsonlogger.JsonFormatter())
```
loguru — Simpler API with automatic exception handling
```python
from loguru import logger

logger.add("app.log", rotation="500 MB")
logger.info("Hello from loguru")
```
structlog — Structured logging made easy
```python
import structlog

logger = structlog.get_logger()
logger.info("user_created", user_id=123, email="test@example.com")
```
Evaluate whether these add enough value to warrant a dependency.
Performance Considerations
Logging has overhead. Here is how to minimize it:
Check level before building expensive messages:
```python
# Bad - the f-string (and expensive_function) always runs,
# even when INFO is disabled
logger.info(f"Processing {len(items)} items: {expensive_function()}")

# Good - the message is only built if INFO is enabled
if logger.isEnabledFor(logging.INFO):
    logger.info(f"Processing {len(items)} items: {expensive_function()}")
```
For very high-throughput code, consider:
- Async handlers (using QueueHandler)
- Sampling (log every Nth occurrence)
- Conditional logging based on runtime conditions
Testing with Logs
Logs are part of your code and should be tested:
```python
import logging
from io import StringIO

def test_logging_output():
    log_capture = StringIO()
    handler = logging.StreamHandler(log_capture)

    logger = logging.getLogger("test")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    logger.info("Test message")
    assert "Test message" in log_capture.getvalue()
```
You can capture logs to verify error paths are logged correctly.
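For instance, the standard library's unittest provides assertLogs, which captures records from a named logger for the duration of a with block (pytest users have the similar caplog fixture):

```python
import logging
import unittest

class TestErrorPath(unittest.TestCase):
    def test_error_is_logged(self):
        logger = logging.getLogger("myapp")
        # assertLogs fails the test if no matching record is emitted
        with self.assertLogs("myapp", level="ERROR") as captured:
            logger.error("Could not parse response")
        self.assertIn("Could not parse response", captured.output[0])
```

This avoids wiring up handlers by hand, and the test fails automatically if the error path stops logging.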
Getting Started
Start with basic logging, then add these patterns as your needs grow:
- Configure rotation before you need it
- Add structured logging when you need searchability
- Use adapters for request context in web applications
- Keep sensitive data out of logs
The logging module is flexible enough to handle simple scripts and complex production systems. Invest time in learning it well.
See Also
- logging-module — Full reference for the logging module
- logging-guide — Beginner-friendly introduction to logging
- os-module — File and system operations