When developing Python applications, especially in production environments, the ability to log events, errors, and debug information is crucial. Python's built-in logging module is the standard and most versatile tool for this purpose, offering much more than simple console prints.
In this complete guide, you'll learn everything from basic concepts to advanced logging techniques in Python, including handler configuration, custom formatters, filters, and integration with external monitoring systems.
What is Python Logging?
Logging is the process of recording information about a program's execution. Unlike print(), the logging module offers a hierarchical structure, multiple severity levels, centralized configuration, and the ability to send logs to multiple destinations simultaneously.
With proper logging, you can:
- Monitor application behavior in production
- Debug issues in development environments
- Audit user actions and system events
- Integrate with monitoring tools like ELK, Datadog, New Relic
- Generate performance and availability metrics
Source: Python Documentation - Logging
The 5 Logging Levels
Python defines five logging levels, each with a specific purpose:
import logging
# DEBUG - Detailed information for diagnosis
logging.debug("Variable x = %d", x)
# INFO - Confirmation that things are working
logging.info("User %s logged in", username)
# WARNING - Something unexpected happened, but the app continues
logging.warning("Memory above 80%")
# ERROR - Serious problem that affected a function
logging.error("Failed to connect to database")
# CRITICAL - Critical error that might stop the application
logging.critical("Authentication system failed!")
Each level has an associated numeric value: DEBUG=10, INFO=20, WARNING=30, ERROR=40, CRITICAL=50. By default, only WARNING and above are displayed.
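The name-to-number mapping and the level gate can be checked directly; a quick sketch (the logger name `demo.levels` is just for illustration):

```python
import logging

# Each named level maps to a number; a logger only handles records
# whose level is >= its effective level.
levels = {name: logging.getLevelName(name)
          for name in ("DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL")}
print(levels)

logger = logging.getLogger("demo.levels")
logger.setLevel(logging.WARNING)
print(logger.isEnabledFor(logging.INFO))   # False
print(logger.isEnabledFor(logging.ERROR))  # True
```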
Source: Real Python - Python Logging
Basic Logging Configuration
The simplest way to start with logging is using the basicConfig() function:
import logging
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'
)
logging.info("Logging configured successfully!")
This configures the root logger with a StreamHandler writing to the console (stderr by default). The format includes timestamp, logger name, level, and message.
Dictionary Configuration (Modern Version)
Since Python 3.2, you can use dictionaries for configuration via logging.config.dictConfig():
import logging
import logging.config
config = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "standard": {
            "format": "%(asctime)s [%(levelname)s] %(name)s: %(message)s"
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "level": "DEBUG",
            "formatter": "standard",
            "stream": "ext://sys.stdout"
        },
        "file": {
            "class": "logging.FileHandler",
            "level": "INFO",
            "formatter": "standard",
            "filename": "app.log",
            "mode": "a"
        }
    },
    "root": {
        "level": "INFO",
        "handlers": ["console", "file"]
    }
}
logging.config.dictConfig(config)
This approach is much more flexible and allows configuration without changing code, making it ideal for production environments.
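As a sketch, the same dictionary can live in a JSON file and be loaded without touching code (the `logging.json` filename and the minimal config below are illustrative):

```python
import json
import logging
import logging.config
import os
import tempfile

# A minimal dictConfig dictionary; in practice this would be
# maintained in a JSON (or YAML) file alongside the application.
config = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler", "level": "INFO"},
    },
    "root": {"level": "INFO", "handlers": ["console"]},
}

# Write it to disk, then load it back -- the point is that the
# configuration lives in data, not code.
path = os.path.join(tempfile.gettempdir(), "logging.json")
with open(path, "w") as f:
    json.dump(config, f)

with open(path) as f:
    logging.config.dictConfig(json.load(f))

print(logging.getLogger().level)  # 20 (INFO)
```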
Creating Custom Loggers
For larger applications, it's recommended to create specific loggers for each module:
import logging
# Create logger for a specific module
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
# Add handler if it doesn't exist
if not logger.handlers:
    handler = logging.StreamHandler()
    formatter = logging.Formatter(
        '%(name)s - %(levelname)s - %(message)s'
    )
    handler.setFormatter(formatter)
    logger.addHandler(handler)
# Use the logger
logger.info("Processing user data")
logger.error("Validation failed")
Using __name__ as the argument for getLogger() creates a logger with the current module name, which makes identifying log sources much easier.
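Logger names form a dot-separated hierarchy, which is why the __name__ convention works so well; a quick sketch (the myapp names are made up):

```python
import logging

parent = logging.getLogger("myapp")
child = logging.getLogger("myapp.db")

# Dotted names create a hierarchy: "myapp.db" is a child of "myapp",
# and its records propagate to ancestor handlers by default.
print(child.parent is parent)  # True

# Propagation can be switched off per logger.
child.propagate = False
```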
Handlers: Log Destinations
Handlers define where logs will be sent. Python offers several types:
StreamHandler - Console and Files
handler = logging.StreamHandler()  # stderr (default)
handler = logging.StreamHandler(sys.stdout)  # explicit stdout
handler = logging.FileHandler('app.log', encoding='utf-8')  # file
handler = logging.FileHandler('error.log', mode='a')  # append mode
RotatingFileHandler - Logs with Rotation
To avoid enormous log files, use rotation:
from logging.handlers import RotatingFileHandler
handler = RotatingFileHandler(
    'app.log',
    maxBytes=10 * 1024 * 1024,  # 10 MB
    backupCount=5               # keep 5 backup files
)
When the file reaches 10 MB, it's renamed and a new one is created. The backupCount parameter defines how many old files are kept.
Source: Python Logging Handlers
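The rollover mechanics can be observed with a deliberately tiny maxBytes; a sketch using a temporary directory (sizes and names are illustrative):

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "app.log")

# A tiny maxBytes forces a rollover every couple of messages.
handler = RotatingFileHandler(path, maxBytes=50, backupCount=2)
logger = logging.getLogger("demo.rotate")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

for i in range(20):
    logger.info("message number %d", i)
handler.close()

# backupCount=2 keeps only the two most recent backups.
print(sorted(os.listdir(tmpdir)))  # ['app.log', 'app.log.1', 'app.log.2']
```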
TimedRotatingFileHandler - Time-Based Rotation
from logging.handlers import TimedRotatingFileHandler
handler = TimedRotatingFileHandler(
    'app.log',
    when='midnight',  # rotates at midnight
    interval=1,
    backupCount=30    # keep 30 days of logs
)
SysLogHandler - Send to System
To send logs to syslog servers:
from logging.handlers import SysLogHandler
handler = SysLogHandler(address=('localhost', 514))
HTTPHandler - Send to API
To send logs to external services:
from logging.handlers import HTTPHandler
handler = HTTPHandler(
    'api.monitoring.com',
    '/logs',
    method='POST'
)
Formatters: Formatting Output
Formatters define the layout of log messages. Python offers several attributes:
formatter = logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(funcName)s:%(lineno)d - %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'
)
Available attributes:
- %(asctime)s - Formatted timestamp
- %(name)s - Logger name
- %(levelname)s - Log level (DEBUG, INFO, etc.)
- %(levelno)s - Level number
- %(message)s - The message
- %(filename)s - Filename
- %(funcName)s - Function name
- %(lineno)d - Line number
- %(process)d - Process ID
- %(thread)d - Thread ID
- %(pathname)s - Full file path
Creating a Custom Formatter
class ColoredFormatter(logging.Formatter):
    """Formatter with colors for console"""

    grey = "\x1b[38;21m"
    blue = "\x1b[38;5;39m"
    yellow = "\x1b[38;5;226m"
    red = "\x1b[38;5;196m"
    bold_red = "\x1b[31;1m"
    reset = "\x1b[0m"

    FORMATS = {
        logging.DEBUG: grey + "%(message)s" + reset,
        logging.INFO: blue + "%(message)s" + reset,
        logging.WARNING: yellow + "%(message)s" + reset,
        logging.ERROR: red + "%(message)s" + reset,
        logging.CRITICAL: bold_red + "%(message)s" + reset
    }

    def format(self, record):
        log_fmt = self.FORMATS.get(record.levelno)
        formatter = logging.Formatter(log_fmt)
        return formatter.format(record)
This formatter adds colors to console logs, making it easier to visually identify the level of each message.
Filters: Controlling Log Flow
Filters allow more granular control over which messages are logged:
import re

class SensitiveDataFilter(logging.Filter):
    """Filter that redacts sensitive data from logs"""

    SENSITIVE_PATTERNS = [
        r'\b\d{3}-\d{2}-\d{4}\b',  # SSN-style ID
        r'\bpassword[=:]\S+',      # Password
        r'\bapi_key[=:]\S+',       # API Key
    ]

    def filter(self, record):
        message = record.getMessage()
        for pattern in self.SENSITIVE_PATTERNS:
            message = re.sub(pattern, '***REDACTED***', message)
        record.msg = message
        record.args = None  # args are already merged into msg above
        return True

# Add filter to handler
handler.addFilter(SensitiveDataFilter())
This filter is essential for GDPR/LGPD compliance, removing personal information from logs.
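Since Python 3.2, a filter can also be a plain callable returning True or False, which is convenient for one-off rules; a minimal sketch using an in-memory handler (all names here are illustrative):

```python
import logging

buffer = []

class ListHandler(logging.Handler):
    """Collects formatted messages in a list, for demonstration."""
    def emit(self, record):
        buffer.append(self.format(record))

logger = logging.getLogger("demo.filter")
logger.setLevel(logging.DEBUG)
handler = ListHandler()
# A callable filter: drop any record whose message contains "secret".
handler.addFilter(lambda record: "secret" not in record.getMessage())
logger.addHandler(handler)

logger.info("normal message")
logger.info("contains secret token")
print(buffer)  # ['normal message']
```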
Filter by Module
class ModuleFilter(logging.Filter):
    """Filter that allows only specific modules"""

    def __init__(self, allowed_modules):
        super().__init__()
        self.allowed_modules = allowed_modules

    def filter(self, record):
        return record.name in self.allowed_modules or record.levelno >= logging.ERROR
Logging in Web Applications (Django/Flask)
Django Configuration
# settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '{levelname} {asctime} {module} {message}',
            'style': '{',
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'verbose',
        },
        'file': {
            'class': 'logging.FileHandler',
            'filename': 'django.log',
            'formatter': 'verbose',
        },
    },
    'root': {
        'handlers': ['console', 'file'],
        'level': 'INFO',
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'WARNING',
            'propagate': False,
        },
        'myapp': {
            'handlers': ['console', 'file'],
            'level': 'DEBUG',
            'propagate': False,
        },
    },
}
Flask Configuration
import logging
from logging.handlers import RotatingFileHandler
from flask import Flask

app = Flask(__name__)

if not app.debug:
    file_handler = RotatingFileHandler(
        'flask.log',
        maxBytes=10240000,
        backupCount=10
    )
    file_handler.setFormatter(logging.Formatter(
        '%(asctime)s %(levelname)s: %(message)s [in %(pathname)s:%(lineno)d]'
    ))
    file_handler.setLevel(logging.INFO)
    app.logger.addHandler(file_handler)
    app.logger.setLevel(logging.INFO)

app.logger.info('Flask application started')
Source: DigitalOcean - Python Logging
Async Logging
For high-performance applications, consider async logging:
import logging
from logging.handlers import QueueHandler, QueueListener
import queue

# Queue that decouples the logging call from the actual I/O
log_queue = queue.Queue(-1)

# Handler that just enqueues records (very fast, no I/O)
queue_handler = QueueHandler(log_queue)

# Real handler that does the slow I/O
file_handler = logging.FileHandler('app.log')
file_handler.setFormatter(logging.Formatter(
    '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
))

# QueueListener runs a background thread that drains the queue
# and dispatches each record to the real handlers
listener = QueueListener(log_queue, file_handler)
listener.start()

# Configure root logger with only the QueueHandler
root_logger = logging.getLogger()
root_logger.addHandler(queue_handler)
root_logger.setLevel(logging.DEBUG)

# Now the logging call returns immediately; I/O happens in the background
logging.info("Async logging!")

# On shutdown, stop the listener to flush remaining records:
# listener.stop()
This approach is especially useful in high-traffic web applications where logging I/O can become a bottleneck.
Logging Best Practices
- Use levels correctly: Don't use DEBUG in production, or ERROR for informational messages.
- Include context: Add request IDs, usernames, relevant data.
- Avoid sensitive data: Never log passwords, tokens, or personal data without encryption.
- Be consistent: Use the same format throughout the application.
- Configure rotation: Avoid logs that consume all disk space.
- Monitor your logs: Use tools like ELK, Datadog, Sentry.
- Document your configuration: All configuration should be documented.
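The "include context" practice above can be sketched with logging.LoggerAdapter, which stamps extra fields onto every record (the request_id field and demo names are illustrative):

```python
import logging

records = []

class CaptureHandler(logging.Handler):
    """Keeps raw records in a list so we can inspect them."""
    def emit(self, record):
        records.append(record)

logger = logging.getLogger("demo.context")
logger.setLevel(logging.INFO)
logger.addHandler(CaptureHandler())

# Bind a request_id to everything logged through the adapter;
# formatters can then reference %(request_id)s.
adapter = logging.LoggerAdapter(logger, {"request_id": "req-42"})
adapter.info("payment declined")

print(records[0].request_id)  # req-42
```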
Structured Logger Example
import logging
import json
from datetime import datetime, timezone

class JSONFormatter(logging.Formatter):
    """Formatter that outputs JSON"""

    def format(self, record):
        log_data = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "module": record.module,
            "function": record.funcName,
            "line": record.lineno
        }
        if record.exc_info:
            log_data["exception"] = self.formatException(record.exc_info)
        return json.dumps(log_data)
JSON logs are ideal for processing by monitoring and log aggregation systems.
Source: Loggly - Ultimate Guide to Python Logging
Logging and Exception Handling
An essential practice is using logging with exception handling:
import logging
import traceback
logger = logging.getLogger(__name__)
try:
    result = process_data(data)
except Exception as e:
    logger.error(
        "Failed to process data: %s\n%s",
        str(e),
        traceback.format_exc()
    )
    raise

# Cleaner way with logger.exception()
try:
    result = process_data(data)
except Exception:
    logger.exception("Failed to process data")
    raise
The exception() method logs at ERROR level and automatically includes the full traceback, so the manual traceback.format_exc() call is unnecessary.
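The difference is easy to verify with a small in-memory handler (names here are for demonstration only):

```python
import logging

records = []

class CaptureHandler(logging.Handler):
    def emit(self, record):
        records.append(record)

logger = logging.getLogger("demo.exc")
logger.addHandler(CaptureHandler())

try:
    1 / 0
except ZeroDivisionError:
    # exception() logs at ERROR level and attaches exc_info automatically.
    logger.exception("division failed")

print(records[0].levelname)             # ERROR
print(records[0].exc_info is not None)  # True
```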
Integration with Monitoring Systems
Sentry
import logging

import sentry_sdk
from sentry_sdk.integrations.logging import LoggingIntegration

sentry_sdk.init(
    dsn="YOUR_SENTRY_DSN",
    integrations=[LoggingIntegration(
        level=logging.INFO,        # breadcrumbs from INFO and above
        event_level=logging.ERROR  # events from ERROR and above
    )]
)

# The integration hooks into the logging module automatically;
# no extra handler needs to be attached.
Datadog
from datadog import DogStatsd

statsd = DogStatsd(host="localhost", port=8125)

class DatadogHandler(logging.Handler):
    def emit(self, record):
        if record.levelno >= logging.ERROR:
            statsd.increment("log.error", tags=[
                f"logger:{record.name}",
                f"level:{record.levelname}"
            ])

logging.root.addHandler(DatadogHandler())
Conclusion
Python's logging module is an extremely powerful tool that goes far beyond simple print(). With the correct configuration of handlers, formatters, and filters, you can create a professional logging system suitable for applications of any size.
Remember best practices: use appropriate levels, include relevant context, avoid sensitive data, and configure log rotation. With a good logging system, debugging and monitoring become much more efficient.
Keep learning with Universo Python's free guides: explore exception handling, automated testing with pytest, and creating APIs with FastAPI to elevate your development skills!