Logger
Logger provides an opinionated logger with output structured as JSON.
Key features¶
- Capture key fields from Lambda context and cold start, and structure logging output as JSON
- Log Lambda event when instructed (disabled by default)
- Log sampling enables DEBUG log level for a percentage of requests (disabled by default)
- Append additional keys to structured log at any point in time
Getting started¶
Logger requires two settings:
Setting | Description | Environment variable | Constructor parameter |
---|---|---|---|
Logging level | Sets how verbose Logger should be (INFO, by default) | LOG_LEVEL | level |
Service | Sets service key that will be present across all log statements | POWERTOOLS_SERVICE_NAME | service |
Example using AWS Serverless Application Model (SAM)
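A minimal sketch of how both settings could be declared in a SAM template; resource, runtime, and handler values are illustrative:

```yaml
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python3.9
      Handler: app.handler
      Environment:
        Variables:
          LOG_LEVEL: INFO
          POWERTOOLS_SERVICE_NAME: payment
```

Equivalently, you can pass level="INFO" and service="payment" to the Logger constructor directly.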
Standard structured keys¶
Logger adds the following keys to your structured logs by default:
Key | Type | Example | Description |
---|---|---|---|
timestamp | str | "2020-05-24 18:17:33,774" | Timestamp of actual log statement |
level | str | "INFO" | Logging level |
location | str | "collect.handler:1" | Source code location where statement was executed |
service | str | "payment" | Service name defined; "service_undefined" will be used if not set |
sampling_rate | float | 0.1 | Debug logging sampling rate as a ratio, e.g. 0.1 means 10% of invocations |
message | any | "Collecting payment" | Log statement value. Unserializable JSON values will be cast to string |
xray_trace_id | str | "1-5759e988-bd862e3fe1be46a994272793" | X-Ray Trace ID when Lambda function has enabled Tracing |
Capturing Lambda context info¶
You can enrich your structured logs with key Lambda context information via the inject_lambda_context decorator.
When used, this will include the following keys:
Key | Type | Example |
---|---|---|
cold_start | bool | false |
function_name | str | "example-powertools-HelloWorldFunction-1P1Z6B39FLU73" |
function_memory_size | int | 128 |
function_arn | str | "arn:aws:lambda:eu-west-1:012345678910:function:example-powertools-HelloWorldFunction-1P1Z6B39FLU73" |
function_request_id | str | "899856cb-83d1-40d7-8611-9e78f15f32f4" |
Logging incoming event¶
When debugging in non-production environments, you can instruct Logger to log the incoming event with the log_event param or via the POWERTOOLS_LOGGER_LOG_EVENT env var.
Warning
This is disabled by default to prevent sensitive info from being logged.
Setting a Correlation ID¶
New in 1.12.0
You can set a Correlation ID using the correlation_id_path param by passing a JMESPath expression.
We provide built-in JMESPath expressions for known event sources, where either a request ID or an X-Ray Trace ID is present.
Appending additional keys¶
You can append additional keys using either mechanism:
- Persist new keys across all future log messages via the structure_logs method
- Add additional keys on a per log message basis via the extra parameter
structure_logs method¶
You can append your own keys to your existing Logger via the structure_logs(append=True, **kwargs) method.
Omitting append=True will reset the existing structured logs to the standard keys plus the keys provided as arguments.
Logger will automatically reject any key with a None value.
If you conditionally add keys depending on the payload, you can use this behaviour to your advantage: order_id will be added when its value is not empty, and in subsequent invocations where order_id might not be present it will be removed from the logger.
extra parameter¶
New in 1.10.0
The extra parameter is available in all log level methods, as implemented in the standard logging library, e.g. logger.info, logger.warning.
It accepts any dictionary, and all keyword arguments will be added as part of the root structure of the logs for that log statement.
Any keyword argument added using extra will not be persisted for subsequent messages.
set_correlation_id method¶
New in 1.12.0
You can set a correlation_id on your existing Logger via the set_correlation_id(value) method by passing any string value.
Alternatively, you can combine the Data Classes utility with Logger to access event properties via dot notation:
Logging exceptions¶
When logging exceptions, Logger will add new keys named exception_name and exception, with the full traceback as a string.
Tip
New in 1.12.0
You can use your preferred Log Analytics tool to enumerate exceptions across all your services using the exception_name key.
Advanced¶
Reusing Logger across your code¶
Logger supports inheritance via the child parameter. This allows you to create multiple Loggers across your code base, and propagate changes such as new keys to all Loggers.
In this example, Logger will create a parent logger named payment and a child logger named payment.shared. Changes in either parent or child logger will be propagated bi-directionally.
Child loggers will be named after the following convention: {service}.{filename}
If you forget to use the child param but the service name is the same as the parent's, we will return the existing parent Logger instead.
Sampling debug logs¶
Use sampling when you want to dynamically change your log level to DEBUG based on a percentage of your concurrent/cold start invocations.
You can set it using the POWERTOOLS_LOGGER_SAMPLE_RATE env var or explicitly with the sample_rate parameter. Values range from 0.0 to 1 (100%).
When is this useful?
Take for example a sudden increase in concurrency. When looking into logs you might not have enough information, and while you can adjust log levels it might not happen again.
This feature takes into account transient issues where additional debugging information can be useful.
Sampling decision happens at the Logger class initialization. This means sampling may happen significantly more or less than you expect if you have a steady low number of invocations and thus few cold starts.
Note
If you want Logger to calculate sampling upon every invocation, please open a feature request.
Migrating from other Loggers¶
If you're migrating from other Loggers, there are a few key points to be aware of: the service parameter, inheriting Loggers, overriding log records, and logging exceptions.
The service parameter¶
Service is what defines the Logger name, including what the Lambda function is responsible for, or part of (e.g. payment service).
For Logger, the service is the logging key customers can use to search log operations for one or more functions. For example, search for all errors, or messages like X, where service is payment.
Logging output example
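An illustrative output line, using the example values from the standard structured keys table above:

```json
{
    "timestamp": "2020-05-24 18:17:33,774",
    "level": "INFO",
    "location": "collect.handler:1",
    "service": "payment",
    "message": "Collecting payment"
}
```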
Inheriting Loggers¶
Python Logging hierarchy happens via the dot notation: service, service.child, service.child_2.
For inheritance, Logger uses a child=True parameter along with service being the same value across Loggers.
For child Loggers, we introspect the name of your module where Logger(child=True, service="name") is called, and we name your Logger as {service}.{filename}.
A common issue when migrating from other Loggers is that service might be defined in the parent Logger (no child param), and not defined in the child Logger:
In this case, Logger will register a Logger named payment, and a Logger named service_undefined. The latter isn't inheriting from the parent, and will have no handler, resulting in no message being logged to standard output.
Tip
This can be fixed by either ensuring both have the service value as payment, or simply by using the environment variable POWERTOOLS_SERVICE_NAME to ensure the service value will be the same across all Loggers when not explicitly set.
Overriding Log records¶
You might want to continue to use the same date formatting style, or override location to display the package.function_name:line_number as you previously had.
Logger allows you to either change the format or suppress the following keys altogether at initialization: location, timestamp, level, xray_trace_id, and datefmt. However, the sampling_rate key is part of the specification and cannot be suppressed.
The xray_trace_id logging key
This key is only added if X-Ray Tracing is enabled for your Lambda function. Once enabled, this key allows the integration between CloudWatch Logs and Service Lens.
We honour standard logging library string formats.
Reordering log records position¶
You can also change the order of the following log record keys via the log_record_order parameter: level, location, message, xray_trace_id, and timestamp.
Setting timestamp to UTC¶
By default, this Logger and the standard logging library emit records using local time timestamps. You can override this behaviour by updating the current converter set in our formatter:
Testing your code¶
When unit testing your code that makes use of the inject_lambda_context decorator, you need to pass a dummy Lambda context, or else Logger will fail.
This is a Pytest sample that provides the minimum information necessary for Logger to succeed:
Note that dataclasses are available in Python 3.7+ only.
Pytest live log feature¶
The Pytest Live Log feature duplicates emitted log messages in order to style log statements according to their levels; for this to work, use the POWERTOOLS_LOG_DEDUPLICATION_DISABLED env var.
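For example, when running your tests (flag values illustrative):

```shell
POWERTOOLS_LOG_DEDUPLICATION_DISABLED="1" pytest -o log_cli=1
```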
Warning
This feature should be used with care, as it explicitly disables our ability to filter propagated messages to the root logger (if configured).
Built-in Correlation ID expressions¶
New in 1.12.0
You can use any of the following built-in JMESPath expressions as part of the inject_lambda_context decorator.
Escaping necessary for the - character
Any object key named with - must be escaped, for example request.headers."x-amzn-trace-id".
Name | Expression | Description |
---|---|---|
API_GATEWAY_REST | "requestContext.requestId" | API Gateway REST API request ID |
API_GATEWAY_HTTP | "requestContext.requestId" | API Gateway HTTP API request ID |
APPSYNC_RESOLVER | 'request.headers."x-amzn-trace-id"' | AppSync X-Ray Trace ID |
APPLICATION_LOAD_BALANCER | 'headers."x-amzn-trace-id"' | ALB X-Ray Trace ID |
EVENT_BRIDGE | "id" | EventBridge Event ID |
FAQ¶
How can I enable boto3 and botocore library logging?
You can enable botocore and boto3 logs by using the set_stream_logger method; it adds a stream handler for the given name and level to the logging module. By default, this logs all boto3 messages to stdout.
What's the difference between structure_logs and extra?
Keys added with structure_logs will persist across multiple log messages, while keys added via extra will only be available in a given log message operation.
Here's an example where we persist payment_id but not booking_id. Note that payment_id remains in both log messages while booking_id is only available in the first message.