Logger
Logger provides an opinionated logger with output structured as JSON.
Key features¶
- Captures key fields from Lambda context and cold start, and structures logging output as JSON
- Log Lambda event when instructed (disabled by default)
- Log sampling enables DEBUG log level for a percentage of requests (disabled by default)
- Append additional keys to structured log at any point in time
Getting started¶
Tip
All examples shared in this documentation are available within the project repository.
Logger requires two settings:
Setting | Description | Environment variable | Constructor parameter |
---|---|---|---|
Logging level | Sets how verbose Logger should be (INFO, by default) | `LOG_LEVEL` | `level` |
Service | Sets **service** key that will be present across all log statements | `POWERTOOLS_SERVICE_NAME` | `service` |
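The original SAM template example is not reproduced here. As a substitute, a minimal Python sketch exercising both settings (the service name "payment" is illustrative):

```python
from aws_lambda_powertools import Logger

# Both settings can come from env vars (LOG_LEVEL, POWERTOOLS_SERVICE_NAME)
# or be passed explicitly to the constructor, as shown here.
logger = Logger(service="payment", level="INFO")

logger.info("Hello")  # emits a structured JSON log line
```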
Standard structured keys¶
Logger adds the following keys to your structured logs:

Key | Example | Note |
---|---|---|
**level**: `str` | `INFO` | Logging level |
**location**: `str` | `collect.handler:1` | Source code location where statement was executed |
**message**: `Any` | `Collecting payment` | Unserializable JSON values are cast as `str` |
**timestamp**: `str` | `2021-05-03 10:20:19,650+0200` | Timestamp with milliseconds; uses local timezone by default |
**service**: `str` | `payment` | Service name defined; `service_undefined` by default |
**xray_trace_id**: `str` | `1-5759e988-bd862e3fe1be46a994272793` | When tracing is enabled, it shows X-Ray Trace ID |
**sampling_rate**: `float` | `0.1` | When enabled, it shows sampling rate in percentage e.g. 10% |
**exception_name**: `str` | `ValueError` | When `logger.exception` is used and there is an exception |
**exception**: `str` | `Traceback (most recent call last)..` | When `logger.exception` is used and there is an exception |
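For instance, a plain `logger.info` call produces a JSON line containing these keys. A minimal sketch, with illustrative field values in the output comment:

```python
from aws_lambda_powertools import Logger

logger = Logger(service="payment")
logger.info("Collecting payment")
# Example output (a single line in practice; values illustrative):
# {"level": "INFO", "location": "collect.handler:1", "message": "Collecting payment",
#  "timestamp": "2021-05-03 10:20:19,650+0200", "service": "payment"}
```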
Capturing Lambda context info¶
You can enrich your structured logs with key Lambda context information via `inject_lambda_context`.
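The original code samples are not reproduced here; a minimal sketch of the decorator usage:

```python
from aws_lambda_powertools import Logger
from aws_lambda_powertools.utilities.typing import LambdaContext

logger = Logger(service="payment")

@logger.inject_lambda_context
def lambda_handler(event: dict, context: LambdaContext) -> dict:
    logger.info("Collecting payment")  # enriched with cold_start, function_name, etc.
    return {"statusCode": 200}
```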
When used, this will include the following keys:

Key | Example |
---|---|
**cold_start**: `bool` | `false` |
**function_name**: `str` | `example-powertools-HelloWorldFunction-1P1Z6B39FLU73` |
**function_memory_size**: `int` | `128` |
**function_arn**: `str` | `arn:aws:lambda:eu-west-1:012345678910:function:example-powertools-HelloWorldFunction-1P1Z6B39FLU73` |
**function_request_id**: `str` | `899856cb-83d1-40d7-8611-9e78f15f32f4` |
Logging incoming event¶
When debugging in non-production environments, you can instruct Logger to log the incoming event with the `log_event` param or via the `POWERTOOLS_LOGGER_LOG_EVENT` env var.
Warning
This is disabled by default to prevent sensitive info being logged
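A minimal sketch of enabling it via the decorator:

```python
from aws_lambda_powertools import Logger

logger = Logger(service="payment")

@logger.inject_lambda_context(log_event=True)
def lambda_handler(event: dict, context) -> str:
    # the incoming event is logged on each invocation
    return "hello world"
```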
Setting a Correlation ID¶
You can set a Correlation ID using the `correlation_id_path` param by passing a JMESPath expression.
Tip
You can retrieve correlation IDs via the `get_correlation_id` method.
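A minimal sketch, assuming the incoming event carries the ID in a hypothetical `headers.my_request_id_header` field:

```python
from aws_lambda_powertools import Logger

logger = Logger(service="payment")

# "headers.my_request_id_header" is a hypothetical JMESPath expression for illustration
@logger.inject_lambda_context(correlation_id_path="headers.my_request_id_header")
def lambda_handler(event: dict, context) -> str:
    logger.info("Collecting payment")  # includes the correlation_id key
    return "hello world"
```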
set_correlation_id method¶
You can also use the `set_correlation_id` method to inject it anywhere else in your code. The example below uses the Event Source Data Classes utility to easily access event properties.
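A minimal sketch of that approach:

```python
from aws_lambda_powertools import Logger
from aws_lambda_powertools.utilities.data_classes import APIGatewayProxyEvent

logger = Logger(service="payment")

def lambda_handler(event: dict, context) -> str:
    event = APIGatewayProxyEvent(event)
    logger.set_correlation_id(event.request_context.request_id)
    logger.info("Collecting payment")
    return "hello world"
```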
Known correlation IDs¶
To ease routine tasks like extracting correlation ID from popular event sources, we provide built-in JMESPath expressions.
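For example, using the built-in expression for API Gateway REST APIs (see the full table under Built-in Correlation ID expressions):

```python
from aws_lambda_powertools import Logger
from aws_lambda_powertools.logging import correlation_paths

logger = Logger(service="payment")

@logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST)
def lambda_handler(event: dict, context) -> str:
    logger.info("Collecting payment")
    return "hello world"
```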
Appending additional keys¶
Info: Custom keys are persisted across warm invocations
Always set additional keys as part of your handler to ensure they have the latest value, or explicitly clear them with `clear_state=True`.
You can append additional keys using either mechanism:
- Persist new keys across all future log messages via the `append_keys` method
- Add additional keys on a per log message basis as keyword arguments, or via the `extra` parameter
append_keys method¶
Warning
`append_keys` is not thread-safe; please see the RFC.
You can append your own keys to your existing Logger via the `append_keys(**additional_key_values)` method.
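A minimal sketch, conditionally appending a hypothetical `order_id` key:

```python
from aws_lambda_powertools import Logger

logger = Logger(service="payment")

def lambda_handler(event: dict, context) -> str:
    order_id = event.get("order_id")  # hypothetical payload field; may be absent

    # ensure order_id has the latest value before logging;
    # keys whose value is None are automatically rejected by Logger
    logger.append_keys(order_id=order_id)
    logger.info("Collecting payment")
    return "hello world"
```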
Tip: Logger will automatically reject any key with a None value
If you conditionally add keys depending on the payload, you can follow the example above.
It adds `order_id` if its value is not empty, and in subsequent invocations where `order_id` might not be present it removes it from the Logger.
Ephemeral metadata¶
You can pass an arbitrary number of keyword arguments (kwargs) to all log level methods, e.g. `logger.info`, `logger.warning`.
Two common use cases for this feature are enriching log statements with additional metadata, and adding certain keys only conditionally.
Any keyword argument added this way will not be persisted in subsequent messages.
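A minimal sketch, using a hypothetical `request_id` value:

```python
from aws_lambda_powertools import Logger

logger = Logger(service="payment")

logger.info("Collecting payment", request_id="1123")  # request_id appears here only
logger.info("Payment collected")                      # request_id is not carried over
```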
extra parameter¶
The `extra` parameter is available in all log level methods, as implemented in the standard logging library, e.g. `logger.info`, `logger.warning`.
It accepts any dictionary, and all of its keys will be added as part of the root structure of the logs for that log statement.
Any key added using `extra` will not be persisted in subsequent messages.
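A minimal sketch:

```python
from aws_lambda_powertools import Logger

logger = Logger(service="payment")

fields = {"request_id": "1123"}  # hypothetical metadata
logger.info("Collecting payment", extra=fields)
logger.info("Payment collected")  # request_id is not present here
```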
Removing additional keys¶
You can remove any additional key from the Logger state using `remove_keys`.
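A minimal sketch:

```python
from aws_lambda_powertools import Logger

logger = Logger(service="payment")

def lambda_handler(event: dict, context) -> str:
    logger.append_keys(sample_key="value")
    logger.info("Collecting payment")     # includes sample_key

    logger.remove_keys(["sample_key"])
    logger.info("Payment collected")      # sample_key no longer present
    return "hello world"
```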
Clearing all state¶
Logger is commonly initialized in the global scope. Due to Lambda Execution Context reuse, this means that custom keys can be persisted across invocations. If you want all custom keys to be deleted, you can use the `clear_state=True` param in the `inject_lambda_context` decorator.
Tip: When is this useful?
It is useful when you add multiple custom keys conditionally, instead of setting a default `None` value if not present. Any key with a `None` value is automatically removed by Logger.
Danger: This can have unintended side effects if you use Layers
Lambda Layers code is imported before the Lambda handler.
This means that `clear_state=True` will instruct Logger to remove any keys previously added before Lambda handler execution proceeds.
You can either avoid running any code as part of Lambda Layers global scope, or override keys with their latest value as part of the handler's execution.
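A minimal sketch, conditionally adding a hypothetical `permissions_granted` key that is wiped on every invocation:

```python
from aws_lambda_powertools import Logger

logger = Logger(service="payment")

@logger.inject_lambda_context(clear_state=True)
def lambda_handler(event: dict, context) -> str:
    if event.get("is_admin"):  # hypothetical payload field
        logger.append_keys(permissions_granted="all")

    logger.info("Collecting payment")  # permissions_granted only present for admins
    return "hello world"
```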
Logging exceptions¶
Use the `logger.exception` method to log contextual information about exceptions. Logger will include the `exception_name` and `exception` keys to aid troubleshooting and error enumeration.
Tip
You can use your preferred Log Analytics tool to enumerate and visualize exceptions across all your services using the `exception_name` key.
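A minimal sketch:

```python
from aws_lambda_powertools import Logger

logger = Logger(service="payment")

def lambda_handler(event: dict, context) -> str:
    try:
        1 / 0  # deliberately fail for illustration
    except ZeroDivisionError:
        # adds exception_name and exception keys to the log record
        logger.exception("Received a division by zero error")
    return "hello world"
```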
Uncaught exceptions¶
CAUTION: some users reported a problem that causes this functionality not to work in the Lambda runtime. We recommend that you don't use this feature for the time being.
Logger can optionally log uncaught exceptions by setting `log_uncaught_exceptions=True` at initialization.
Logger will replace any exception hook previously registered via `sys.excepthook`.
What are uncaught exceptions?
It's any raised exception that wasn't handled by an `except` statement, leading a Python program to a non-successful exit.
They are typically raised intentionally to signal a problem (`raise ValueError`), or are exceptions propagated from elsewhere in your code that you didn't handle, willingly or not (`KeyError`, `json.JSONDecodeError`, etc.).
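A minimal sketch, assuming the third-party `requests` library and a hypothetical failing endpoint:

```python
import requests  # assumption: available as a dependency

from aws_lambda_powertools import Logger

ENDPOINT = "http://httpbin.org/status/500"  # hypothetical failing endpoint
logger = Logger(log_uncaught_exceptions=True)

def lambda_handler(event: dict, context) -> str:
    ret = requests.get(ENDPOINT)
    # an uncaught exception raised here is logged via the replaced sys.excepthook
    ret.raise_for_status()
    return "hello world"
```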
Date formatting¶
Logger uses Python's standard logging date format with the addition of timezone: `2021-05-03 11:47:12,494+0200`.
You can easily change the date format using one of the following parameters:
- `datefmt`: accepts any strftime format codes. Use `%F` if you need milliseconds.
- `use_rfc3339`: this flag will use a format compliant with both RFC 3339 and ISO 8601: `2022-10-27T16:27:43.738+02:00`
Prefer using datetime string formats?
Use the `use_datetime_directive` flag along with `datefmt` to instruct Logger to use `datetime` instead of `time.strftime`.
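A minimal sketch of both options (format string and service names are illustrative):

```python
from aws_lambda_powertools import Logger

# strftime format codes; %F is the custom directive for milliseconds
date_format = "%m/%d/%Y %I:%M:%S %p"
logger_custom = Logger(service="payment", datefmt=date_format)

# RFC 3339 / ISO 8601 compliant timestamps
logger_rfc3339 = Logger(service="order", use_rfc3339=True)
```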
Advanced¶
Built-in Correlation ID expressions¶
You can use any of the following built-in JMESPath expressions as part of inject_lambda_context decorator.
Note: Any object key named with `-` must be escaped
For example, `request.headers."x-amzn-trace-id"`.

Name | Expression | Description |
---|---|---|
API_GATEWAY_REST | `"requestContext.requestId"` | API Gateway REST API request ID |
API_GATEWAY_HTTP | `"requestContext.requestId"` | API Gateway HTTP API request ID |
APPSYNC_RESOLVER | `'request.headers."x-amzn-trace-id"'` | AppSync X-Ray Trace ID |
APPLICATION_LOAD_BALANCER | `'headers."x-amzn-trace-id"'` | ALB X-Ray Trace ID |
EVENT_BRIDGE | `"id"` | EventBridge Event ID |
Reusing Logger across your code¶
Similar to Tracer, a new instance that uses the same `service` name (env var or explicit parameter) will reuse a previous Logger instance, just like `logging.getLogger("logger_name")` would in the standard library if called with the same logger name.
Notice in the CloudWatch Logs output how `payment_id` appears as expected when logging in `collect.py`.
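The original multi-file example is not reproduced here; a minimal sketch of the same idea, assuming hypothetical `payment.py` and `collect.py` modules:

```python
# payment.py
from aws_lambda_powertools import Logger

logger = Logger()  # service name resolved from POWERTOOLS_SERVICE_NAME, e.g. "payment"

def inject_payment_id(context: dict) -> None:
    logger.append_keys(payment_id=context.get("payment_id"))
```

```python
# collect.py
from aws_lambda_powertools import Logger

import payment

logger = Logger()  # same service name, so the previous Logger instance is reused

def lambda_handler(event: dict, context) -> str:
    payment.inject_payment_id(event)
    logger.info("Collecting payment")  # includes payment_id appended in payment.py
    return "hello world"
```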
Note: About Child Loggers
Coming from the standard library, you might be used to using `logging.getLogger(__name__)`. This will create a new instance of a Logger with a different name.
In Powertools, you can get the same effect by using the `child=True` parameter: `Logger(child=True)`. This creates a new Logger instance named after `service.<module>`. All state changes will be propagated bi-directionally between Child and Parent.
For that reason, there could be side effects depending on the order in which the Child Logger is instantiated, because Child Loggers don't have a handler.
For example, if you instantiated a Child Logger and immediately used `logger.append_keys`/`remove_keys`/`set_correlation_id` to update logging state, this might fail if the Parent Logger wasn't instantiated.
In this scenario, you can either ensure any calls manipulating state are only made after a Parent Logger is instantiated (example above), or refrain from using the `child=True` parameter altogether.
Sampling debug logs¶
Use sampling when you want to dynamically change your log level to DEBUG based on a percentage of your concurrent/cold start invocations.
You can use values ranging from `0.0` to `1` (100%) when setting the `POWERTOOLS_LOGGER_SAMPLE_RATE` env var, or the `sample_rate` parameter in Logger.
Tip: When is this useful?
Let's imagine a sudden spike in concurrency triggered a transient issue downstream. When looking into the logs you might not have enough information, and while you can adjust log levels, it might not happen again.
This feature takes into account transient issues where additional debugging information can be useful.
The sampling decision happens at Logger initialization. This means sampling may happen significantly more or less often than the configured rate depending on your traffic patterns, for example a steady low number of invocations and thus few cold starts.
Note
Open a feature request if you want Logger to calculate sampling for every invocation
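A minimal sketch:

```python
from aws_lambda_powertools import Logger

# ~10% of cold starts will emit DEBUG-level logs
logger = Logger(service="payment", sample_rate=0.1)

def lambda_handler(event: dict, context) -> str:
    logger.debug("Verifying whether order_id is present")  # only for sampled invocations
    logger.info("Collecting payment")
    return "hello world"
```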
LambdaPowertoolsFormatter¶
Logger propagates a few formatting configurations to the built-in `LambdaPowertoolsFormatter` logging formatter.
If you prefer configuring it separately, or you'd want to bring this JSON Formatter to another application, these are the supported settings:
Parameter | Description | Default |
---|---|---|
`json_serializer` | function to serialize `obj` to a JSON formatted `str` | `json.dumps` |
`json_deserializer` | function to deserialize `str`, `bytes`, `bytearray` containing a JSON document to a Python `obj` | `json.loads` |
`json_default` | function to coerce unserializable values, when no custom serializer/deserializer is set | `str` |
`datefmt` | string directives (strftime) to format log timestamp | `%Y-%m-%d %H:%M:%S,%F%z`, where `%F` is a custom ms directive |
`use_datetime_directive` | format the `datefmt` timestamps using `datetime`, not `time` (also supports the custom `%F` directive for milliseconds) | `False` |
`utc` | set logging timestamp to UTC | `False` |
`log_record_order` | set order of log keys when logging | `["level", "location", "message", "timestamp"]` |
`kwargs` | key-value to be included in log messages | `None` |
Info
When the `POWERTOOLS_DEV` env var is present and set to `"true"`, Logger's default serializer (`json.dumps`) will pretty-print log messages for easier readability.
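A minimal sketch of pre-configuring the Powertools for AWS Lambda (Python) Formatter (service name and settings are illustrative):

```python
from aws_lambda_powertools import Logger
from aws_lambda_powertools.logging.formatter import LambdaPowertoolsFormatter

formatter = LambdaPowertoolsFormatter(utc=True, log_record_order=["message"])
logger = Logger(service="example", logger_formatter=formatter)
```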
Observability providers¶
In this context, an observability provider is an AWS Lambda Partner offering a platform for logging, metrics, traces, etc.
You can send logs to the observability provider of your choice via Lambda Extensions. In most cases, you shouldn't need any custom Logger configuration, and logs will be shipped async without any performance impact.
Built-in formatters¶
In rare circumstances where JSON logs are not parsed correctly by your provider, we offer built-in formatters to make this transition easier.
Provider | Formatter | Notes |
---|---|---|
Datadog | `DatadogLogFormatter` | Modifies default timestamp to use RFC 3339 by default |
You can import and use them as any other Logger formatter via the `logger_formatter` parameter:
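A minimal sketch:

```python
from aws_lambda_powertools import Logger
from aws_lambda_powertools.logging.formatters.datadog import DatadogLogFormatter

logger = Logger(service="payment", logger_formatter=DatadogLogFormatter())
logger.info("hello")
```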
Migrating from other Loggers¶
If you're migrating from other Loggers, there are a few key points to be aware of: the service parameter, inheriting Loggers, overriding log records, and logging exceptions.
The service parameter¶
Service is what defines the Logger name, including what the Lambda function is responsible for, or is part of (e.g. payment service).
For Logger, `service` is the logging key customers can use to search log operations for one or more functions. For example, search for all errors, or messages like X, where service is payment.
Inheriting Loggers¶
Tip: Prefer Logger Reuse feature over inheritance unless strictly necessary, see caveats.
Python logging hierarchy happens via the dot notation: `service`, `service.child`, `service.child_2`.
For inheritance, Logger uses the `child=True` parameter along with `service` being the same value across Loggers.
For child Loggers, we introspect the name of the module where `Logger(child=True, service="name")` is called, and we name your Logger as `{service}.{filename}`.
Danger
A common issue when migrating from other Loggers is that `service` might be defined in the parent Logger (no child param), and not defined in the child Logger:
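A minimal sketch of the problematic setup, using hypothetical module names:

```python
# your_lib.py (imported before the handler module's Logger is created)
from aws_lambda_powertools import Logger

# no service defined: registers under "service_undefined"
child_logger = Logger(child=True)

def record_sale() -> None:
    child_logger.info("registered sale event")  # never reaches standard output
```

```python
# handler.py
from aws_lambda_powertools import Logger

import your_lib

# registers a Logger named "payment"
logger = Logger(service="payment")

def lambda_handler(event: dict, context) -> str:
    your_lib.record_sale()
    return "hello world"
```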
In this case, Logger will register a Logger named `payment`, and a Logger named `service_undefined`. The latter isn't inheriting from the parent, and will have no handler, resulting in no message being logged to standard output.
Tip
This can be fixed by either ensuring both have the `service` value as `payment`, or simply using the environment variable `POWERTOOLS_SERVICE_NAME` to ensure the service value is the same across all Loggers when not explicitly set.
Do this instead:
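A minimal sketch of the fix, keeping `service` consistent across both modules:

```python
# your_lib.py
from aws_lambda_powertools import Logger

# same service value as the parent, so inheritance works
child_logger = Logger(service="payment", child=True)

def record_sale() -> None:
    child_logger.info("registered sale event")  # now logged as expected
```

```python
# handler.py
from aws_lambda_powertools import Logger

import your_lib

logger = Logger(service="payment")

def lambda_handler(event: dict, context) -> str:
    your_lib.record_sale()
    return "hello world"
```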
Overriding Log records¶
You might want to continue to use the same date formatting style, or override `location` to display the `package.function_name:line_number` as you previously had.
Logger allows you to either change the format or suppress the following keys at initialization: `location`, `timestamp`, and `xray_trace_id`.
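A minimal sketch, assuming these keys accept standard logging format directives and that passing `None` suppresses a key:

```python
from aws_lambda_powertools import Logger

# override the location key format using standard logging directives
logger = Logger(service="payment", location="[%(funcName)s] %(module)s")

# or suppress a key entirely by passing None (assumption based on the text above)
logger_no_location = Logger(service="order", location=None)
```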
Reordering log keys position¶
You can change the order of standard Logger keys, or of any keys appended later at runtime, via the `log_record_order` parameter.
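A minimal sketch:

```python
from aws_lambda_powertools import Logger

# emit "message" first, followed by the remaining keys
logger = Logger(service="payment", log_record_order=["message"])
logger.info("Collecting payment")
```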
Setting timestamp to UTC¶
By default, this Logger and the standard logging library emit records using local time timestamps. You can override this behavior via the `utc` parameter:
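A minimal sketch:

```python
from aws_lambda_powertools import Logger

logger_in_utc = Logger(service="payment", utc=True)
logger_in_utc.info("Logging with UTC timestamp")
```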
Custom function for unserializable values¶
By default, Logger uses `str` to handle values that are not serializable to JSON. You can override this behavior via the `json_default` parameter by passing a Callable:
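A minimal sketch, with hypothetical coercion rules for illustration:

```python
from datetime import date, datetime

from aws_lambda_powertools import Logger

def custom_json_default(value: object) -> str:
    # hypothetical rules: ISO-format dates, otherwise a readable placeholder
    if isinstance(value, (date, datetime)):
        return value.isoformat()
    return f"<non-serializable: {type(value).__name__}>"

logger = Logger(service="payment", json_default=custom_json_default)
logger.info({"ingestion_time": datetime.utcnow()})
```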
Bring your own handler¶
By default, Logger uses `StreamHandler` and logs to standard output. You can override this behavior via the `logger_handler` parameter:
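For example, a minimal sketch configuring Logger to output to a file (the path is a hypothetical destination):

```python
import logging
from pathlib import Path

from aws_lambda_powertools import Logger

log_file = Path("/tmp/log.json")  # hypothetical destination
log_file_handler = logging.FileHandler(filename=log_file)

logger = Logger(service="payment", logger_handler=log_file_handler)
logger.info("Collecting payment")
```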
Bring your own formatter¶
By default, Logger uses LambdaPowertoolsFormatter, which persists its custom structure between non-cold start invocations. There could be scenarios where the existing feature set isn't sufficient for your formatting needs.
Info
The most common use cases are remapping keys by bringing your existing schema, and redacting sensitive information you know upfront.
For these, you can override the `serialize` method from LambdaPowertoolsFormatter.
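A minimal sketch that renames the `message` key to `event` (the remapping is illustrative):

```python
from aws_lambda_powertools import Logger
from aws_lambda_powertools.logging.formatter import LambdaPowertoolsFormatter

class CustomFormatter(LambdaPowertoolsFormatter):
    def serialize(self, log: dict) -> str:
        """Serialize the final structured log dict to a JSON string"""
        log["event"] = log.pop("message")  # rename message key to event
        return self.json_serializer(log)

logger = Logger(service="example", logger_formatter=CustomFormatter())
logger.info("hello")
```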
The `log` argument is the final log record containing our standard keys, optionally Lambda context keys, and any custom keys you might have added via `append_keys` or the `extra` parameter.
For exceptional cases where you want to completely replace our formatter logic, you can subclass `BasePowertoolsFormatter`.
Warning
You will need to implement `append_keys`, `clear_state`, override `format`, and optionally `remove_keys` to keep the same feature set Powertools for AWS Lambda (Python) Logger provides. This also means keeping state of logging keys added.
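A minimal sketch of such a subclass; the structure and output key names are illustrative, not the upstream example verbatim:

```python
import json
import logging
from typing import Iterable, List, Optional

from aws_lambda_powertools import Logger
from aws_lambda_powertools.logging.formatter import BasePowertoolsFormatter

class CustomFormatter(BasePowertoolsFormatter):
    def __init__(self, log_record_order: Optional[List[str]] = None, *args, **kwargs):
        self.log_record_order = log_record_order or ["level", "location", "message", "timestamp"]
        self.log_format = dict.fromkeys(self.log_record_order)  # state of appended keys
        super().__init__(*args, **kwargs)

    def append_keys(self, **additional_keys):
        self.log_format.update(additional_keys)

    def remove_keys(self, keys: Iterable[str]):
        for key in keys:
            self.log_format.pop(key, None)

    def clear_state(self):
        self.log_format = dict.fromkeys(self.log_record_order)

    def format(self, record: logging.LogRecord) -> str:
        """Format the log record as a structured JSON string"""
        return json.dumps(
            {
                "event": super().format(record),
                "timestamp": self.formatTime(record),
                **self.log_format,
            }
        )

logger = Logger(service="payment", logger_formatter=CustomFormatter())
logger.info("hello")
```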
Bring your own JSON serializer¶
By default, Logger uses `json.dumps` and `json.loads` as serializer and deserializer respectively. There could be scenarios where you are making use of alternative JSON libraries like orjson.
As parameters don't always translate well between them, you can pass any callable that receives a `dict` and returns a `str`:
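For example, a minimal sketch using the Rust-backed orjson library as serializer (assuming `orjson` is installed; note it returns `bytes` rather than `str`):

```python
import functools

import orjson  # assumption: installed as a dependency

from aws_lambda_powertools import Logger

# orjson.dumps takes options rather than kwargs, so pass them via functools.partial
custom_serializer = functools.partial(orjson.dumps, option=orjson.OPT_SERIALIZE_NUMPY)
custom_deserializer = orjson.loads

logger = Logger(
    service="payment",
    json_serializer=custom_serializer,
    json_deserializer=custom_deserializer,
)
```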
Testing your code¶
Inject Lambda Context¶
When unit testing code that makes use of the `inject_lambda_context` decorator, you need to pass a dummy Lambda Context, or else Logger will fail.
This is a Pytest sample that provides the minimum information necessary for Logger to succeed. Note that dataclasses are available in Python 3.7+ only.
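A minimal sketch of such a fixture (the ARN, request ID, and handler name are placeholders):

```python
from dataclasses import dataclass

import pytest

@pytest.fixture
def lambda_context():
    @dataclass
    class LambdaContext:
        function_name: str = "test"
        memory_limit_in_mb: int = 128
        invoked_function_arn: str = "arn:aws:lambda:eu-west-1:123456789012:function:test"
        aws_request_id: str = "da658bd3-2d6f-4e7b-8ec2-937234644fdc"

    return LambdaContext()

def test_lambda_handler(lambda_context):
    test_event = {"test": "event"}
    # your_lambda_handler is the handler under test (hypothetical import)
    your_lambda_handler(test_event, lambda_context)
```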
Tip
Check out the built-in Pytest caplog fixture to assert plain log messages
Pytest live log feature¶
The Pytest Live Log feature duplicates emitted log messages in order to style log statements according to their levels. For this to work, set the `POWERTOOLS_LOG_DEDUPLICATION_DISABLED` env var when invoking Pytest, e.g. `POWERTOOLS_LOG_DEDUPLICATION_DISABLED="1" pytest -o log_cli=1`.
Warning
This feature should be used with care, as it explicitly disables our ability to filter propagated messages to the root logger (if configured).
FAQ¶
How can I enable boto3 and botocore library logging?¶
You can enable `botocore` and `boto3` logs by using the `set_stream_logger` method. This method adds a stream handler for the given name and level to the logging module. By default, this logs all boto3 messages to stdout.
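A minimal sketch of enabling AWS SDK logging:

```python
import boto3

from aws_lambda_powertools import Logger

# add stream handlers for the boto3 and botocore loggers
boto3.set_stream_logger()
boto3.set_stream_logger("botocore")

logger = Logger(service="payment")
client = boto3.client("s3")

def lambda_handler(event: dict, context) -> dict:
    return client.list_buckets()  # SDK debug logs are emitted alongside Logger output
```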
How can I enable Powertools for AWS Lambda (Python) logging for imported libraries?¶
You can copy the Logger setup to all or sub-sets of registered external loggers. Use the `copy_config_to_registered_loggers` method to do this.
Tip
To help differentiate between loggers, we include the standard logger `name` attribute for all loggers we copied configuration to.
By default, all registered loggers will be modified. You can change this behavior by providing `include` and `exclude` attributes. You can also provide an optional `log_level` attribute that external loggers will be configured with.
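A minimal sketch of cloning Logger config to other registered standard loggers:

```python
import logging

from aws_lambda_powertools import Logger
from aws_lambda_powertools.logging import utils

logger = Logger()

external_logger = logging.getLogger("example")  # hypothetical third-party logger
utils.copy_config_to_registered_loggers(source_logger=logger)
external_logger.info("external logger statement, now structured as JSON")
```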
How can I add standard library logging attributes to a log record?¶
Python standard library log records contain a large set of attributes; however, only a few are included in the Powertools for AWS Lambda (Python) Logger log record by default.
You can include any of these logging attributes as key-value arguments (`kwargs`) when instantiating `Logger` or `LambdaPowertoolsFormatter`.
You can also add them later anywhere in your code with `append_keys`, or remove them with the `remove_keys` method.
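A minimal sketch, adding the standard `name` attribute via a logging format directive (the choice of attribute is illustrative):

```python
from aws_lambda_powertools import Logger

# include the standard logging "name" attribute in every log record
logger = Logger(service="payment", name="%(name)s")

def lambda_handler(event: dict, context) -> str:
    logger.info("Collecting payment")
    return "hello world"
```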
For log records originating from the Powertools for AWS Lambda (Python) Logger, the `name` attribute will be the same as `service`; for log records coming from a standard library logger, it will be the name of that logger (i.e. what was used as the name argument to `logging.getLogger`).
What's the difference between append_keys and extra?¶
Keys added with `append_keys` will persist across multiple log messages, while keys added via `extra` will only be available in a given log message operation.
Here's an example where we persist `payment_id` but not `request_id`. Note that `payment_id` remains in both log messages, while `request_id` is only available in the first message.
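A minimal sketch of that behavior (the key values are illustrative):

```python
from aws_lambda_powertools import Logger

logger = Logger(service="payment")

def lambda_handler(event: dict, context) -> str:
    logger.append_keys(payment_id="123456789")
    logger.info("Collecting payment", extra={"request_id": "1123"})  # both keys present
    logger.info("Payment collected")  # only payment_id is present
    return "hello world"
```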
How do I aggregate and search Powertools for AWS Lambda (Python) logs across accounts?¶
As of now, Elasticsearch (ELK) or third-party solutions are best suited to this task. Please refer to this discussion for more details.