Logging
Logging provides an opinionated logger with output structured as JSON.
Key features
- Leverages standard logging libraries: SLF4J as the API, and log4j2 or logback for the implementation
- Captures key fields from the Lambda context and cold start, and structures logging output as JSON
- Optionally logs Lambda request
- Optionally logs Lambda response
- Optionally supports log sampling by including a configurable percentage of DEBUG logs in logging output
- Optionally supports buffering lower level logs and flushing them on error or manually
- Allows additional keys to be appended to the structured log at any point in time
- GraalVM support
Getting started
Tip
You can find complete examples in the project repository.
Installation
Depending on your preference, choose either log4j2 or logback as your log provider. In both cases, you need to configure AspectJ to weave the code and make sure the annotation is processed.
Maven
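A minimal Maven setup might look like the following sketch. The artifact IDs are the Powertools modules named in this documentation; the version placeholders must be replaced with the latest releases, and the exact plugin configuration should be checked against a complete example from the project repository.

```xml
<dependencies>
    <dependency>
        <groupId>software.amazon.lambda</groupId>
        <!-- or powertools-logging-logback, depending on your log provider -->
        <artifactId>powertools-logging-log4j</artifactId>
        <version>REPLACE_WITH_LATEST</version>
    </dependency>
</dependencies>

<build>
    <plugins>
        <!-- AspectJ weaving so the @Logging annotation is processed -->
        <plugin>
            <groupId>dev.aspectj</groupId>
            <artifactId>aspectj-maven-plugin</artifactId>
            <version>REPLACE_WITH_LATEST</version>
            <configuration>
                <aspectLibraries>
                    <aspectLibrary>
                        <groupId>software.amazon.lambda</groupId>
                        <artifactId>powertools-logging</artifactId>
                    </aspectLibrary>
                </aspectLibraries>
            </configuration>
            <executions>
                <execution>
                    <goals>
                        <goal>compile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
```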
Gradle
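A Gradle sketch, assuming the freefair AspectJ post-compile-weaving plugin is used for weaving (plugin choice and version placeholders are assumptions; check a complete example in the project repository):

```groovy
plugins {
    id 'java'
    // AspectJ post-compile weaving so @Logging is processed (assumed plugin)
    id 'io.freefair.aspectj.post-compile-weaving' version 'REPLACE_WITH_LATEST'
}

dependencies {
    // or powertools-logging-logback, depending on your log provider
    implementation 'software.amazon.lambda:powertools-logging-log4j:REPLACE_WITH_LATEST'
    aspect 'software.amazon.lambda:powertools-logging:REPLACE_WITH_LATEST'
}
```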
Configuration
Main environment variables
The logging module requires two settings:
Environment variable | Setting | Description |
---|---|---|
`POWERTOOLS_LOG_LEVEL` | Logging level | Sets how verbose the logger should be. If not set, the logging configuration file is used |
`POWERTOOLS_SERVICE_NAME` | Service | Sets the service key that will be included in all log statements (default: `service_undefined`) |
Here is an example using AWS Serverless Application Model (SAM):
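A SAM template fragment might look like the following sketch (the function name, handler path, and runtime are illustrative):

```yaml
Resources:
  PaymentFunction:                       # hypothetical function name
    Type: AWS::Serverless::Function
    Properties:
      Handler: org.example.PaymentFunction::handleRequest  # hypothetical handler
      Runtime: java21
      Environment:
        Variables:
          POWERTOOLS_LOG_LEVEL: INFO
          POWERTOOLS_SERVICE_NAME: payment
```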
There are other environment variables which can be set to modify the logging settings at a global scope:
Environment variable | Type | Description |
---|---|---|
`POWERTOOLS_LOGGER_SAMPLE_RATE` | float | Configures the sampling rate at which DEBUG logs should be included. See sampling rate |
`POWERTOOLS_LOGGER_LOG_EVENT` | boolean | Specifies if the incoming Lambda event should be logged. See logging event |
`POWERTOOLS_LOGGER_LOG_RESPONSE` | boolean | Specifies if the Lambda response should be logged. See logging response |
`POWERTOOLS_LOGGER_LOG_ERROR` | boolean | Specifies if a Lambda uncaught exception should be logged. See logging exception |
Logging configuration
Powertools for AWS Lambda (Java) extends the functionality of the underlying library you choose (log4j2 or logback). You can leverage the standard configuration files (log4j2.xml or logback.xml):
With log4j2, we leverage the JsonTemplateLayout to provide structured logging. A default template is provided in powertools (LambdaJsonLayout.json):
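A log4j2.xml sketch using the bundled template (appender and logger names are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
    <Appenders>
        <Console name="JsonAppender" target="SYSTEM_OUT">
            <!-- LambdaJsonLayout.json is the default template shipped with powertools -->
            <JsonTemplateLayout eventTemplateUri="classpath:LambdaJsonLayout.json"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="INFO">
            <AppenderRef ref="JsonAppender"/>
        </Root>
    </Loggers>
</Configuration>
```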
With logback, we leverage a custom Encoder to provide structured logging:
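A logback.xml sketch using the custom encoder (the fully-qualified class name is an assumption based on recent releases; check the artifact for the exact package):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <!-- LambdaJsonEncoder provides the structured JSON output -->
        <encoder class="software.amazon.lambda.powertools.logging.logback.LambdaJsonEncoder"/>
    </appender>
    <root level="INFO">
        <appender-ref ref="console"/>
    </root>
</configuration>
```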
Log level
Log level is generally configured in log4j2.xml or logback.xml. But this level is static, and changing it requires redeploying the function.
Powertools for AWS Lambda lets you change this level dynamically through the POWERTOOLS_LOG_LEVEL environment variable.
We support the following log levels (SLF4J levels): TRACE, DEBUG, INFO, WARN, ERROR.
If the level is set to CRITICAL (supported in log4j but not logback), we revert it back to ERROR.
If the level is set to any other value, we set it to the default value (INFO).
AWS Lambda Advanced Logging Controls (ALC)
When is it useful?
When you want to set a logging policy to drop informational or verbose logs for one or all AWS Lambda functions, regardless of runtime and logger used.
With AWS Lambda Advanced Logging Controls (ALC), you can enforce a minimum log level that Lambda will accept from your application code.
When enabled, you should keep your own log level and ALC log level in sync to avoid data loss.
Here's a sequence diagram demonstrating how ALC drops both INFO and DEBUG logs emitted from Logger when the ALC log level is stricter than the Logger level:
```mermaid
sequenceDiagram
    participant Lambda service
    participant Lambda function
    participant Application Logger
    Note over Lambda service: AWS_LAMBDA_LOG_LEVEL="WARN"
    Note over Application Logger: POWERTOOLS_LOG_LEVEL="DEBUG"
    Lambda service->>Lambda function: Invoke (event)
    Lambda function->>Lambda function: Calls handler
    Lambda function->>Application Logger: logger.error("Something happened")
    Lambda function-->>Application Logger: logger.debug("Something happened")
    Lambda function-->>Application Logger: logger.info("Something happened")
    Lambda service--xLambda service: DROP INFO and DEBUG logs
    Lambda service->>CloudWatch Logs: Ingest error logs
```
Priority of log level settings in Powertools for AWS Lambda
We prioritise log level settings in this order:
1. AWS_LAMBDA_LOG_LEVEL environment variable
2. POWERTOOLS_LOG_LEVEL environment variable
3. Level defined in the log4j2.xml or logback.xml files
If you set POWERTOOLS_LOG_LEVEL lower than ALC, we will emit a warning informing you that your messages will be discarded by Lambda.
Note
With ALC enabled, we are unable to increase the minimum log level below the AWS_LAMBDA_LOG_LEVEL environment variable value. See the AWS Lambda service documentation for more details.
Basic Usage
To use Powertools for AWS Lambda Logging, add the @Logging annotation to your handler and use the standard SLF4J logger:
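A minimal handler might look like the following sketch (the class name and event types are illustrative; the @Logging annotation and SLF4J API are as described above):

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import software.amazon.lambda.powertools.logging.Logging;

public class PaymentFunction implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    // Standard SLF4J logger; Powertools structures the output as JSON
    private static final Logger LOGGER = LoggerFactory.getLogger(PaymentFunction.class);

    @Logging // captures Lambda context fields (cold_start, function_name, ...)
    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent input, Context context) {
        LOGGER.info("Collecting payment");
        return new APIGatewayProxyResponseEvent().withStatusCode(200);
    }
}
```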
Standard structured keys
Your logs will always include the following keys in your structured logging:
Key | Type | Example | Description |
---|---|---|---|
timestamp | String | "2023-12-01T14:49:19.293Z" | Timestamp of the actual log statement; by default uses the default AWS Lambda timezone (UTC) |
level | String | "INFO" | Logging level (any level supported by SLF4J: TRACE, DEBUG, INFO, WARN, ERROR) |
service | String | "payment" | Service name defined; defaults to service_undefined |
sampling_rate | float | 0.1 | Debug logging sampling rate, e.g. 0.1 means 10% (logged if not 0) |
message | String | "Collecting payment" | Log statement value. Unserializable JSON values will be cast to string |
xray_trace_id | String | "1-5759e988-bd862e3fe1be46a994272793" | X-Ray trace ID when Tracing is enabled |
error | Map | { "name": "InvalidAmountException", "message": "Amount must be superior to 0", "stack": "at..." } | Eventual exception (e.g. when doing logger.error("Error", new InvalidAmountException("Amount must be superior to 0"));) |
Note
If you emit a log message with a key that matches one of the standard structured keys or one of the additional structured keys, the Logger will log a warning message and ignore the key.
Additional structured keys
Logging Lambda context information
The following keys will also be added to all your structured logs (unless configured otherwise):
Key | Type | Example | Description |
---|---|---|---|
cold_start | Boolean | false | Cold start value |
function_name | String | "example-PaymentFunction-1P1Z6B39FLU73" | Name of the function |
function_version | String | "12" | Version of the function |
function_memory_size | String | "512" | Memory configured for the function |
function_arn | String | "arn:aws:lambda:eu-west-1:012345678910:function:example-PaymentFunction-1P1Z6B39FLU73" | ARN of the function |
function_request_id | String | "899856cb-83d1-40d7-8611-9e78f15f32f4" | AWS request ID from the Lambda context |
Logging additional keys
Logging a correlation ID
You can set a correlation ID using the correlationIdPath attribute of the @Logging annotation, by passing a JMESPath expression, including our custom JMESPath functions.
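A sketch using a JMESPath expression against the incoming event (the header key is hypothetical):

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;
import software.amazon.lambda.powertools.logging.Logging;

public class App implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    // "headers.my_request_id_header" is a hypothetical key in the incoming event
    @Logging(correlationIdPath = "headers.my_request_id_header")
    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent input, Context context) {
        // every log statement in this invocation now carries a correlation_id field
        return new APIGatewayProxyResponseEvent().withStatusCode(200);
    }
}
```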
Known correlation IDs
To ease routine tasks like extracting correlation ID from popular event sources, we provide built-in JMESPath expressions.
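A sketch using a built-in expression for API Gateway REST APIs. The constants class is assumed to be named CorrelationIdPaths in recent releases (older versions used CorrelationIdPathConstants); check your version.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;
import software.amazon.lambda.powertools.logging.CorrelationIdPaths;
import software.amazon.lambda.powertools.logging.Logging;

public class App implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    // Resolves to the built-in expression "requestContext.requestId"
    @Logging(correlationIdPath = CorrelationIdPaths.API_GATEWAY_REST)
    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent input, Context context) {
        return new APIGatewayProxyResponseEvent().withStatusCode(200);
    }
}
```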
Custom keys
Using StructuredArguments
To append additional keys in your logs, you can use the StructuredArguments class:
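A sketch adding a single key-value pair (the package path of StructuredArguments is an assumption based on recent releases):

```java
import static software.amazon.lambda.powertools.logging.argument.StructuredArguments.entry;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PaymentService {
    private static final Logger LOGGER = LoggerFactory.getLogger(PaymentService.class);

    public void collectPayment(String orderId) {
        // Adds {"orderId": "..."} as a top-level field in the structured log output
        LOGGER.info("Collecting payment", entry("orderId", orderId));
    }
}
```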
StructuredArguments provides several options:
- entry: adds one key and value into the log structure. The value can be any object type.
- entries: adds multiple keys and values (from a Map) into the log structure. The values can be any object type.
- json: adds a key and raw JSON (string) as value into the log structure.
- array: adds one key and multiple values into the log structure. The values can be any object type.
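A sketch of the remaining options, assuming the same StructuredArguments static imports as for entry:

```java
import static software.amazon.lambda.powertools.logging.argument.StructuredArguments.array;
import static software.amazon.lambda.powertools.logging.argument.StructuredArguments.entries;
import static software.amazon.lambda.powertools.logging.argument.StructuredArguments.json;

import java.util.HashMap;
import java.util.Map;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PaymentService {
    private static final Logger LOGGER = LoggerFactory.getLogger(PaymentService.class);

    public void collectPayment(String orderId) {
        Map<String, Object> details = new HashMap<>();
        details.put("orderId", orderId);
        details.put("amount", 42.5);

        LOGGER.info("Collecting payment", entries(details));                           // multiple keys from a Map
        LOGGER.info("Collecting payment", json("payload", "{\"orderId\":\"12345\"}")); // raw JSON string as value
        LOGGER.info("Collecting payment", array("ingredients", "apple", "orange"));    // one key, several values
    }
}
```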
Use arguments without log placeholders
As shown in the example above, you can use arguments (with StructuredArguments) without placeholders ({}) in the message.
If you add the placeholders, the arguments will be logged both as an additional field and as a string in the log message, using the toString() method.
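A fragment contrasting the two forms, assuming an SLF4J LOGGER and a static import of StructuredArguments.entry:

```java
// Without a placeholder: orderId appears only as a JSON field
LOGGER.info("Collecting payment", entry("orderId", orderId));

// With a placeholder: orderId appears as a JSON field AND,
// via toString(), as part of the message text
LOGGER.info("Collecting payment for {}", entry("orderId", orderId));
```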
You can also combine structured arguments with non-structured ones. For example:
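A fragment mixing both kinds of arguments, assuming an SLF4J LOGGER and a static import of StructuredArguments.entry (variable names are hypothetical):

```java
// orderId is interpolated into the message only;
// "amount" additionally becomes a top-level JSON field
LOGGER.info("Collecting payment for order {}", orderId, entry("amount", amount));
```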
Do not use reserved keys in StructuredArguments
If the key name of your structured argument matches any of the standard structured keys or any of the additional structured keys, the Logger will log a warning message and ignore the key. This is to protect you from accidentally overwriting reserved keys such as the log level or Lambda context information.
Using MDC
Mapped Diagnostic Context (MDC) is essentially a key-value store. It is supported by the SLF4J API, logback, and log4j (known as ThreadContext). You can use the standard API:
MDC.put("key", "value");
Custom keys stored in the MDC are persisted across warm invocations
Always set additional keys as part of your handler method to ensure they have the latest value, or explicitly clear them with clearState=true.
Do not add reserved keys to MDC
Avoid adding any of the keys listed in standard structured keys and additional structured keys to your MDC. This may cause unintended behavior and will overwrite the context set by the Logger. Unlike with StructuredArguments, the Logger will not ignore reserved keys set via MDC.
Removing additional keys
You can remove additional keys added with the MDC using MDC.remove("key").
Clearing state
Logger is commonly initialized in the global scope. Due to Lambda Execution Context reuse, custom keys added with the MDC can be persisted across invocations. If you want all custom keys to be deleted, you can use the clearState=true attribute on the @Logging annotation.
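A sketch combining MDC with clearState (the key name and value are hypothetical):

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import org.slf4j.MDC;
import software.amazon.lambda.powertools.logging.Logging;

public class App implements RequestHandler<String, String> {

    @Logging(clearState = true)
    @Override
    public String handleRequest(String input, Context context) {
        // This key applies for the rest of this invocation only;
        // clearState=true wipes the MDC when the handler returns
        MDC.put("userId", "hypothetical-user-id");
        return "ok";
    }
}
```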
clearState is based on MDC.clear(). When set to true, state clearing is automatically done at the end of the handler's execution.
Logging incoming event
When debugging in non-production environments, you can instruct the @Logging annotation to log the incoming event with the logEvent param or via the POWERTOOLS_LOGGER_LOG_EVENT env var.
Warning
This is disabled by default to prevent sensitive info being logged
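A fragment showing the annotation attribute (logResponse and logError, covered below, are used the same way):

```java
@Logging(logEvent = true) // logs the full incoming event as part of the structured output
public String handleRequest(String input, Context context) {
    return "ok";
}
```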
Note
If you use this on a RequestStreamHandler, the SDK must duplicate input streams in order to log them.
Logging handler response
When debugging in non-production environments, you can instruct the @Logging annotation to log the response with the logResponse param or via the POWERTOOLS_LOGGER_LOG_RESPONSE env var.
Warning
This is disabled by default to prevent sensitive info being logged
Note
If you use this on a RequestStreamHandler, Powertools must duplicate output streams in order to log them.
Logging handler uncaught exception
By default, AWS Lambda logs any uncaught exception that might happen in the handler. However, this log is not structured and does not contain any additional context. You can instruct the @Logging annotation to log this kind of exception with the logError param or via the POWERTOOLS_LOGGER_LOG_ERROR env var.
Warning
This is disabled by default to prevent double logging
Advanced
Buffering logs
Log buffering enables you to buffer logs for a specific request or invocation. Enable log buffering by configuring the BufferingAppender in your logging configuration. You can buffer logs at the WARNING, INFO, or DEBUG level, and flush them automatically on error or manually as needed.
This is useful when you want to reduce the number of log messages emitted while still having detailed logs when needed, such as when troubleshooting issues.
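One possible log4j2.xml shape for this is sketched below, assuming the BufferingAppender wraps a target appender via an AppenderRef (the attribute names mirror the parameters described in the next section, but the exact element syntax should be checked against the reference configuration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
    <Appenders>
        <Console name="JsonAppender" target="SYSTEM_OUT">
            <JsonTemplateLayout eventTemplateUri="classpath:LambdaJsonLayout.json"/>
        </Console>
        <!-- Buffers DEBUG logs, emits them on error or manual flush -->
        <BufferingAppender name="BufferedJsonAppender"
                           maxBytes="20480"
                           bufferAtVerbosity="DEBUG"
                           flushOnErrorLog="true">
            <AppenderRef ref="JsonAppender"/>
        </BufferingAppender>
    </Appenders>
    <Loggers>
        <!-- Logger level must be at least as verbose as bufferAtVerbosity -->
        <Root level="DEBUG">
            <AppenderRef ref="BufferedJsonAppender"/>
        </Root>
    </Loggers>
</Configuration>
```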
Configuring the buffer
When configuring log buffering, you have options to fine-tune how logs are captured, stored, and emitted. You can configure the following parameters in the BufferingAppender configuration:
Parameter | Description | Configuration |
---|---|---|
maxBytes | Maximum size of the log buffer in bytes | int (default: 20480 bytes) |
bufferAtVerbosity | Minimum log level to buffer | DEBUG (default), INFO, WARNING |
flushOnErrorLog | Automatically flush the buffer when ERROR or FATAL level logs are emitted | true (default), false |
Logger Level Configuration
To use log buffering effectively, you must set your logger levels to the same level as bufferAtVerbosity or more verbose, so that the logging framework captures and forwards logs to the BufferingAppender. For example, if you want to buffer DEBUG level logs and emit INFO+ level logs directly, you must:
- Set your logger levels to DEBUG in your log4j2.xml or logback.xml configuration
- Set POWERTOOLS_LOG_LEVEL=DEBUG if using the environment variable (see Log level section for more details)
If you want to buffer INFO and WARNING logs but not DEBUG logs, set your log level to INFO and bufferAtVerbosity to WARNING. This allows you to define the lower and upper bounds for buffering. All logs with a more severe level than bufferAtVerbosity will be emitted directly.
Disabling flushOnErrorLog will stop the buffer from being flushed when logging an error. This is useful when you want to control when the buffer is flushed by calling the flush method manually.
Manual buffer control
You can manually control the log buffer using the PowertoolsLogging utility class, which provides a backend-independent API that works with both log4j2 and logback:
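A sketch of manual buffer control (the PowertoolsLogging package path and the surrounding logic are assumptions; the two method names are as documented below):

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import software.amazon.lambda.powertools.logging.Logging;
import software.amazon.lambda.powertools.logging.PowertoolsLogging;

public class App implements RequestHandler<String, String> {
    private static final Logger LOGGER = LoggerFactory.getLogger(App.class);

    @Logging
    @Override
    public String handleRequest(String input, Context context) {
        LOGGER.debug("Detailed diagnostics");  // buffered, not yet emitted
        boolean healthy = "ok".equals(input);  // hypothetical health check
        if (!healthy) {
            PowertoolsLogging.flushBuffer();   // emit all buffered logs, then clear
        } else {
            PowertoolsLogging.clearBuffer();   // discard buffered logs silently
        }
        return input;
    }
}
```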
Available methods:
- PowertoolsLogging.flushBuffer(): outputs all buffered logs and clears the buffer
- PowertoolsLogging.clearBuffer(): discards all buffered logs without outputting them
Flushing on exceptions
Use the @Logging annotation to automatically flush buffered logs when an uncaught exception is raised in your Lambda function. This is enabled by default (flushBufferOnUncaughtError = true), but you can explicitly configure it if needed.
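A fragment showing the attribute explicitly (this is the default behaviour; the handler body is illustrative and assumes an SLF4J LOGGER):

```java
@Logging(flushBufferOnUncaughtError = true)
public String handleRequest(String input, Context context) {
    LOGGER.debug("Buffered diagnostic log");
    // If this throws, the buffered DEBUG log above is flushed before the error propagates
    throw new IllegalStateException("Unexpected error");
}
```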
Buffering workflows
Manual flush
```mermaid
sequenceDiagram
    participant Client
    participant Lambda
    participant Logger
    participant CloudWatch
    Client->>Lambda: Invoke Lambda
    Lambda->>Logger: Initialize with DEBUG level buffering
    Logger-->>Lambda: Logger buffer ready
    Lambda->>Logger: logger.debug("First debug log")
    Logger-->>Logger: Buffer first debug log
    Lambda->>Logger: logger.info("Info log")
    Logger->>CloudWatch: Directly log info message
    Lambda->>Logger: logger.debug("Second debug log")
    Logger-->>Logger: Buffer second debug log
    Lambda->>Logger: Manual flush call
    Logger->>CloudWatch: Emit buffered logs to stdout
    Lambda->>Client: Return execution result
```
Flushing buffer manually
Flushing when logging an error
```mermaid
sequenceDiagram
    participant Client
    participant Lambda
    participant Logger
    participant CloudWatch
    Client->>Lambda: Invoke Lambda
    Lambda->>Logger: Initialize with DEBUG level buffering
    Logger-->>Lambda: Logger buffer ready
    Lambda->>Logger: logger.debug("First log")
    Logger-->>Logger: Buffer first debug log
    Lambda->>Logger: logger.debug("Second log")
    Logger-->>Logger: Buffer second debug log
    Lambda->>Logger: logger.debug("Third log")
    Logger-->>Logger: Buffer third debug log
    Lambda->>Lambda: Exception occurs
    Lambda->>Logger: logger.error("Error details")
    Logger->>CloudWatch: Emit error log
    Logger->>CloudWatch: Emit buffered debug logs
    Lambda->>Client: Raise exception
```
Flushing buffer when an error happens
Flushing on exception
This works when using the @Logging annotation, which automatically clears the buffer at the end of method execution.
```mermaid
sequenceDiagram
    participant Client
    participant Lambda
    participant Logger
    participant CloudWatch
    Client->>Lambda: Invoke Lambda
    Lambda->>Logger: Using @Logging annotation
    Logger-->>Lambda: Logger context injected
    Lambda->>Logger: logger.debug("First log")
    Logger-->>Logger: Buffer first debug log
    Lambda->>Logger: logger.debug("Second log")
    Logger-->>Logger: Buffer second debug log
    Lambda->>Lambda: Uncaught Exception
    Lambda->>CloudWatch: Automatically emit buffered debug logs
    Lambda->>Client: Raise uncaught exception
```
Flushing buffer when an uncaught exception happens
Buffering FAQs
- Does the buffer persist across Lambda invocations? No, each Lambda invocation has its own buffer. The buffer is initialized when the Lambda function is invoked and is cleared after the function execution completes or when flushed manually.
- Are my logs buffered during cold starts (INIT phase)? No, we never buffer logs during cold starts. This is because we want to ensure that logs emitted during this phase are always available for debugging and monitoring purposes. The buffer is only used during the execution of the Lambda function.
- How can I prevent log buffering from consuming excessive memory? You can limit the size of the buffer by setting the maxBytes option in the BufferingAppender configuration. This will ensure that the buffer does not grow indefinitely.
- What happens if the log buffer reaches its maximum size? Older logs are removed from the buffer to make room for new logs. This means that if the buffer is full, you may lose some logs if they are not flushed before the buffer reaches its maximum size. When this happens, we emit a warning when flushing the buffer to indicate that some logs have been dropped.
- How is the size of a log line calculated? The log size is calculated based on the size of the log line in bytes. This includes the size of the log message, any exception (if present), the log line location, additional keys, and the timestamp.
- What timestamp is used when I flush the logs? The timestamp is the original time when the log record was created. If you create a log record at 11:00:10 and flush it at 11:00:25, the log line will retain its original timestamp of 11:00:10.
- What happens if I try to add a log line that is bigger than the max buffer size? The log will be emitted directly to standard output and not buffered. When this happens, we emit a warning to indicate that the log line was too big to be buffered.
- What happens if Lambda times out without flushing the buffer? Logs that are still in the buffer will be lost.
- How does the BufferingAppender work with different appenders? The BufferingAppender is designed to wrap arbitrary appenders, providing maximum flexibility. You can wrap console appenders, file appenders, or any custom appenders with buffering functionality.
Sampling debug logs
You can dynamically set a percentage of your invocations to include DEBUG logs in the logger output, regardless of the configured log level, using the POWERTOOLS_LOGGER_SAMPLE_RATE environment variable or the samplingRate attribute on the @Logging annotation.
Info
Configuration via the environment variable takes precedence over the sampling rate configured on the annotation, provided it is in the valid value range.
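A fragment showing the annotation attribute (the handler body is illustrative and assumes an SLF4J LOGGER):

```java
// samplingRate is a ratio: 0.5 means DEBUG logs are emitted for ~50% of invocations
@Logging(samplingRate = 0.5)
public String handleRequest(String input, Context context) {
    LOGGER.debug("Only present in sampled invocations");
    return "ok";
}
```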
Built-in Correlation ID expressions
You can use any of the following built-in JMESPath expressions as part of @Logging(correlationIdPath = ...):
Note: Any object key containing - must be escaped. For example, request.headers."x-amzn-trace-id".
Name | Expression | Description |
---|---|---|
API_GATEWAY_REST | "requestContext.requestId" | API Gateway REST API request ID |
API_GATEWAY_HTTP | "requestContext.requestId" | API Gateway HTTP API request ID |
APPSYNC_RESOLVER | request.headers."x-amzn-trace-id" | AppSync X-Ray trace ID |
APPLICATION_LOAD_BALANCER | headers."x-amzn-trace-id" | ALB X-Ray trace ID |
EVENT_BRIDGE | "id" | EventBridge event ID |
Customising fields in logs
Powertools for AWS Lambda comes with a default JSON structure (standard fields & Lambda context fields).
You can go further and customise which fields you want to keep in your logs. The configuration varies according to the underlying logging library.
Log4j2 configuration
Log4j2 configuration is done in log4j2.xml and leverages JsonTemplateLayout:
The JsonTemplateLayout is automatically configured with the provided template (LambdaJsonLayout.json).
You can create your own template and leverage the PowertoolsResolver and any other resolver to log the desired fields with the desired format. Some examples of customization are given below:
Customising date format
By default, the utility emits the timestamp field in the format yyyy-MM-dd'T'HH:mm:ss.SSS'Z' and in the system default timezone.
If you need to customise the format and timezone, you can update your template.json or configure log4j2.component.properties, as shown in the examples below:
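A template.json fragment using the JsonTemplateLayout timestamp resolver (the format and timezone values are illustrative):

```json
{
  "timestamp": {
    "$resolver": "timestamp",
    "pattern": {
      "format": "yyyy-MM-dd'T'HH:mm:ss.SSS",
      "timeZone": "Europe/Paris"
    }
  }
}
```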
See the TimestampResolver documentation for more details.
Lambda Advanced Logging Controls date format
When using Lambda ALC, you must have a date format compatible with RFC3339.
More customization
You can also customize how exceptions are logged, and much more. See the JSON Layout template documentation for more details.
Logback configuration
Logback configuration is done in logback.xml with the LambdaJsonEncoder:
The LambdaJsonEncoder can be customized in different ways:
Customising date format
By default, the utility emits the timestamp field in the format yyyy-MM-dd'T'HH:mm:ss.SSS'Z' and in the system default timezone.
If you need to customise the format and timezone, you can use the following:
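A logback.xml fragment (the encoder's property names, timestampFormat and timestampFormatTimezoneId, are assumptions based on the encoder's setters; check your version):

```xml
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="software.amazon.lambda.powertools.logging.logback.LambdaJsonEncoder">
        <!-- property names assumed; values are illustrative -->
        <timestampFormat>yyyy-MM-dd'T'HH:mm:ss.SSS</timestampFormat>
        <timestampFormatTimezoneId>Europe/Paris</timestampFormatTimezoneId>
    </encoder>
</appender>
```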
More customization
- You can use a standard ThrowableHandlingConverter to customize the exception format (default is no converter). Example:
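A sketch using ShortenedThrowableConverter from the logstash-logback-encoder library (that library must be added as a dependency; the nested options shown are a subset of what it supports):

```xml
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="software.amazon.lambda.powertools.logging.logback.LambdaJsonEncoder">
        <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
            <maxDepthPerThrowable>30</maxDepthPerThrowable>
            <rootCauseFirst>true</rootCauseFirst>
        </throwableConverter>
    </encoder>
</appender>
```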
- You can choose to add information about threads (default is false):
- You can even choose to remove Powertools information from the logs, like function name and ARN:
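A sketch covering both options (the property names includeThreadInfo and includePowertoolsInfo are assumptions based on the encoder's documented options):

```xml
<encoder class="software.amazon.lambda.powertools.logging.logback.LambdaJsonEncoder">
    <!-- adds thread name/id fields to each log line -->
    <includeThreadInfo>true</includeThreadInfo>
    <!-- drops Powertools fields such as function name and ARN -->
    <includePowertoolsInfo>false</includePowertoolsInfo>
</encoder>
```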
Elastic Common Schema (ECS) Support
The utility also supports the Elastic Common Schema (ECS) format. The fields emitted in logs will follow the ECS specification, together with the fields captured by the utility as mentioned above.
Log4j2 configuration
Use LambdaEcsLayout.json as the eventTemplateUri when configuring JsonTemplateLayout:
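A log4j2.xml sketch for the ECS template (same shape as the default configuration, swapping only the template URI):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
    <Appenders>
        <Console name="JsonAppender" target="SYSTEM_OUT">
            <!-- ECS-formatted template shipped with powertools -->
            <JsonTemplateLayout eventTemplateUri="classpath:LambdaEcsLayout.json"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="INFO">
            <AppenderRef ref="JsonAppender"/>
        </Root>
    </Loggers>
</Configuration>
```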
Logback configuration
Use the LambdaEcsEncoder rather than the LambdaJsonEncoder when configuring the appender:
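A logback.xml sketch (the encoder's fully-qualified class name is an assumption based on the LambdaJsonEncoder package):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="software.amazon.lambda.powertools.logging.logback.LambdaEcsEncoder"/>
    </appender>
    <root level="INFO">
        <appender-ref ref="console"/>
    </root>
</configuration>
```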