
Logging provides an opinionated logger with output structured as JSON.

Key features

  • Leverages standard logging libraries: SLF4J as the API, and log4j2 or logback for the implementation
  • Captures key fields from the Lambda context and cold start, and structures logging output as JSON
  • Optionally logs Lambda request
  • Optionally logs Lambda response
  • Optionally supports log sampling by including a configurable percentage of DEBUG logs in logging output
  • Allows additional keys to be appended to the structured log at any point in time

Getting started

Tip

You can find complete examples in the project repository.

Installation

Depending on your preference, you must choose either log4j2 or logback as your log provider. In both cases, you need to configure AspectJ to weave the code and make sure the annotation is processed.

Maven

log4j2
<dependencies>
    ...
    <dependency>
        <groupId>software.amazon.lambda</groupId>
        <artifactId>powertools-logging-log4j</artifactId>
        <version>2.0.0-SNAPSHOT</version>
    </dependency>
    ...
</dependencies>
...
<!-- configure the aspectj-maven-plugin to compile-time weave (CTW) the aws-lambda-powertools-java aspects into your project -->
<build>
    <plugins>
        ...
        <plugin>
             <groupId>dev.aspectj</groupId>
             <artifactId>aspectj-maven-plugin</artifactId>
             <version>1.13.1</version>
             <configuration>
                 <source>11</source> <!-- or higher -->
                 <target>11</target> <!-- or higher -->
                 <complianceLevel>11</complianceLevel> <!-- or higher -->
                 <aspectLibraries>
                     <aspectLibrary>
                         <groupId>software.amazon.lambda</groupId>
                         <artifactId>powertools-logging</artifactId>
                     </aspectLibrary>
                 </aspectLibraries>
             </configuration>
             <executions>
                 <execution>
                     <goals>
                         <goal>compile</goal>
                     </goals>
                 </execution>
             </executions>
        </plugin>
        ...
    </plugins>
</build>

logback
<dependencies>
    ...
    <dependency>
        <groupId>software.amazon.lambda</groupId>
        <artifactId>powertools-logging-logback</artifactId>
        <version>2.0.0-SNAPSHOT</version>
    </dependency>
    ...
</dependencies>
...
<!-- configure the aspectj-maven-plugin to compile-time weave (CTW) the aws-lambda-powertools-java aspects into your project -->
<build>
    <plugins>
        ...
        <plugin>
             <groupId>dev.aspectj</groupId>
             <artifactId>aspectj-maven-plugin</artifactId>
             <version>1.13.1</version>
             <configuration>
                 <source>11</source> <!-- or higher -->
                 <target>11</target> <!-- or higher -->
                 <complianceLevel>11</complianceLevel> <!-- or higher -->
                 <aspectLibraries>
                     <aspectLibrary>
                         <groupId>software.amazon.lambda</groupId>
                         <artifactId>powertools-logging</artifactId>
                     </aspectLibrary>
                 </aspectLibraries>
             </configuration>
             <executions>
                 <execution>
                     <goals>
                         <goal>compile</goal>
                     </goals>
                 </execution>
             </executions>
        </plugin>
        ...
    </plugins>
</build>

Gradle

log4j2
    plugins {
        id 'java'
        id 'io.freefair.aspectj.post-compile-weaving' version '8.1.0'
    }

    repositories {
        mavenCentral()
    }

    dependencies {
        aspect 'software.amazon.lambda:powertools-logging-log4j:2.0.0-SNAPSHOT'
    }

    sourceCompatibility = 11
    targetCompatibility = 11

logback
    plugins {
        id 'java'
        id 'io.freefair.aspectj.post-compile-weaving' version '8.1.0'
    }

    repositories {
        mavenCentral()
    }

    dependencies {
        aspect 'software.amazon.lambda:powertools-logging-logback:2.0.0-SNAPSHOT'
    }

    sourceCompatibility = 11
    targetCompatibility = 11

Configuration

Main environment variables

The logging module requires two settings:

| Environment variable | Setting | Description |
| --- | --- | --- |
| POWERTOOLS_LOG_LEVEL | Logging level | Sets how verbose the logger should be. If not set, the level from the logging configuration is used |
| POWERTOOLS_SERVICE_NAME | Service | Sets the service key that will be included in all log statements (default is service_undefined) |

Here is an example using AWS Serverless Application Model (SAM):

Resources:
  PaymentFunction:
    Type: AWS::Serverless::Function
    Properties:
      MemorySize: 512
      Timeout: 20
      Runtime: java17
      Environment:
        Variables:
          POWERTOOLS_LOG_LEVEL: WARN
          POWERTOOLS_SERVICE_NAME: payment

There are some other environment variables which can be set to modify Logging's settings at a global scope:

| Environment variable | Type | Description |
| --- | --- | --- |
| POWERTOOLS_LOGGER_SAMPLE_RATE | float | Configures the sampling rate at which DEBUG logs should be included. See sampling rate |
| POWERTOOLS_LOGGER_LOG_EVENT | boolean | Specifies if the incoming Lambda event should be logged. See Logging event |
| POWERTOOLS_LOGGER_LOG_RESPONSE | boolean | Specifies if the Lambda response should be logged. See Logging response |
| POWERTOOLS_LOGGER_LOG_ERROR | boolean | Specifies if an uncaught exception thrown by the handler should be logged. See Logging exception |
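
For example, a minimal SAM sketch enabling event logging and sampling for a single function (variable names taken from the table above; values are illustrative):

Resources:
  PaymentFunction:
    Type: AWS::Serverless::Function
    Properties:
      ...
      Environment:
        Variables:
          POWERTOOLS_SERVICE_NAME: payment
          POWERTOOLS_LOGGER_SAMPLE_RATE: 0.1
          POWERTOOLS_LOGGER_LOG_EVENT: true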

Logging configuration

Powertools for AWS Lambda (Java) simply extends the functionality of the underlying library you choose (log4j2 or logback). You can leverage the standard configuration files (log4j2.xml or logback.xml):

With log4j2, we leverage the JsonTemplateLayout to provide structured logging. A default template is provided in powertools (LambdaJsonLayout.json):

<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
    <Appenders>
        <Console name="JsonAppender" target="SYSTEM_OUT">
            <JsonTemplateLayout eventTemplateUri="classpath:LambdaJsonLayout.json" />
        </Console>
    </Appenders>
    <Loggers>
        <Logger name="com.example" level="debug" additivity="false">
            <AppenderRef ref="JsonAppender"/>
        </Logger>
        <Root level="info">
            <AppenderRef ref="JsonAppender"/>
        </Root>
    </Loggers>
</Configuration>

With logback, we leverage a custom Encoder to provide structured logging:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="software.amazon.lambda.powertools.logging.logback.LambdaJsonEncoder">
        </encoder>
    </appender>
    <logger name="com.example" level="DEBUG" additivity="false">
        <appender-ref ref="console" />
    </logger>
    <root level="INFO">
        <appender-ref ref="console" />
    </root>
</configuration>

Log level

Log level is generally configured in log4j2.xml or logback.xml, but that level is static and requires a redeployment of the function to change. Powertools for AWS Lambda lets you change this level dynamically via the POWERTOOLS_LOG_LEVEL environment variable.

We support the following log levels (SLF4J levels): TRACE, DEBUG, INFO, WARN, ERROR. If the level is set to CRITICAL (supported in log4j but not logback), we revert it back to ERROR. If the level is set to any other value, we set it to the default value (INFO).

AWS Lambda Advanced Logging Controls (ALC)

When is it useful?

When you want to set a logging policy to drop informational or verbose logs for one or all AWS Lambda functions, regardless of runtime and logger used.

With AWS Lambda Advanced Logging Controls (ALC), you can enforce a minimum log level that Lambda will accept from your application code.

When enabled, you should keep Powertools and ALC log level in sync to avoid data loss.
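
For instance, a minimal SAM sketch keeping the two levels aligned (a sketch, assuming your template uses the Lambda LoggingConfig property; values are illustrative):

Resources:
  PaymentFunction:
    Type: AWS::Serverless::Function
    Properties:
      ...
      LoggingConfig:
        LogFormat: JSON
        ApplicationLogLevel: WARN   # ALC minimum level accepted by Lambda
      Environment:
        Variables:
          POWERTOOLS_LOG_LEVEL: WARN # keep Powertools in sync with ALC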

Here's a sequence diagram to demonstrate how ALC will drop both INFO and DEBUG logs emitted from Logger, when ALC log level is stricter than Logger.

sequenceDiagram
    participant Lambda service
    participant Lambda function
    participant Application Logger

    Note over Lambda service: AWS_LAMBDA_LOG_LEVEL="WARN"
    Note over Application Logger: POWERTOOLS_LOG_LEVEL="DEBUG"

    Lambda service->>Lambda function: Invoke (event)
    Lambda function->>Lambda function: Calls handler
    Lambda function->>Application Logger: logger.error("Something happened")
    Lambda function-->>Application Logger: logger.debug("Something happened")
    Lambda function-->>Application Logger: logger.info("Something happened")
    Lambda service--xLambda service: DROP INFO and DEBUG logs
    Lambda service->>CloudWatch Logs: Ingest error logs

Priority of log level settings in Powertools for AWS Lambda

We prioritise log level settings in this order:

  1. AWS_LAMBDA_LOG_LEVEL environment variable
  2. POWERTOOLS_LOG_LEVEL environment variable
  3. level defined in the log4j2.xml or logback.xml files

If you set Powertools level lower than ALC, we will emit a warning informing you that your messages will be discarded by Lambda.

NOTE

With ALC enabled, we cannot set a minimum log level lower than the AWS_LAMBDA_LOG_LEVEL environment variable value; see the AWS Lambda service documentation for more details.

Basic Usage

To use Powertools for AWS Lambda (Java) Logging, add the @Logging annotation to your code and use the standard SLF4J logger:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import software.amazon.lambda.powertools.logging.Logging;
// ... other imports

public class PaymentFunction implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    private static final Logger LOGGER = LoggerFactory.getLogger(PaymentFunction.class);

    @Logging
    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        LOGGER.info("Collecting payment");
        // ...
        LOGGER.debug("order={}, amount={}", order.getId(), order.getAmount());
        // ...
    }
}

Standard structured keys

Your logs will always include the following keys in your structured logging:

| Key | Type | Example | Description |
| --- | --- | --- | --- |
| timestamp | String | "2023-12-01T14:49:19.293Z" | Timestamp of the log statement; by default uses the AWS Lambda timezone (UTC) |
| level | String | "INFO" | Logging level (any level supported by SLF4J: TRACE, DEBUG, INFO, WARN, ERROR) |
| service | String | "payment" | Service name; defaults to service_undefined |
| sampling_rate | float | 0.1 | Debug logging sampling rate, e.g. 10% in this case (logged only if not 0) |
| message | String | "Collecting payment" | Log statement value. Unserializable JSON values are cast to string |
| xray_trace_id | String | "1-5759e988-bd862e3fe1be46a994272793" | X-Ray Trace ID when Tracing is enabled |
| error | Map | { "name": "InvalidAmountException", "message": "Amount must be superior to 0", "stack": "at..." } | The exception, if any (e.g. when calling logger.error("Error", new InvalidAmountException("Amount must be superior to 0"));) |
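
For instance, the error key is populated when an exception is passed as the last argument of a log statement (a minimal sketch; InvalidAmountException is an application-defined exception):

try {
    // ...
} catch (InvalidAmountException e) {
    // produces an "error" map with "name", "message" and "stack" fields in the JSON output
    LOGGER.error("Error", e);
}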

Additional structured keys

Logging Lambda context information

The following keys will also be added to all your structured logs (unless configured otherwise):

| Key | Type | Example | Description |
| --- | --- | --- | --- |
| cold_start | Boolean | false | ColdStart value |
| function_name | String | "example-PaymentFunction-1P1Z6B39FLU73" | Name of the function |
| function_version | String | "12" | Version of the function |
| function_memory_size | String | "512" | Memory configured for the function |
| function_arn | String | "arn:aws:lambda:eu-west-1:012345678910:function:example-PaymentFunction-1P1Z6B39FLU73" | ARN of the function |
| function_request_id | String | "899856cb-83d1-40d7-8611-9e78f15f32f4" | AWS Request ID from the Lambda context |

Logging additional keys

Logging a correlation ID

You can set a correlation ID using the correlationIdPath attribute of the @Logging annotation, by passing a JMESPath expression, including our custom JMESPath functions.

public class AppCorrelationIdPath implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    private static final Logger LOGGER = LoggerFactory.getLogger(AppCorrelationIdPath.class);

    @Logging(correlationIdPath = "headers.my_request_id_header")
    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        // ...
        LOGGER.info("Collecting payment")
        // ...
    }
}

Example event
{
  "headers": {
    "my_request_id_header": "correlation_id_value"
  }
}

Resulting log
{
    "level": "INFO",
    "message": "Collecting payment",
    "timestamp": "2023-12-01T14:49:19.293Z",
    "service": "payment",
    "correlation_id": "correlation_id_value"
}

Known correlation IDs

To ease routine tasks like extracting correlation ID from popular event sources, we provide built-in JMESPath expressions.

import software.amazon.lambda.powertools.logging.CorrelationIdPaths;

public class AppCorrelationId implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    private static final Logger LOGGER = LoggerFactory.getLogger(AppCorrelationId.class);

    @Logging(correlationIdPath = CorrelationIdPaths.API_GATEWAY_REST)
    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        // ...
        LOGGER.info("Collecting payment")
        // ...
    }
}

Example event
{
    "requestContext": {
        "requestId": "correlation_id_value"
    }
}

Resulting log
{
    "level": "INFO",
    "message": "Collecting payment",
    "timestamp": "2023-12-01T14:49:19.293Z",
    "service": "payment",
    "correlation_id": "correlation_id_value"
}

Custom keys

Using StructuredArguments

To append additional keys in your logs, you can use the StructuredArguments class:

import static software.amazon.lambda.powertools.logging.argument.StructuredArguments.entry;
import static software.amazon.lambda.powertools.logging.argument.StructuredArguments.entries;

public class PaymentFunction implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    private static final Logger LOGGER = LoggerFactory.getLogger(PaymentFunction.class);

    @Logging
    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        // ...
        LOGGER.info("Collecting payment", entry("orderId", order.getId()));

        // ...
        Map<String, String> customKeys = new HashMap<>();
        customKeys.put("paymentId", payment.getId());
        customKeys.put("amount", payment.getAmount);
        LOGGER.info("Payment successful", entries(customKeys));
    }
}
{
    "level": "INFO",
    "message": "Collecting payment",
    "service": "payment",
    "timestamp": "2023-12-01T14:49:19.293Z",
    "xray_trace_id": "1-6569f266-4b0c7f97280dcd8428d3c9b5",
    "orderId": "41376"
}
...
{
    "level": "INFO",
    "message": "Payment successful",
    "service": "payment",
    "timestamp": "2023-12-01T14:49:20.118Z",
    "xray_trace_id": "1-6569f266-4b0c7f97280dcd8428d3c9b5",
    "orderId": "41376",
    "paymentId": "3245",
    "amount": 345.99
}

StructuredArguments provides several options:

  • entry to add one key and value into the log structure. Note that value can be any object type.
  • entries to add multiple keys and values (from a Map) into the log structure. Note that values can be any object type.
  • json to add a key and raw JSON (string) as value into the log structure (see the sketch after the example below).
  • array to add one key and multiple values into the log structure. Note that values can be any object type.
import static software.amazon.lambda.powertools.logging.argument.StructuredArguments.entry;
import static software.amazon.lambda.powertools.logging.argument.StructuredArguments.array;

public class OrderFunction implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    private static final Logger LOGGER = LoggerFactory.getLogger(OrderFunction.class);

    @Logging
    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        // ...
        LOGGER.info("Processing order", entry("order", order), array("products", productList));
        // ...
    }
}
{
    "level": "INFO",
    "message": "Processing order",
    "service": "payment",
    "timestamp": "2023-12-01T14:49:19.293Z",
    "xray_trace_id": "1-6569f266-4b0c7f97280dcd8428d3c9b5",
    "order": {
        "orderId": 23542,
        "amount": 459.99,
        "date": "2023-12-01T14:49:19.018Z",
        "customerId": 328496
    },
    "products": [
        {
            "productId": 764330,
            "name": "product1",
            "quantity": 1,
            "price": 300
        },
        {
            "productId": 798034,
            "name": "product42",
            "quantity": 1,
            "price": 159.99
        }
    ]
}
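
The json option is not shown in the example above; here is a minimal sketch (the key name and payload string are illustrative), assuming the value is an already-serialized JSON string as described in the option list:

import static software.amazon.lambda.powertools.logging.argument.StructuredArguments.json;

// ... inside the handler method
String rawAddress = "{\"city\":\"Paris\",\"zipCode\":\"75000\"}"; // raw JSON string
LOGGER.info("Shipping order", json("address", rawAddress));
// the "address" key is added to the structured log as a JSON object, not as an escaped string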
Use arguments without log placeholders

As shown in the example above, you can use arguments (with StructuredArguments) without placeholders ({}) in the message. If you add the placeholders, the arguments will be logged both as an additional field and also as a string in the log message, using the toString() method.

LOGGER.info("Processing {}", entry("order", order));
public class Order {
    // ...        

    @Override
    public String toString() {
        return "Order{" +
                "orderId=" + id +
                ", amount=" + amount +
                ", date='" + date + '\'' +
                ", customerId=" + customerId +
                '}';
    }
}
{
    "level": "INFO",
    "message": "Processing order=Order{orderId=23542, amount=459.99, date='2023-12-01T14:49:19.018Z', customerId=328496}",
    "service": "payment",
    "timestamp": "2023-12-01T14:49:19.293Z",
    "xray_trace_id": "1-6569f266-4b0c7f97280dcd8428d3c9b5",
    "order": {
        "orderId": 23542,
        "amount": 459.99,
        "date": "2023-12-01T14:49:19.018Z",
        "customerId": 328496
    }
}

You can also combine structured arguments with non-structured ones. For example:

LOGGER.info("Processing order {}", order.getOrderId(), entry("order", order));
{
    "level": "INFO",
    "message": "Processing order 23542",
    "service": "payment",
    "timestamp": "2023-12-01T14:49:19.293Z",
    "xray_trace_id": "1-6569f266-4b0c7f97280dcd8428d3c9b5",
    "order": {
        "orderId": 23542,
        "amount": 459.99,
        "date": "2023-12-01T14:49:19.018Z",
        "customerId": 328496
    }
}

Using MDC

Mapped Diagnostic Context (MDC) is essentially a key-value store. It is supported by the SLF4J API, logback, and log4j2 (where it is known as ThreadContext). You can use the standard MDC API:

MDC.put("key", "value");

Custom keys stored in the MDC are persisted across warm invocations

Always set additional keys as part of your handler method to ensure they have the latest value, or explicitly clear them with clearState=true.

Removing additional keys

You can remove additional keys added with the MDC using MDC.remove("key").
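
For example, a minimal sketch adding a key for part of the handler and removing it afterwards (the key name is illustrative):

import org.slf4j.MDC;

// ... inside the handler method
MDC.put("orderId", order.getId());
LOGGER.info("Order received"); // "orderId" is appended to this and subsequent log statements
// ...
MDC.remove("orderId");         // stop appending "orderId" from this point on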

Clearing state

Logger is commonly initialized in the global scope. Due to Lambda Execution Context reuse, custom keys added with the MDC can persist across invocations. If you want all custom keys to be deleted, you can use the clearState=true attribute on the @Logging annotation.

public class CreditCardFunction implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    private static final Logger LOGGER = LoggerFactory.getLogger(CreditCardFunction.class);

    @Logging(clearState = true)
    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        // ...
        MDC.put("cardNumber", card.getId());
        LOGGER.info("Updating card information");
        // ...
    }
}

First invocation
{
  "level": "INFO",
  "message": "Updating card information",
  "service": "card",
  "timestamp": "2023-12-01T14:49:19.293Z",
  "xray_trace_id": "1-6569f266-4b0c7f97280dcd8428d3c9b5",
  "cardNumber": "6818 8419 9395 5322"
}

Second invocation
{
  "level": "INFO",
  "message": "Updating card information",
  "service": "card",
  "timestamp": "2023-12-01T14:49:20.213Z",
  "xray_trace_id": "2-7a518f43-5e9d2b1f6cfd5e8b3a4e1f9c",
  "cardNumber": "7201 6897 6685 3285"
}

clearState is based on MDC.clear(). When set to true, the state is cleared automatically at the end of the handler execution.

Logging incoming event

When debugging in non-production environments, you can instruct the @Logging annotation to log the incoming event with logEvent param or via POWERTOOLS_LOGGER_LOG_EVENT env var.

Warning

This is disabled by default to prevent sensitive info being logged

public class AppLogEvent implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    private static final Logger LOGGER = LoggerFactory.getLogger(AppLogEvent.class);

    @Logging(logEvent = true)
    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        // ...
    }
}
Note

If you use this on a RequestStreamHandler, Powertools must duplicate input streams in order to log them.

Logging handler response

When debugging in non-production environments, you can instruct the @Logging annotation to log the response with logResponse param or via POWERTOOLS_LOGGER_LOG_RESPONSE env var.

Warning

This is disabled by default to prevent sensitive info being logged

public class AppLogResponse implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    private static final Logger LOGGER = LoggerFactory.getLogger(AppLogResponse.class);

    @Logging(logResponse = true)
    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        // ...
    }
}
Note

If you use this on a RequestStreamHandler, Powertools must duplicate output streams in order to log them.

Logging handler uncaught exception

By default, AWS Lambda logs any uncaught exception that might happen in the handler. However, this log is not structured and does not contain any additional context. You can instruct the @Logging annotation to log this kind of exception with logError param or via POWERTOOLS_LOGGER_LOG_ERROR env var.

Warning

This is disabled by default to prevent double logging

public class AppLogError implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    private static final Logger LOGGER = LoggerFactory.getLogger(AppLogError.class);

    @Logging(logError = true)
    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        // ...
    }
}

Advanced

Sampling debug logs

You can dynamically include a percentage of your DEBUG logs in the logger output, regardless of the configured log level, using the POWERTOOLS_LOGGER_SAMPLE_RATE environment variable or the samplingRate attribute on the @Logging annotation.

Info

The environment variable takes precedence over the sampling rate configured on the annotation, provided its value is in the valid range.

public class App implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    private static final Logger LOGGER = LoggerFactory.getLogger(App.class);

    @Logging(samplingRate = 0.5)
    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        // may or may not be logged, depending on the sampling rate
        LOGGER.debug("Handle payment");
    }
}
Resources:
  PaymentFunction:
    Type: AWS::Serverless::Function
    Properties:
      ...
      Environment:
        Variables:
          POWERTOOLS_LOGGER_SAMPLE_RATE: 0.5

Built-in Correlation ID expressions

You can use any of the following built-in JMESPath expressions as part of @Logging(correlationIdPath = ...):

Note: Any object key containing a hyphen (-) must be escaped with double quotes.

For example, request.headers."x-amzn-trace-id".

| Name | Expression | Description |
| --- | --- | --- |
| API_GATEWAY_REST | "requestContext.requestId" | API Gateway REST API request ID |
| API_GATEWAY_HTTP | "requestContext.requestId" | API Gateway HTTP API request ID |
| APPSYNC_RESOLVER | request.headers."x-amzn-trace-id" | AppSync X-Ray Trace ID |
| APPLICATION_LOAD_BALANCER | headers."x-amzn-trace-id" | ALB X-Ray Trace ID |
| EVENT_BRIDGE | "id" | EventBridge Event ID |
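
For example, a minimal sketch using the ALB expression written out manually, with the hyphenated header name escaped (handler types are from aws-lambda-java-events; the rest of the handler is elided):

@Logging(correlationIdPath = "headers.\"x-amzn-trace-id\"")
public ApplicationLoadBalancerResponseEvent handleRequest(final ApplicationLoadBalancerRequestEvent input, final Context context) {
    // ... the X-Ray trace ID from the ALB headers is set as correlation_id
}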

Customising fields in logs

Powertools for AWS Lambda comes with a default JSON structure (standard fields & Lambda context fields).

You can go further and customize which fields you want to keep in your logs. The configuration varies according to the underlying logging library.

Log4j2 configuration

Log4j2 configuration is done in log4j2.xml and leverages JsonTemplateLayout:

    <Console name="console" target="SYSTEM_OUT">
        <JsonTemplateLayout eventTemplateUri="classpath:LambdaJsonLayout.json" />
    </Console>

The JsonTemplateLayout is automatically configured with the provided template:

LambdaJsonLayout.json
{
    "level": {
        "$resolver": "level",
        "field": "name"
    },
    "message": {
        "$resolver": "powertools",
        "field": "message"
    },
    "error": {
        "message": {
            "$resolver": "exception",
            "field": "message"
        },
        "name": {
            "$resolver": "exception",
            "field": "className"
        },
        "stack": {
            "$resolver": "exception",
            "field": "stackTrace",
            "stackTrace": {
                "stringified": true
            }
        }
    },
    "cold_start": {
        "$resolver": "powertools",
        "field": "cold_start"
    },
    "function_arn": {
        "$resolver": "powertools",
        "field": "function_arn"
    },
    "function_memory_size": {
        "$resolver": "powertools",
        "field": "function_memory_size"
    },
    "function_name": {
        "$resolver": "powertools",
        "field": "function_name"
    },
    "function_request_id": {
        "$resolver": "powertools",
        "field": "function_request_id"
    },
    "function_version": {
        "$resolver": "powertools",
        "field": "function_version"
    },
    "sampling_rate": {
        "$resolver": "powertools",
        "field": "sampling_rate"
    },
    "service": {
        "$resolver": "powertools",
        "field": "service"
    },
    "timestamp": {
        "$resolver": "timestamp",
        "pattern": {
            "format": "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"
        }
    },
    "xray_trace_id": {
        "$resolver": "powertools",
        "field": "xray_trace_id"
    },
    "": {
        "$resolver": "powertools"
    }
}

You can create your own template and leverage the PowertoolsResolver and any other resolver to log the desired fields with the desired format. Some examples of customization are given below:
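
As an illustration, a trimmed-down template (a hypothetical my-template.json, referenced with eventTemplateUri="classpath:my-template.json") keeping only a few fields could look like this, reusing the resolvers shown in the default template above:

{
    "level": {
        "$resolver": "level",
        "field": "name"
    },
    "message": {
        "$resolver": "powertools",
        "field": "message"
    },
    "service": {
        "$resolver": "powertools",
        "field": "service"
    },
    "timestamp": {
        "$resolver": "timestamp",
        "pattern": {
            "format": "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"
        }
    }
}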

Customising date format

By default, the utility emits the timestamp field in the format yyyy-MM-dd'T'HH:mm:ss.SSS'Z' and in the system default timezone. If you need to customize the format and timezone, you can update your template.json or configure log4j2.component.properties, as shown in the examples below:

template.json
{
    "timestamp": {
        "$resolver": "timestamp",
        "pattern": {
            "format": "yyyy-MM-dd HH:mm:ss",
            "timeZone": "Europe/Paris"
        }
    }
}

log4j2.component.properties
log4j.layout.jsonTemplate.timestampFormatPattern=yyyy-MM-dd'T'HH:mm:ss.SSSZz
log4j.layout.jsonTemplate.timeZone=Europe/Oslo

See TimestampResolver documentation for more details.

Lambda Advanced Logging Controls date format

When using Lambda ALC, you must use a date format compatible with RFC 3339.
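
For example, one pattern that stays RFC 3339 compliant (a sketch; UTC with an explicit offset, so timestamps render like 2023-12-01T14:49:19.293Z):

{
    "timestamp": {
        "$resolver": "timestamp",
        "pattern": {
            "format": "yyyy-MM-dd'T'HH:mm:ss.SSSXXX",
            "timeZone": "UTC"
        }
    }
}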

More customization

You can also customize how exceptions are logged, and much more. See the JSON Layout template documentation for more details.

Logback configuration

Logback configuration is done in logback.xml using the Powertools LambdaJsonEncoder:

    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="software.amazon.lambda.powertools.logging.logback.LambdaJsonEncoder">
        </encoder>
    </appender>

The LambdaJsonEncoder can be customized in different ways:

Customising date format

By default, the encoder emits the timestamp field in the format yyyy-MM-dd'T'HH:mm:ss.SSS'Z' and in the system default timezone. If you need to customize the format and timezone, you can use the following:

    <encoder class="software.amazon.lambda.powertools.logging.logback.LambdaJsonEncoder">
        <timestampFormat>yyyy-MM-dd HH:mm:ss</timestampFormat>
        <timestampFormatTimezoneId>Europe/Paris</timestampFormatTimezoneId>
    </encoder>

More customization

  • You can use a standard ThrowableHandlingConverter to customize the exception format (default is no converter). Example:
    <encoder class="software.amazon.lambda.powertools.logging.logback.LambdaJsonEncoder">
        <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
            <maxDepthPerThrowable>30</maxDepthPerThrowable>
            <maxLength>2048</maxLength>
            <shortenedClassNameLength>20</shortenedClassNameLength>
            <exclude>sun\.reflect\..*\.invoke.*</exclude>
            <exclude>net\.sf\.cglib\.proxy\.MethodProxy\.invoke</exclude>
            <evaluator class="myorg.MyCustomEvaluator"/>
            <rootCauseFirst>true</rootCauseFirst>
            <inlineHash>true</inlineHash>
        </throwableConverter>
    </encoder>
  • You can choose to add information about threads (default is false):
    <encoder class="software.amazon.lambda.powertools.logging.logback.LambdaJsonEncoder">
        <includeThreadInfo>true</includeThreadInfo>
    </encoder>
  • You can even choose to remove Powertools information from the logs, such as the function name and ARN:
    <encoder class="software.amazon.lambda.powertools.logging.logback.LambdaJsonEncoder">
        <includePowertoolsInfo>false</includePowertoolsInfo>
    </encoder>

Elastic Common Schema (ECS) Support

The utility also supports the Elastic Common Schema (ECS) format. The fields emitted in logs will follow the ECS specification, together with the fields captured by the utility as described above.

Log4j2 configuration

Use LambdaEcsLayout.json as eventTemplateUri when configuring JsonTemplateLayout.

<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
    <Appenders>
        <Console name="JsonAppender" target="SYSTEM_OUT">
            <JsonTemplateLayout eventTemplateUri="classpath:LambdaEcsLayout.json" />
        </Console>
    </Appenders>
    <Loggers>
        <Root level="info">
            <AppenderRef ref="JsonAppender"/>
        </Root>
    </Loggers>
</Configuration>

Logback configuration

Use the LambdaEcsEncoder rather than the LambdaJsonEncoder when configuring the appender:

<configuration>
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="software.amazon.lambda.powertools.logging.logback.LambdaEcsEncoder">
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="console" />
    </root>
</configuration>