
Metrics

Metrics creates custom metrics asynchronously by logging metrics to standard output following Amazon CloudWatch Embedded Metric Format (EMF).

These metrics can be visualized through the Amazon CloudWatch Console.

Key features

  • Aggregate up to 100 metrics using a single CloudWatch EMF object (large JSON blob)
  • Validate against common metric definition mistakes (metric unit, values, max dimensions, max metrics, etc.)
  • Metrics are created asynchronously by the CloudWatch service; no custom stacks needed
  • Context manager to create a one-off metric with a different dimension

Terminologies

If you're new to Amazon CloudWatch, there are two terms you must be aware of before using this utility:

  • Namespace. It's the highest level container that will group multiple metrics from multiple services for a given application, for example ServerlessEcommerce.
  • Dimensions. Metric metadata in key-value pairs. They help you slice and dice metric visualizations, for example the ColdStart metric by the Payment service.
Metric terminology, visually explained

Getting started

Tip

All examples shared in this documentation are available within the project repository.

Metrics has two global settings that will be used across all metrics emitted:

  • Metric namespace — logical container where all metrics will be placed, e.g. ServerlessAirline. Environment variable: POWERTOOLS_METRICS_NAMESPACE. Constructor parameter: namespace.
  • Service — optionally sets the service metric dimension across all metrics, e.g. payment. Environment variable: POWERTOOLS_SERVICE_NAME. Constructor parameter: service.
Tip

Use your application or main service as the metric namespace to easily group all metrics.

AWS Serverless Application Model (SAM) example
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: AWS Lambda Powertools Metrics doc examples

Globals:
  Function:
    Timeout: 5
    Runtime: python3.9
    Tracing: Active
    Environment:
      Variables:
        POWERTOOLS_SERVICE_NAME: booking
        POWERTOOLS_METRICS_NAMESPACE: ServerlessAirline

    Layers:
      # Find the latest Layer version in the official documentation
      # https://awslabs.github.io/aws-lambda-powertools-python/latest/#lambda-layer
      - !Sub arn:aws:lambda:${AWS::Region}:017000801446:layer:AWSLambdaPowertoolsPython:21

Resources:
  CaptureLambdaHandlerExample:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ../src
      Handler: capture_lambda_handler.handler
Note

For brevity, all code snippets in this page rely on the environment variables above being set.

This allows us to instantiate metrics = Metrics() instead of metrics = Metrics(service="booking", namespace="ServerlessAirline"), and so on.

Creating metrics

You can create metrics using the add_metric method, and you can create dimensions for all your aggregate metrics using the add_dimension method.

Tip

You can initialize Metrics in any other module too. It'll keep track of your aggregate metrics in memory to optimize costs (one blob instead of multiple).
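
As a minimal sketch (module and metric names are hypothetical), a single Metrics instance defined in a shared module can be imported wherever you add metrics:

# booking_metrics.py (hypothetical shared module)
from aws_lambda_powertools import Metrics

metrics = Metrics()  # one in-memory aggregate shared by every module that imports it


# handler.py
from aws_lambda_powertools.metrics import MetricUnit
from aws_lambda_powertools.utilities.typing import LambdaContext

from booking_metrics import metrics  # same instance, same aggregation


@metrics.log_metrics
def lambda_handler(event: dict, context: LambdaContext):
    metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)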

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit
from aws_lambda_powertools.utilities.typing import LambdaContext

metrics = Metrics()


@metrics.log_metrics  # ensures metrics are flushed upon request completion/failure
def lambda_handler(event: dict, context: LambdaContext):
    metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
import os

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit
from aws_lambda_powertools.utilities.typing import LambdaContext

STAGE = os.getenv("STAGE", "dev")
metrics = Metrics()


@metrics.log_metrics  # ensures metrics are flushed upon request completion/failure
def lambda_handler(event: dict, context: LambdaContext):
    metrics.add_dimension(name="environment", value=STAGE)
    metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
Tip: Autocomplete Metric Units

The MetricUnit enum facilitates finding a metric unit supported by CloudWatch. Alternatively, you can pass the value as a string if you already know it, e.g. unit="Count".
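
For instance, a minimal sketch showing both forms producing the same unit:

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics()

# Equivalent ways to declare the unit
metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
metrics.add_metric(name="SuccessfulBooking", unit="Count", value=1)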

Note: Metrics overflow

CloudWatch EMF supports a max of 100 metrics per batch. The Metrics utility will flush all metrics when the 100th metric is added. Subsequent metrics (101st onwards) will be aggregated into a new EMF object, for your convenience.
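
For illustration, a minimal sketch assuming more than 100 metrics are added in a single invocation; the first 100 are flushed automatically, and the remainder end up in a second EMF object when the decorator exits (metric names are illustrative):

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit
from aws_lambda_powertools.utilities.typing import LambdaContext

metrics = Metrics()


@metrics.log_metrics
def lambda_handler(event: dict, context: LambdaContext):
    # 150 distinct metrics: the utility flushes the first 100 automatically,
    # the remaining 50 go into a new EMF object flushed at the end
    for i in range(150):
        metrics.add_metric(name=f"Metric{i}", unit=MetricUnit.Count, value=1)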

Warning: Do not create metrics or dimensions outside the handler

Metrics or dimensions added in the global scope will only be added during cold start. Disregard this warning if that's the intended behavior.
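
A minimal sketch of the pattern this warning refers to (metric names are illustrative):

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit
from aws_lambda_powertools.utilities.typing import LambdaContext

metrics = Metrics()
# Runs once at import time, so only the cold start invocation includes this metric
metrics.add_metric(name="ImportTimeMetric", unit=MetricUnit.Count, value=1)


@metrics.log_metrics
def lambda_handler(event: dict, context: LambdaContext):
    # Prefer adding metrics and dimensions here, inside the handler
    metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)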

Adding multi-value metrics

You can call add_metric() with the same metric name multiple times. The values will be grouped together in a list.

import os

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit
from aws_lambda_powertools.utilities.typing import LambdaContext

STAGE = os.getenv("STAGE", "dev")
metrics = Metrics()


@metrics.log_metrics  # ensures metrics are flushed upon request completion/failure
def lambda_handler(event: dict, context: LambdaContext):
    metrics.add_dimension(name="environment", value=STAGE)
    metrics.add_metric(name="TurbineReads", unit=MetricUnit.Count, value=1)
    metrics.add_metric(name="TurbineReads", unit=MetricUnit.Count, value=8)
{
    "_aws": {
        "Timestamp": 1656685750622,
        "CloudWatchMetrics": [
            {
                "Namespace": "ServerlessAirline",
                "Dimensions": [
                    [
                        "environment",
                        "service"
                    ]
                ],
                "Metrics": [
                    {
                        "Name": "TurbineReads",
                        "Unit": "Count"
                    }
                ]
            }
        ]
    },
    "environment": "dev",
    "service": "booking",
    "TurbineReads": [
        1.0,
        8.0
    ]
}

Adding default dimensions

You can use the set_default_dimensions method, or the default_dimensions parameter in the log_metrics decorator, to persist dimensions across Lambda invocations.

If you'd like to remove them at some point, you can use the clear_default_dimensions method (see the sketch after the examples below).

import os

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit
from aws_lambda_powertools.utilities.typing import LambdaContext

STAGE = os.getenv("STAGE", "dev")
metrics = Metrics()
metrics.set_default_dimensions(environment=STAGE, another="one")


@metrics.log_metrics  # ensures metrics are flushed upon request completion/failure
def lambda_handler(event: dict, context: LambdaContext):
    metrics.add_metric(name="TurbineReads", unit=MetricUnit.Count, value=1)
    metrics.add_metric(name="TurbineReads", unit=MetricUnit.Count, value=8)
import os

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit
from aws_lambda_powertools.utilities.typing import LambdaContext

STAGE = os.getenv("STAGE", "dev")
metrics = Metrics()
DEFAULT_DIMENSIONS = {"environment": STAGE, "another": "one"}


# ensures metrics are flushed upon request completion/failure
@metrics.log_metrics(default_dimensions=DEFAULT_DIMENSIONS)
def lambda_handler(event: dict, context: LambdaContext):
    metrics.add_metric(name="TurbineReads", unit=MetricUnit.Count, value=1)
    metrics.add_metric(name="TurbineReads", unit=MetricUnit.Count, value=8)
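
A minimal sketch of removing previously set default dimensions with clear_default_dimensions:

import os

from aws_lambda_powertools import Metrics

STAGE = os.getenv("STAGE", "dev")
metrics = Metrics()

metrics.set_default_dimensions(environment=STAGE, another="one")
# ... when the defaults are no longer wanted, drop them so future metric sets
# are emitted without these dimensions
metrics.clear_default_dimensions()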

Flushing metrics

Once you finish adding all your metrics, you need to serialize and flush them to standard output. You can do that automatically with the log_metrics decorator.

This decorator validates, serializes, and flushes all your metrics. During validation, if no metrics are provided, a warning will be logged but no exception will be raised.

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit
from aws_lambda_powertools.utilities.typing import LambdaContext

metrics = Metrics()


@metrics.log_metrics  # ensures metrics are flushed upon request completion/failure
def lambda_handler(event: dict, context: LambdaContext):
    metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
{
    "_aws": {
        "Timestamp": 1656686788803,
        "CloudWatchMetrics": [
            {
                "Namespace": "ServerlessAirline",
                "Dimensions": [
                    [
                        "service"
                    ]
                ],
                "Metrics": [
                    {
                        "Name": "SuccessfulBooking",
                        "Unit": "Count"
                    }
                ]
            }
        ]
    },
    "service": "booking",
    "SuccessfulBooking": [
        1.0
    ]
}
Tip: Metric validation

If metrics are provided and any of the following criteria are not met, a SchemaValidationError exception will be raised:

  • Maximum of 29 user-defined dimensions
  • Namespace is set, and no more than one
  • Metric units must be supported by CloudWatch

Raising SchemaValidationError on empty metrics

If you want to ensure at least one metric is always emitted, you can pass raise_on_empty_metrics to the log_metrics decorator:

Raising SchemaValidationError exception if no metrics are added
from aws_lambda_powertools.metrics import Metrics
from aws_lambda_powertools.utilities.typing import LambdaContext

metrics = Metrics()


@metrics.log_metrics(raise_on_empty_metrics=True)
def lambda_handler(event: dict, context: LambdaContext):
    # no metrics being created will now raise SchemaValidationError
    ...
Suppressing warning messages on empty metrics

If you expect your function to execute without publishing metrics every time, you can suppress the warning with warnings.filterwarnings("ignore", "No metrics to publish*").
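
A minimal sketch of applying that filter at module level:

import warnings

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.utilities.typing import LambdaContext

warnings.filterwarnings("ignore", "No metrics to publish*")

metrics = Metrics()


@metrics.log_metrics
def lambda_handler(event: dict, context: LambdaContext):
    ...  # invocations that add no metrics no longer log the warning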

Capturing cold start metric

You can optionally capture cold start metrics with the log_metrics decorator via the capture_cold_start_metric parameter.

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.utilities.typing import LambdaContext

metrics = Metrics()


@metrics.log_metrics(capture_cold_start_metric=True)
def lambda_handler(event: dict, context: LambdaContext):
    ...
{
    "_aws": {
        "Timestamp": 1656687493142,
        "CloudWatchMetrics": [
            {
                "Namespace": "ServerlessAirline",
                "Dimensions": [
                    [
                        "function_name",
                        "service"
                    ]
                ],
                "Metrics": [
                    {
                        "Name": "ColdStart",
                        "Unit": "Count"
                    }
                ]
            }
        ]
    },
    "function_name": "test",
    "service": "booking",
    "ColdStart": [
        1.0
    ]
}

If it's a cold start invocation, this feature will:

  • Create a separate EMF blob solely containing a metric named ColdStart
  • Add function_name and service dimensions

This has the advantage of keeping the ColdStart metric separate from your application metrics, which might have unrelated dimensions.

Info

We do not emit 0 as a value for the ColdStart metric for cost reasons. Let us know if you'd prefer a flag to override this behavior.

Advanced

Adding metadata

You can add high-cardinality data as part of your Metrics log with the add_metadata method. This is useful when you want to search highly contextual information along with your metrics in your logs.

Info

This data will not be available for metrics visualization; use dimensions for that purpose.

from uuid import uuid4

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit
from aws_lambda_powertools.utilities.typing import LambdaContext

metrics = Metrics()


@metrics.log_metrics
def lambda_handler(event: dict, context: LambdaContext):
    metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
    metrics.add_metadata(key="booking_id", value=f"{uuid4()}")
{
    "_aws": {
        "Timestamp": 1656688250155,
        "CloudWatchMetrics": [
            {
                "Namespace": "ServerlessAirline",
                "Dimensions": [
                    [
                        "service"
                    ]
                ],
                "Metrics": [
                    {
                        "Name": "SuccessfulBooking",
                        "Unit": "Count"
                    }
                ]
            }
        ]
    },
    "service": "booking",
    "booking_id": "00347014-341d-4b8e-8421-a89d3d588ab3",
    "SuccessfulBooking": [
        1.0
    ]
}

Single metric with a different dimension

CloudWatch EMF uses the same dimensions across all your metrics. Use single_metric if you have a metric that should have different dimensions.

Info

Generally, this would be an edge case since you pay per unique metric. Keep the following formula in mind:

unique metric = (metric_name + dimension_name + dimension_value)

For example, MySingleMetric with an environment dimension taking the values dev and prod counts as two unique metrics.

import os

from aws_lambda_powertools import single_metric
from aws_lambda_powertools.metrics import MetricUnit
from aws_lambda_powertools.utilities.typing import LambdaContext

STAGE = os.getenv("STAGE", "dev")


def lambda_handler(event: dict, context: LambdaContext):
    with single_metric(name="MySingleMetric", unit=MetricUnit.Count, value=1) as metric:
        metric.add_dimension(name="environment", value=STAGE)
{
    "_aws": {
        "Timestamp": 1656689267834,
        "CloudWatchMetrics": [
            {
                "Namespace": "ServerlessAirline",
                "Dimensions": [
                    [
                        "environment",
                        "service"
                    ]
                ],
                "Metrics": [
                    {
                        "Name": "MySingleMetric",
                        "Unit": "Count"
                    }
                ]
            }
        ]
    },
    "environment": "dev",
    "service": "booking",
    "MySingleMetric": [
        1.0
    ]
}

Flushing metrics manually

If you prefer not to use log_metrics because you might want to encapsulate additional logic when doing so, you can manually flush and clear metrics as follows:

Warning

Metrics, dimensions and namespace validation still applies

Manually flushing and clearing metrics from memory
import json
import os

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit
from aws_lambda_powertools.utilities.typing import LambdaContext

STAGE = os.getenv("STAGE", "dev")
metrics = Metrics()


def lambda_handler(event: dict, context: LambdaContext):
    metrics.add_dimension(name="environment", value=STAGE)
    metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)

    your_metrics_object = metrics.serialize_metric_set()  # validate and serialize to EMF
    metrics.clear_metrics()  # clear the in-memory metric set
    print(json.dumps(your_metrics_object))  # flush to standard output

Testing your code

Environment variables

Tip

Ignore this section if:

  • You are explicitly setting namespace/default dimension via namespace and service parameters
  • You're not instantiating Metrics in the global namespace

For example, Metrics(namespace="ServerlessAirline", service="booking")

Make sure to set POWERTOOLS_METRICS_NAMESPACE and POWERTOOLS_SERVICE_NAME before running your tests to prevent failing with a SchemaValidationError exception. You can set them before you run tests or via pytest plugins like dotenv.

Injecting dummy Metric Namespace before running tests
POWERTOOLS_SERVICE_NAME="booking" POWERTOOLS_METRICS_NAMESPACE="ServerlessAirline" python -m pytest
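
Alternatively, a minimal sketch of setting them in a conftest.py so they are in place before your handler module is imported (values are examples):

# conftest.py
import os

os.environ["POWERTOOLS_METRICS_NAMESPACE"] = "ServerlessAirline"
os.environ["POWERTOOLS_SERVICE_NAME"] = "booking"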

Clearing metrics

The Metrics utility keeps metrics in memory across multiple instances. If you need to test this behavior, you can use the following pytest fixture to ensure metrics are reset, including cold start:

Clearing metrics between tests
import pytest

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import metrics as metrics_global


@pytest.fixture(scope="function", autouse=True)
def reset_metric_set():
    # Clear out every metric data prior to every test
    metrics = Metrics()
    metrics.clear_metrics()
    metrics_global.is_cold_start = True  # ensure each test has cold start
    metrics.clear_default_dimensions()  # remove persisted default dimensions, if any
    yield

Functional testing

You can read standard output and assert whether metrics have been flushed. Here's an example using pytest with the built-in capsys fixture:

import json

import add_metrics


def test_log_metrics(capsys):
    add_metrics.lambda_handler({}, {})

    log = capsys.readouterr().out.strip()  # remove any extra line
    metrics_output = json.loads(log)  # deserialize JSON str

    # THEN we should have no exceptions
    # and a valid EMF object should be flushed correctly
    assert "SuccessfulBooking" in log  # basic string assertion in JSON str
    assert "SuccessfulBooking" in metrics_output["_aws"]["CloudWatchMetrics"][0]["Metrics"][0]["Name"]
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit
from aws_lambda_powertools.utilities.typing import LambdaContext

metrics = Metrics()


@metrics.log_metrics  # ensures metrics are flushed upon request completion/failure
def lambda_handler(event: dict, context: LambdaContext):
    metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)

Parsing multiple EMF blobs from standard output will be needed when using capture_cold_start_metric=True, or when both Metrics and single_metric are used.

import json
from dataclasses import dataclass

import assert_multiple_emf_blobs_module
import pytest


@pytest.fixture
def lambda_context():
    @dataclass
    class LambdaContext:
        function_name: str = "test"
        memory_limit_in_mb: int = 128
        invoked_function_arn: str = "arn:aws:lambda:eu-west-1:809313241:function:test"
        aws_request_id: str = "52fdfc07-2182-154f-163f-5f0f9a621d72"

    return LambdaContext()


def capture_metrics_output_multiple_emf_objects(capsys):
    return [json.loads(line.strip()) for line in capsys.readouterr().out.split("\n") if line]


def test_log_metrics(capsys, lambda_context):
    assert_multiple_emf_blobs_module.lambda_handler({}, lambda_context)

    cold_start_blob, custom_metrics_blob = capture_metrics_output_multiple_emf_objects(capsys)

    # Since `capture_cold_start_metric` is used
    # we should have one JSON blob for cold start metric and one for the application
    assert cold_start_blob["ColdStart"] == [1.0]
    assert cold_start_blob["function_name"] == "test"

    assert "SuccessfulBooking" in custom_metrics_blob
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit
from aws_lambda_powertools.utilities.typing import LambdaContext

metrics = Metrics()


@metrics.log_metrics(capture_cold_start_metric=True)
def lambda_handler(event: dict, context: LambdaContext):
    metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
Tip

For more elaborate assertions and comparisons, check out our functional testing for Metrics utility.


Last update: 2022-08-05