Metrics
Metrics creates custom metrics asynchronously by logging them to standard output following the Amazon CloudWatch Embedded Metric Format (EMF).
These metrics can be visualized through the Amazon CloudWatch Console.
Key features¶
- Aggregate up to 100 metrics using a single CloudWatch EMF object (large JSON blob)
- Validate against common metric definition mistakes (metric unit, values, max dimensions, max metrics, etc.)
- Metrics are created asynchronously by the CloudWatch service; no custom stacks needed
- Context manager to create a one-off metric with a different dimension
Terminologies¶
If you're new to Amazon CloudWatch, there are two terminologies you must be aware of before using this utility:
- Namespace. It's the highest level container that will group multiple metrics from multiple services for a given application, for example `ServerlessEcommerce`.
- Dimensions. Metric metadata in key-value format. They help you slice and dice metrics visualization, for example the `ColdStart` metric by the `Payment` service.
Getting started¶
Metrics has two global settings that will be used across all metrics emitted:
Setting | Description | Environment variable | Constructor parameter
---|---|---|---
Metric namespace | Logical container where all metrics will be placed e.g. `ServerlessAirline` | `POWERTOOLS_METRICS_NAMESPACE` | `namespace`
Service | Optionally, sets service metric dimension across all metrics e.g. `payment` | `POWERTOOLS_SERVICE_NAME` | `service`
Use your application or main service as the metric namespace to easily group all metrics
Example using AWS Serverless Application Model (SAM)
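In a SAM template you would typically set the `POWERTOOLS_METRICS_NAMESPACE` and `POWERTOOLS_SERVICE_NAME` environment variables on your function (for example under `Globals` → `Function` → `Environment` → `Variables`). A minimal sketch of the equivalent in code, using the constructor parameters from the table above (the `ServerlessAirline` namespace and `payment` service are illustrative):

```python
from aws_lambda_powertools import Metrics

# Equivalent to setting POWERTOOLS_METRICS_NAMESPACE and POWERTOOLS_SERVICE_NAME
# as environment variables in your SAM template (values are illustrative)
metrics = Metrics(namespace="ServerlessAirline", service="payment")
```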
You can initialize Metrics anywhere in your code; it will keep track of your aggregate metrics in memory.
Creating metrics¶
You can create metrics using the `add_metric` method, and you can create dimensions for all your aggregate metrics using the `add_dimension` method.
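A minimal sketch of both methods (metric, dimension, namespace, and service names are illustrative):

```python
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics(namespace="ServerlessAirline", service="booking")

def lambda_handler(event, context):
    # Dimension applied to every metric aggregated by this Metrics instance
    metrics.add_dimension(name="environment", value="prod")
    metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
```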
Autocomplete Metric Units
The `MetricUnit` enum facilitates finding a metric unit supported by CloudWatch. Alternatively, you can pass the value as a string if you already know it, e.g. "Count".
Metrics overflow
CloudWatch EMF supports a max of 100 metrics per batch. The Metrics utility will flush all metrics when adding the 100th metric. Subsequent metrics, e.g. the 101st, will be aggregated into a new EMF object, for your convenience.
Flushing metrics¶
As you finish adding all your metrics, you need to serialize and flush them to standard output. You can do that automatically with the `log_metrics` decorator.
This decorator also validates, serializes, and flushes all your metrics. During metrics validation, if no metrics are provided then a warning will be logged, but no exception will be raised.
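A sketch of the decorator in use (names are illustrative):

```python
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics(namespace="ServerlessAirline", service="booking")

@metrics.log_metrics  # validates, serializes, and flushes metrics when the handler returns
def lambda_handler(event, context):
    metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
```

The flushed output is a single JSON object in EMF format, containing your metric values plus the `_aws` metadata block CloudWatch uses to extract them.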
Metric validation
If metrics are provided, and any of the following criteria are not met, a `SchemaValidationError` exception will be raised:
- Maximum of 9 dimensions
- Namespace is set, and no more than one
- Metric units must be supported by CloudWatch
Raising SchemaValidationError on empty metrics¶
If you want to ensure that at least one metric is emitted, you can pass `raise_on_empty_metrics` to the `log_metrics` decorator:
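A sketch, assuming namespace and service are set via environment variables:

```python
from aws_lambda_powertools import Metrics

metrics = Metrics()

@metrics.log_metrics(raise_on_empty_metrics=True)  # raises SchemaValidationError if no metric was added
def lambda_handler(event, context):
    # no metrics added on this code path
    ...
```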
Suppressing warning messages on empty metrics
If you expect your function to execute without publishing metrics every time, you can suppress the warning with `warnings.filterwarnings("ignore", "No metrics to publish*")`.
Nesting multiple middlewares¶
When using multiple middlewares, use `log_metrics` as your last decorator wrapping all subsequent ones to prevent early metric validation when your code hasn't been run yet.
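A sketch combining it with the Tracer utility (service and metric names are illustrative):

```python
from aws_lambda_powertools import Metrics, Tracer
from aws_lambda_powertools.metrics import MetricUnit

tracer = Tracer(service="booking")
metrics = Metrics(namespace="ServerlessAirline", service="booking")

@metrics.log_metrics             # outermost decorator, so validation and flushing run last
@tracer.capture_lambda_handler   # other middlewares go inside
def lambda_handler(event, context):
    metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
```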
Capturing cold start metric¶
You can optionally capture cold start metrics with the `log_metrics` decorator via the `capture_cold_start_metric` param.
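A minimal sketch (namespace and service are illustrative):

```python
from aws_lambda_powertools import Metrics

metrics = Metrics(namespace="ServerlessAirline", service="payment")

@metrics.log_metrics(capture_cold_start_metric=True)
def lambda_handler(event, context):
    ...
```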
If it's a cold start invocation, this feature will:
- Create a separate EMF blob solely containing a metric named `ColdStart`
- Add `function_name` and `service` dimensions
This has the advantage of keeping the cold start metric separate from your application metrics, where you might have unrelated dimensions.
Advanced¶
Adding metadata¶
You can add high-cardinality data as part of your Metrics log with the `add_metadata` method. This is useful when you want to search highly contextual information along with your metrics in your logs.
Info
This will not be available during metrics visualization; use dimensions for this purpose.
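A sketch, assuming an illustrative `booking_id` metadata key:

```python
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics(namespace="ServerlessAirline", service="booking")

@metrics.log_metrics
def lambda_handler(event, context):
    metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
    # Searchable in CloudWatch Logs, but not emitted as a metric dimension
    metrics.add_metadata(key="booking_id", value="booking_uuid")
```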
Single metric with a different dimension¶
CloudWatch EMF uses the same dimensions across all your metrics. Use `single_metric` if you have a metric that should have different dimensions.
Info
Generally, this would be an edge case since you pay per unique metric. Keep the following formula in mind:
unique metric = (metric_name + dimension_name + dimension_value)
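A sketch using the ColdStart example from earlier (the `function_context` dimension is illustrative):

```python
from aws_lambda_powertools import single_metric
from aws_lambda_powertools.metrics import MetricUnit

def lambda_handler(event, context):
    # Flushed as its own EMF blob, with its own dimensions, when the block exits
    with single_metric(name="ColdStart", unit=MetricUnit.Count, value=1, namespace="ServerlessAirline") as metric:
        metric.add_dimension(name="function_context", value="$LATEST")
```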
Flushing metrics manually¶
If you prefer not to use `log_metrics` because you might want to encapsulate additional logic when doing so, you can manually flush and clear metrics as follows:
Warning
Metrics, dimensions and namespace validation still applies.
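A sketch, assuming `serialize_metric_set` and `clear_metrics` methods (not shown elsewhere on this page) and illustrative names:

```python
import json

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics(namespace="ServerlessAirline", service="booking")

def lambda_handler(event, context):
    metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
    your_metrics_object = metrics.serialize_metric_set()  # validation also happens here
    metrics.clear_metrics()
    print(json.dumps(your_metrics_object))  # flush to standard output
```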
Testing your code¶
Environment variables¶
Use the `POWERTOOLS_METRICS_NAMESPACE` and `POWERTOOLS_SERVICE_NAME` env vars when unit testing your code to ensure metric namespace and dimension objects are created, and your code doesn't fail validation.
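One way to set these for an entire test session is in a `conftest.py` (a sketch; values are illustrative):

```python
# conftest.py - set Powertools env vars before any test imports your handler
import os

os.environ["POWERTOOLS_METRICS_NAMESPACE"] = "ServerlessAirline"
os.environ["POWERTOOLS_SERVICE_NAME"] = "booking"
```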
If you prefer setting environment variables for specific tests, and are using Pytest, you can use the monkeypatch fixture:
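A sketch of such a fixture (fixture name and values are illustrative):

```python
import pytest

@pytest.fixture
def powertools_env(monkeypatch):
    # monkeypatch restores the original environment after each test
    monkeypatch.setenv("POWERTOOLS_METRICS_NAMESPACE", "ServerlessAirline")
    monkeypatch.setenv("POWERTOOLS_SERVICE_NAME", "booking")
```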
Ignore this if you are explicitly setting the namespace/default dimension via the `namespace` and `service` parameters: `metrics = Metrics(namespace=ApplicationName, service=ServiceName)`
Clearing metrics¶
`Metrics` keeps metrics in memory across multiple instances. If you need to test this behaviour, you can use the following Pytest fixture to ensure metrics are reset, including the cold start flag:
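A sketch of such a fixture; the module-level `is_cold_start` flag is an assumption about how the utility tracks cold starts:

```python
import pytest

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import metrics as metrics_global

@pytest.fixture(scope="function", autouse=True)
def reset_metric_set():
    # Clear out every metric before each test
    metrics = Metrics()
    metrics.clear_metrics()
    metrics_global.is_cold_start = True  # assumed flag; lets ColdStart be captured again per test
    yield
```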