
EMF

CLASS DESCRIPTION
AmazonCloudWatchEMFProvider

AmazonCloudWatchEMFProvider creates metrics asynchronously via CloudWatch Embedded Metric Format (EMF).

AmazonCloudWatchEMFProvider

AmazonCloudWatchEMFProvider(
    metric_set: dict[str, Any] | None = None,
    dimension_set: dict | None = None,
    namespace: str | None = None,
    metadata_set: dict[str, Any] | None = None,
    service: str | None = None,
    default_dimensions: dict[str, Any] | None = None,
)

Bases: BaseProvider

AmazonCloudWatchEMFProvider creates metrics asynchronously via CloudWatch Embedded Metric Format (EMF).

CloudWatch EMF supports up to 100 metrics per EMF object. Metrics, dimensions, and namespaces created via AmazonCloudWatchEMFProvider adhere to the EMF schema and are serialized and validated against it.

Use aws_lambda_powertools.Metrics or aws_lambda_powertools.single_metric to create EMF metrics.

Environment variables

POWERTOOLS_METRICS_NAMESPACE : str
    metric namespace to be set for all metrics
POWERTOOLS_SERVICE_NAME : str
    service name used for default dimension
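
A minimal sketch of wiring this provider into Metrics explicitly. This assumes Metrics accepts a provider argument (as in recent Powertools for AWS Lambda releases); the namespace and service values are placeholders, and Metrics already uses this EMF provider by default, so passing it is only needed when you want to customize it.

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics.provider.cloudwatch_emf.cloudwatch import AmazonCloudWatchEMFProvider

# Placeholder namespace/service; both can also come from the
# POWERTOOLS_METRICS_NAMESPACE and POWERTOOLS_SERVICE_NAME environment variables.
provider = AmazonCloudWatchEMFProvider(namespace="ServerlessAirline", service="payment")
metrics = Metrics(provider=provider)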

RAISES DESCRIPTION
MetricUnitError

When metric unit isn't supported by CloudWatch

MetricResolutionError

When metric resolution isn't supported by CloudWatch

MetricValueError

When metric value isn't a number

SchemaValidationError

When metric object fails EMF schema validation
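
For illustration, a hedged sketch of handling one of the exceptions above. It assumes these exceptions are importable from aws_lambda_powertools.metrics and uses placeholder metric names.

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricValueError

metrics = Metrics(namespace="ServerlessAirline", service="payment")

try:
    # A non-numeric value is rejected before anything is recorded
    metrics.add_metric(name="SuccessfulBooking", unit="Count", value="not-a-number")
except MetricValueError as exc:
    print(f"Rejected metric value: {exc}")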

METHOD DESCRIPTION
add_cold_start_metric

Add cold start metric and function_name dimension

add_dimension

Adds given dimension to all metrics

add_metadata

Adds high-cardinality metadata to the metrics object

add_metric

Adds given metric

flush_metrics

Manually flushes the metrics; normally only needed outside Lambda

log_metrics

Decorator to serialize and publish metrics at the end of a function execution.

serialize_metric_set

Serializes the metric and dimension sets

set_default_dimensions

Persist dimensions across Lambda invocations

set_timestamp

Set the timestamp for the metric.

Source code in aws_lambda_powertools/metrics/provider/cloudwatch_emf/cloudwatch.py
def __init__(
    self,
    metric_set: dict[str, Any] | None = None,
    dimension_set: dict | None = None,
    namespace: str | None = None,
    metadata_set: dict[str, Any] | None = None,
    service: str | None = None,
    default_dimensions: dict[str, Any] | None = None,
):
    self.metric_set = metric_set if metric_set is not None else {}
    self.dimension_set = dimension_set if dimension_set is not None else {}
    self.default_dimensions = default_dimensions or {}
    self.namespace = resolve_env_var_choice(choice=namespace, env=os.getenv(constants.METRICS_NAMESPACE_ENV))
    self.service = resolve_env_var_choice(choice=service, env=os.getenv(constants.SERVICE_NAME_ENV))
    self.metadata_set = metadata_set if metadata_set is not None else {}
    self.timestamp: int | None = None

    self._metric_units = [unit.value for unit in MetricUnit]
    self._metric_unit_valid_options = list(MetricUnit.__members__)
    self._metric_resolutions = [resolution.value for resolution in MetricResolution]

    self.dimension_set.update(**self.default_dimensions)

add_cold_start_metric

add_cold_start_metric(context: LambdaContext) -> None

Add cold start metric and function_name dimension

PARAMETER DESCRIPTION
context

Lambda context

TYPE: LambdaContext

Source code in aws_lambda_powertools/metrics/provider/cloudwatch_emf/cloudwatch.py
def add_cold_start_metric(self, context: LambdaContext) -> None:
    """Add cold start metric and function_name dimension

    Parameters
    ----------
    context : Any
        Lambda context
    """
    logger.debug("Adding cold start metric and function_name dimension")
    with single_metric(name="ColdStart", unit=MetricUnit.Count, value=1, namespace=self.namespace) as metric:
        metric.add_dimension(name="function_name", value=context.function_name)
        if self.service:
            metric.add_dimension(name="service", value=str(self.service))

add_dimension

add_dimension(name: str, value: str) -> None

Adds given dimension to all metrics

Example

Add a metric dimension

metric.add_dimension(name="operation", value="confirm_booking")
PARAMETER DESCRIPTION
name

Dimension name

TYPE: str

value

Dimension value

TYPE: str

Source code in aws_lambda_powertools/metrics/provider/cloudwatch_emf/cloudwatch.py
def add_dimension(self, name: str, value: str) -> None:
    """Adds given dimension to all metrics

    Example
    -------
    **Add a metric dimensions**

        metric.add_dimension(name="operation", value="confirm_booking")

    Parameters
    ----------
    name : str
        Dimension name
    value : str
        Dimension value
    """
    logger.debug(f"Adding dimension: {name}:{value}")
    if len(self.dimension_set) == MAX_DIMENSIONS:
        raise SchemaValidationError(
            f"Maximum number of dimensions exceeded ({MAX_DIMENSIONS}): Unable to add dimension {name}.",
        )

    value = value if isinstance(value, str) else str(value)

    if not name.strip() or not value.strip():
        warnings.warn(
            f"The dimension {name} doesn't meet the requirements and won't be added. "
            "Ensure the dimension name and value are non-empty strings",
            category=PowertoolsUserWarning,
            stacklevel=2,
        )
        return

    if name in self.dimension_set or name in self.default_dimensions:
        warnings.warn(
            f"Dimension '{name}' has already been added. The previous value will be overwritten.",
            category=PowertoolsUserWarning,
            stacklevel=2,
        )

    self.dimension_set[name] = value
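
The warnings above can be observed with a short sketch like the following, assuming a Metrics (or provider) instance named metrics; dimension names and values are placeholders.

metrics.add_dimension(name="operation", value="confirm_booking")
metrics.add_dimension(name="operation", value="cancel_booking")  # warns, then overwrites the previous value
metrics.add_dimension(name="tenant_id", value="   ")             # warns and is skipped: value is effectively empty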

add_metadata

add_metadata(key: str, value: Any) -> None

Adds high-cardinality metadata to the metrics object

This will not be available during metrics visualization. Instead, this will be searchable through logs.

If you're looking to add metadata to filter metrics, use the add_dimension method instead.

Example

Add metrics metadata

metric.add_metadata(key="booking_id", value="booking_id")
PARAMETER DESCRIPTION
key

Metadata key

TYPE: str

value

Metadata value

TYPE: Any

Source code in aws_lambda_powertools/metrics/provider/cloudwatch_emf/cloudwatch.py
def add_metadata(self, key: str, value: Any) -> None:
    """Adds high cardinal metadata for metrics object

    This will not be available during metrics visualization.
    Instead, this will be searchable through logs.

    If you're looking to add metadata to filter metrics, then
    use add_dimension method.

    Example
    -------
    **Add metrics metadata**

        metric.add_metadata(key="booking_id", value="booking_id")

    Parameters
    ----------
    key : str
        Metadata key
    value : any
        Metadata value
    """
    logger.debug(f"Adding metadata: {key}:{value}")

    # Cast key to str according to EMF spec
    # Majority of keys are expected to be string already, so
    # checking before casting improves performance in most cases
    if isinstance(key, str):
        self.metadata_set[key] = value
    else:
        self.metadata_set[str(key)] = value
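
A brief sketch contrasting metadata with dimensions, assuming a Metrics instance named metrics; the values are placeholders.

from aws_lambda_powertools.metrics import MetricUnit

metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
metrics.add_dimension(name="operation", value="confirm_booking")  # graphable in CloudWatch Metrics
metrics.add_metadata(key="booking_id", value="5b6f3e9a")          # only searchable via CloudWatch Logs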

add_metric

add_metric(
    name: str,
    unit: MetricUnit | str,
    value: float,
    resolution: MetricResolution | int = 60,
) -> None

Adds given metric

Example

Add given metric using MetricUnit enum

metric.add_metric(name="BookingConfirmation", unit=MetricUnit.Count, value=1)

Add given metric using a plain string as the unit

metric.add_metric(name="BookingConfirmation", unit="Count", value=1)

Add given metric with a non-default MetricResolution

metric.add_metric(name="BookingConfirmation", unit="Count", value=1, resolution=MetricResolution.High)
PARAMETER DESCRIPTION
name

Metric name

TYPE: str

unit

Metric unit, as a MetricUnit enum member or its string name (e.g. "Count")

TYPE: MetricUnit | str

value

Metric value

TYPE: float

resolution

Metric resolution in seconds: a MetricResolution enum member, or 1 (high) / 60 (standard)

TYPE: MetricResolution | int DEFAULT: 60

RAISES DESCRIPTION
MetricUnitError

When metric unit is not supported by CloudWatch

MetricResolutionError

When metric resolution is not supported by CloudWatch

Source code in aws_lambda_powertools/metrics/provider/cloudwatch_emf/cloudwatch.py
def add_metric(
    self,
    name: str,
    unit: MetricUnit | str,
    value: float,
    resolution: MetricResolution | int = 60,
) -> None:
    """Adds given metric

    Example
    -------
    **Add given metric using MetricUnit enum**

        metric.add_metric(name="BookingConfirmation", unit=MetricUnit.Count, value=1)

    **Add given metric using plain string as value unit**

        metric.add_metric(name="BookingConfirmation", unit="Count", value=1)

    **Add given metric with MetricResolution non default value**

        metric.add_metric(name="BookingConfirmation", unit="Count", value=1, resolution=MetricResolution.High)

    Parameters
    ----------
    name : str
        Metric name
    unit : MetricUnit | str
        `aws_lambda_powertools.helper.models.MetricUnit`
    value : float
        Metric value
    resolution : MetricResolution | int
        `aws_lambda_powertools.helper.models.MetricResolution`

    Raises
    ------
    MetricUnitError
        When metric unit is not supported by CloudWatch
    MetricResolutionError
        When metric resolution is not supported by CloudWatch
    """
    if not isinstance(value, numbers.Number):
        raise MetricValueError(f"{value} is not a valid number")

    unit = extract_cloudwatch_metric_unit_value(
        metric_units=self._metric_units,
        metric_valid_options=self._metric_unit_valid_options,
        unit=unit,
    )
    resolution = extract_cloudwatch_metric_resolution_value(
        metric_resolutions=self._metric_resolutions,
        resolution=resolution,
    )
    metric: dict = self.metric_set.get(name, defaultdict(list))
    metric["Unit"] = unit
    metric["StorageResolution"] = resolution
    metric["Value"].append(float(value))
    logger.debug(f"Adding metric: {name} with {metric}")
    self.metric_set[name] = metric

    if len(self.metric_set) == MAX_METRICS or len(metric["Value"]) == MAX_METRICS:
        logger.debug(f"Exceeded maximum of {MAX_METRICS} metrics - Publishing existing metric set")
        metrics = self.serialize_metric_set()
        print(json.dumps(metrics))

        # clear metric set only as opposed to metrics and dimensions set
        # since we could have more than 100 metrics
        self.metric_set.clear()
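
Two behaviours from the source above are worth calling out: repeated metric names append to the same Value list, and the set auto-flushes once 100 metrics accumulate. A hedged sketch, assuming a Metrics instance named metrics:

from aws_lambda_powertools.metrics import MetricResolution, MetricUnit

metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=3)  # appended: Value becomes [1.0, 3.0]
metrics.add_metric(name="Latency", unit=MetricUnit.Milliseconds, value=12.5, resolution=MetricResolution.High)
# When 100 distinct metrics (or 100 values for a single metric) accumulate, the current
# set is serialized, printed as an EMF blob, and the metric set is cleared automatically.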

flush_metrics

flush_metrics(raise_on_empty_metrics: bool = False) -> None

Manually flushes the metrics. This is normally not necessary unless you're running outside Lambda; within Lambda, the @log_metrics decorator already handles flushing for you.

PARAMETER DESCRIPTION
raise_on_empty_metrics

raise exception if no metrics are emitted, by default False

TYPE: bool DEFAULT: False

Source code in aws_lambda_powertools/metrics/provider/cloudwatch_emf/cloudwatch.py
def flush_metrics(self, raise_on_empty_metrics: bool = False) -> None:
    """Manually flushes the metrics. This is normally not necessary,
    unless you're running on other runtimes besides Lambda, where the @log_metrics
    decorator already handles things for you.

    Parameters
    ----------
    raise_on_empty_metrics : bool, optional
        raise exception if no metrics are emitted, by default False
    """
    if not raise_on_empty_metrics and not self.metric_set:
        warnings.warn(
            "No application metrics to publish. The cold-start metric may be published if enabled. "
            "If application metrics should never be empty, consider using 'raise_on_empty_metrics'",
            stacklevel=2,
        )
    else:
        logger.debug("Flushing existing metrics")
        metrics = self.serialize_metric_set()
        print(json.dumps(metrics, separators=(",", ":")))
        self.clear_metrics()
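
A minimal sketch of a manual flush outside the decorator, for example in a script or container runtime where @log_metrics doesn't run; namespace and service values are placeholders.

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics(namespace="ServerlessAirline", service="payment")
metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
metrics.flush_metrics()  # serializes, prints the EMF blob to stdout, and clears the metric set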

log_metrics

log_metrics(
    lambda_handler: AnyCallableT | None = None,
    capture_cold_start_metric: bool = False,
    raise_on_empty_metrics: bool = False,
    **kwargs
)

Decorator to serialize and publish metrics at the end of a function execution.

Be aware that log_metrics does call the decorated function (e.g. lambda_handler).

Example

Lambda function using tracer and metrics decorators

from aws_lambda_powertools import Metrics, Tracer

metrics = Metrics(service="payment")
tracer = Tracer(service="payment")

@tracer.capture_lambda_handler
@metrics.log_metrics
def handler(event, context):
        ...
PARAMETER DESCRIPTION
lambda_handler

lambda function handler, by default None

TYPE: Callable[[Any, Any], Any] DEFAULT: None

capture_cold_start_metric

captures cold start metric, by default False

TYPE: bool DEFAULT: False

raise_on_empty_metrics

raise exception if no metrics are emitted, by default False

TYPE: bool DEFAULT: False

**kwargs

Additional keyword arguments; default_dimensions (a dict of dimensions to persist) is supported

DEFAULT: {}

RAISES DESCRIPTION
Exception

Propagates any exception raised by the decorated function

Source code in aws_lambda_powertools/metrics/provider/cloudwatch_emf/cloudwatch.py
def log_metrics(
    self,
    lambda_handler: AnyCallableT | None = None,
    capture_cold_start_metric: bool = False,
    raise_on_empty_metrics: bool = False,
    **kwargs,
):
    """Decorator to serialize and publish metrics at the end of a function execution.

    Be aware that the log_metrics **does call* the decorated function (e.g. lambda_handler).

    Example
    -------
    **Lambda function using tracer and metrics decorators**

        from aws_lambda_powertools import Metrics, Tracer

        metrics = Metrics(service="payment")
        tracer = Tracer(service="payment")

        @tracer.capture_lambda_handler
        @metrics.log_metrics
        def handler(event, context):
                ...

    Parameters
    ----------
    lambda_handler : Callable[[Any, Any], Any], optional
        lambda function handler, by default None
    capture_cold_start_metric : bool, optional
        captures cold start metric, by default False
    raise_on_empty_metrics : bool, optional
        raise exception if no metrics are emitted, by default False
    **kwargs

    Raises
    ------
    e
        Propagate error received
    """

    default_dimensions = kwargs.get("default_dimensions")

    if default_dimensions:
        self.set_default_dimensions(**default_dimensions)

    return super().log_metrics(
        lambda_handler=lambda_handler,
        capture_cold_start_metric=capture_cold_start_metric,
        raise_on_empty_metrics=raise_on_empty_metrics,
        **kwargs,
    )
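
The **kwargs handled above include default_dimensions; a short sketch with placeholder values:

from aws_lambda_powertools import Metrics

metrics = Metrics(namespace="ServerlessAirline", service="payment")

@metrics.log_metrics(capture_cold_start_metric=True, default_dimensions={"environment": "demo"})
def handler(event, context):
    metrics.add_metric(name="SuccessfulBooking", unit="Count", value=1)
    return {"statusCode": 200}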

serialize_metric_set

serialize_metric_set(
    metrics: dict | None = None,
    dimensions: dict | None = None,
    metadata: dict | None = None,
) -> CloudWatchEMFOutput

Serializes the metric and dimension sets

PARAMETER DESCRIPTION
metrics

Dictionary of metrics to serialize, by default None

TYPE: dict DEFAULT: None

dimensions

Dictionary of dimensions to serialize, by default None

TYPE: dict DEFAULT: None

metadata

Dictionary of metadata to serialize, by default None

TYPE: dict | None DEFAULT: None

Example

Serialize metrics into EMF format

metrics = MetricManager()
# ...add metrics, dimensions, namespace
ret = metrics.serialize_metric_set()
RETURNS DESCRIPTION
CloudWatchEMFOutput

Serialized metrics following EMF specification

RAISES DESCRIPTION
SchemaValidationError

Raised when serialization fails schema validation

Source code in aws_lambda_powertools/metrics/provider/cloudwatch_emf/cloudwatch.py
def serialize_metric_set(
    self,
    metrics: dict | None = None,
    dimensions: dict | None = None,
    metadata: dict | None = None,
) -> CloudWatchEMFOutput:
    """Serializes metric and dimensions set

    Parameters
    ----------
    metrics : dict, optional
        Dictionary of metrics to serialize, by default None
    dimensions : dict, optional
        Dictionary of dimensions to serialize, by default None
    metadata: dict, optional
        Dictionary of metadata to serialize, by default None

    Example
    -------
    **Serialize metrics into EMF format**

        metrics = MetricManager()
        # ...add metrics, dimensions, namespace
        ret = metrics.serialize_metric_set()

    Returns
    -------
    CloudWatchEMFOutput
        Serialized metrics following EMF specification

    Raises
    ------
    SchemaValidationError
        Raised when serialization fail schema validation
    """
    if metrics is None:  # pragma: no cover
        metrics = self.metric_set

    if dimensions is None:  # pragma: no cover
        dimensions = self.dimension_set

    if metadata is None:  # pragma: no cover
        metadata = self.metadata_set

    if self.service and not self.dimension_set.get("service"):
        # self.service won't be a float
        self.add_dimension(name="service", value=self.service)

    if len(metrics) == 0:
        raise SchemaValidationError("Must contain at least one metric.")

    if self.namespace is None:
        raise SchemaValidationError("Must contain a metric namespace.")

    logger.debug({"details": "Serializing metrics", "metrics": metrics, "dimensions": dimensions})

    # For standard resolution metrics, don't add StorageResolution field to avoid unnecessary ingestion of data into cloudwatch # noqa E501
    # Example: [ { "Name": "metric_name", "Unit": "Count"} ] # noqa ERA001
    #
    # In case using high-resolution metrics, add StorageResolution field
    # Example: [ { "Name": "metric_name", "Unit": "Count", "StorageResolution": 1 } ] # noqa ERA001
    metric_definition: list[MetricNameUnitResolution] = []
    metric_names_and_values: dict[str, float] = {}  # { "metric_name": 1.0 }

    for metric_name in metrics:
        metric: dict = metrics[metric_name]
        metric_value: int = metric.get("Value", 0)
        metric_unit: str = metric.get("Unit", "")
        metric_resolution: int = metric.get("StorageResolution", 60)

        metric_definition_data: MetricNameUnitResolution = {"Name": metric_name, "Unit": metric_unit}

        # high-resolution metrics
        if metric_resolution == 1:
            metric_definition_data["StorageResolution"] = metric_resolution

        metric_definition.append(metric_definition_data)

        metric_names_and_values.update({metric_name: metric_value})

    return {
        "_aws": {
            "Timestamp": self.timestamp or int(datetime.datetime.now().timestamp() * 1000),  # epoch
            "CloudWatchMetrics": [
                {
                    "Namespace": self.namespace,  # "test_namespace"
                    "Dimensions": [list(dimensions.keys())],  # [ "service" ]
                    "Metrics": metric_definition,
                },
            ],
        },
        # NOTE: Mypy doesn't recognize splats '** syntax' in TypedDict
        **dimensions,  # "service": "test_service"
        **metadata,  # type: ignore[typeddict-item] # "username": "test"
        **metric_names_and_values,  # "single_metric": 1.0
    }
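
For orientation, with a single metric and a service dimension the return value looks roughly like the dict below; the timestamp and values are illustrative only.

{
    "_aws": {
        "Timestamp": 1700000000000,
        "CloudWatchMetrics": [
            {
                "Namespace": "ServerlessAirline",
                "Dimensions": [["service"]],
                "Metrics": [{"Name": "SuccessfulBooking", "Unit": "Count"}],
            },
        ],
    },
    "service": "payment",
    "SuccessfulBooking": [1.0],
}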

set_default_dimensions

set_default_dimensions(**dimensions) -> None

Persist dimensions across Lambda invocations

PARAMETER DESCRIPTION
dimensions

metric dimensions as key=value

TYPE: dict[str, Any] DEFAULT: {}

Example

Sets some default dimensions that will always be present across metrics and invocations

from aws_lambda_powertools import Metrics

metrics = Metrics(namespace="ServerlessAirline", service="payment")
metrics.set_default_dimensions(environment="demo", another="one")

@metrics.log_metrics()
def lambda_handler():
    return True
Source code in aws_lambda_powertools/metrics/provider/cloudwatch_emf/cloudwatch.py
def set_default_dimensions(self, **dimensions) -> None:
    """Persist dimensions across Lambda invocations

    Parameters
    ----------
    dimensions : dict[str, Any], optional
        metric dimensions as key=value

    Example
    -------
    **Sets some default dimensions that will always be present across metrics and invocations**

        from aws_lambda_powertools import Metrics

        metrics = Metrics(namespace="ServerlessAirline", service="payment")
        metrics.set_default_dimensions(environment="demo", another="one")

        @metrics.log_metrics()
        def lambda_handler():
            return True
    """
    for name, value in dimensions.items():
        self.add_dimension(name, value)

    self.default_dimensions.update(**dimensions)

set_timestamp

set_timestamp(timestamp: int | datetime.datetime)

Set the timestamp for the metric.

PARAMETER DESCRIPTION
timestamp

The timestamp to create the metric. If an integer is provided, it is assumed to be the epoch time in milliseconds. If a datetime object is provided, it will be converted to epoch time in milliseconds.

TYPE: int | datetime.datetime

Source code in aws_lambda_powertools/metrics/provider/cloudwatch_emf/cloudwatch.py
def set_timestamp(self, timestamp: int | datetime.datetime):
    """
    Set the timestamp for the metric.

    Parameters
    -----------
    timestamp: int | datetime.datetime
        The timestamp to create the metric.
        If an integer is provided, it is assumed to be the epoch time in milliseconds.
        If a datetime object is provided, it will be converted to epoch time in milliseconds.
    """
    # The timestamp must be a Datetime object or an integer representing an epoch time.
    # This should not exceed 14 days in the past or be more than 2 hours in the future.
    # Any metrics failing to meet this criteria will be skipped by Amazon CloudWatch.
    # See: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format_Specification.html
    # See: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatch-Logs-Monitoring-CloudWatch-Metrics.html
    if not validate_emf_timestamp(timestamp):
        warnings.warn(
            "This metric doesn't meet the requirements and will be skipped by Amazon CloudWatch. "
            "Ensure the timestamp is within 14 days past or 2 hours future.",
            stacklevel=2,
        )

    self.timestamp = convert_timestamp_to_emf_format(timestamp)
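
A short sketch of back-dating a metric, assuming a Metrics instance that exposes set_timestamp; the values are illustrative.

import datetime

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics(namespace="ServerlessAirline", service="payment")

# A datetime is converted to epoch milliseconds; an int is taken as epoch milliseconds.
# CloudWatch skips metrics older than 14 days or more than 2 hours in the future.
metrics.set_timestamp(datetime.datetime.now() - datetime.timedelta(hours=1))
metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)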