Module aws_lambda_powertools.shared.json_encoder

Source code
import decimal
import json
import math

from aws_lambda_powertools.shared.functions import dataclass_to_dict, is_dataclass, is_pydantic, pydantic_to_dict


class Encoder(json.JSONEncoder):
    """Custom JSON encoder to allow for serialization of Decimals, Pydantic and Dataclasses.

    It's similar to the serializer used by Lambda internally.
    """

    def default(self, obj):
        if isinstance(obj, decimal.Decimal):
            if obj.is_nan():
                return math.nan
            return str(obj)

        if is_pydantic(obj):
            return pydantic_to_dict(obj)

        if is_dataclass(obj):
            return dataclass_to_dict(obj)

        return super().default(obj)
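
A minimal usage sketch (not part of the module itself; the Order dataclass below is hypothetical): pass the encoder to json.dumps via cls so that Decimal values and dataclasses, which the default encoder rejects, are serialized.

import decimal
import json
from dataclasses import dataclass

from aws_lambda_powertools.shared.json_encoder import Encoder


@dataclass
class Order:
    # Hypothetical dataclass used only for illustration
    order_id: str
    total: decimal.Decimal


# Decimals are emitted as strings rather than raising a TypeError
print(json.dumps({"total": decimal.Decimal("10.50")}, cls=Encoder))
# {"total": "10.50"}

# Dataclasses are converted to dicts first; nested Decimals are then
# re-encoded as strings by the same encoder
print(json.dumps(Order(order_id="A1", total=decimal.Decimal("10.50")), cls=Encoder))
# expected: {"order_id": "A1", "total": "10.50"}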

Classes

class Encoder (*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, sort_keys=False, indent=None, separators=None, default=None)

Custom JSON encoder to allow for serialization of Decimals, Pydantic and Dataclasses.

It's similar to the serializer used by Lambda internally.

Constructor for JSONEncoder, with sensible defaults.

If skipkeys is false, then it is a TypeError to attempt encoding of keys that are not str, int, float or None. If skipkeys is true, such items are simply skipped.

If ensure_ascii is true, the output is guaranteed to be str objects with all incoming non-ASCII characters escaped. If ensure_ascii is false, the output can contain non-ASCII characters.

If check_circular is true, then lists, dicts, and custom encoded objects will be checked for circular references during encoding to prevent an infinite recursion (which would cause a RecursionError). Otherwise, no such check takes place.

If allow_nan is true, then NaN, Infinity, and -Infinity will be encoded as such. This behavior is not JSON specification compliant, but is consistent with most JavaScript-based encoders and decoders. Otherwise, it will be a ValueError to encode such floats.

If sort_keys is true, then the output of dictionaries will be sorted by key; this is useful for regression tests to ensure that JSON serializations can be compared on a day-to-day basis.

If indent is a non-negative integer, then JSON array elements and object members will be pretty-printed with that indent level. An indent level of 0 will only insert newlines. None is the most compact representation.

If specified, separators should be an (item_separator, key_separator) tuple. The default is (', ', ': ') if indent is None and (',', ': ') otherwise. To get the most compact JSON representation, you should specify (',', ':') to eliminate whitespace.

If specified, default is a function that gets called for objects that can't otherwise be serialized. It should return a JSON encodable version of the object or raise a TypeError.
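
As a quick illustration of these inherited constructor arguments (a sketch, nothing specific to this Encoder): json.dumps forwards keyword arguments such as sort_keys, indent, and separators to the encoder's constructor.

import json

from aws_lambda_powertools.shared.json_encoder import Encoder

data = {"b": 2, "a": 1}

# Most compact form: no whitespace in separators, keys sorted for stable output
print(json.dumps(data, cls=Encoder, sort_keys=True, separators=(",", ":")))
# {"a":1,"b":2}

# Pretty-printed form: a non-negative indent inserts newlines and indentation
print(json.dumps(data, cls=Encoder, indent=2, sort_keys=True))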

Ancestors

  • json.encoder.JSONEncoder

Methods

def default(self, obj)

Implement this method in a subclass such that it returns a serializable object for obj, or calls the base implementation (to raise a TypeError).

For example, to support arbitrary iterators, you could implement default like this:

def default(self, obj):
    try:
        iterable = iter(obj)
    except TypeError:
        pass
    else:
        return list(iterable)
    # Let the base class default method raise the TypeError
    return JSONEncoder.default(self, obj)
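
For this override specifically, the practical effect (per the source at the top of this page, shown here as a sketch): a NaN Decimal is mapped to math.nan and then written as the JavaScript-style literal NaN, because allow_nan defaults to True, while finite Decimals are written as strings.

import decimal
import json

from aws_lambda_powertools.shared.json_encoder import Encoder

print(json.dumps(decimal.Decimal("NaN"), cls=Encoder))   # NaN
print(json.dumps(decimal.Decimal("3.14"), cls=Encoder))  # "3.14"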