Build tools

This guide covers different build tools and dependency managers for packaging Lambda functions with Powertools for AWS Lambda (Python). Each tool has its strengths and is optimized for different use cases.

Requirements file security

For simplicity, examples in this guide use requirements.txt files with pinned versions. In production environments, you should use hash-checking for enhanced security by including --hash flags. Learn more about secure package installation in the pip documentation.
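
As a concrete sketch, pip-tools (a separate package that provides the pip-compile command) can generate a fully hashed requirements.txt from a loose input file, and pip then refuses any artifact whose hash does not match:

# Generate a hashed requirements.txt (requirements.in is an illustrative input file)
pip install pip-tools
pip-compile --generate-hashes requirements.in -o requirements.txt

# pip enforces the recorded hashes at install time
pip install --require-hashes -r requirements.txt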

pip

pip is Python's standard package installer - simple, reliable, and available everywhere. Perfect for straightforward Lambda functions where you need basic dependency management without complex workflows.

Cross-platform compatibility

Always use --platform manylinux2014_x86_64 and --only-binary=:all: flags when building on non-Linux systems to ensure Lambda compatibility. This forces pip to download Linux-compatible wheels instead of compiling from source.

Basic setup

aws-lambda-powertools[all]==3.18.0
pydantic==2.10.4
requests>=2.32.4
from aws_lambda_powertools import Logger, Metrics, Tracer
from aws_lambda_powertools.event_handler import APIGatewayRestResolver
from aws_lambda_powertools.logging import correlation_paths
from aws_lambda_powertools.metrics import MetricUnit

logger = Logger()
tracer = Tracer()
metrics = Metrics()
app = APIGatewayRestResolver()


@app.get("/hello")
def hello():
    logger.info("Hello World API called")
    metrics.add_metric(name="HelloWorldInvocations", unit=MetricUnit.Count, value=1)
    return {"message": "Hello World from Powertools!"}


@app.get("/health")
def health_check():
    return {"status": "healthy", "service": "powertools-pip-example"}


@logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST)
@tracer.capture_lambda_handler
@metrics.log_metrics(capture_cold_start_metric=True)
def lambda_handler(event, context):
    return app.resolve(event, context)
#!/bin/bash

# Create build directory
mkdir -p build/

# Install dependencies with Lambda-compatible wheels
pip install --platform manylinux2014_x86_64 --only-binary=:all: \
    --python-version 3.13 --target build/ \
    -r requirements.txt

# Copy application code
cp app_pip.py build/

# Create deployment package
cd build && zip -r ../lambda-deployment.zip . && cd ..

echo "✅ Deployment package created: lambda-deployment.zip"

Advanced pip with Lambda Layers

Optimize your deployment by shipping Powertools for AWS Lambda as a Lambda layer, keeping the function package itself small:

aws-lambda-powertools[all]==3.18.0
pydantic==2.10.4
requests>=2.32.4
from aws_lambda_powertools import Logger, Metrics, Tracer
from aws_lambda_powertools.event_handler import APIGatewayRestResolver
from aws_lambda_powertools.logging import correlation_paths
from aws_lambda_powertools.metrics import MetricUnit

logger = Logger()
tracer = Tracer()
metrics = Metrics()
app = APIGatewayRestResolver()


@app.get("/hello")
def hello():
    logger.info("Hello World API called")
    metrics.add_metric(name="HelloWorldInvocations", unit=MetricUnit.Count, value=1)
    return {"message": "Hello World from Powertools!"}


@app.get("/health")
def health_check():
    return {"status": "healthy", "service": "powertools-pip-example"}


@logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST)
@tracer.capture_lambda_handler
@metrics.log_metrics(capture_cold_start_metric=True)
def lambda_handler(event, context):
    return app.resolve(event, context)
#!/bin/bash

# Build Lambda Layer with compatible wheels
mkdir -p layer/python/
pip install --platform manylinux2014_x86_64 --only-binary=:all: \
    --python-version 3.13 --target layer/python/ \
    -r requirements-layer.txt
cd layer && zip -r ../powertools-layer.zip . && cd ..

# Build application package (smaller without Powertools)
mkdir -p build/
pip install --platform manylinux2014_x86_64 --only-binary=:all: \
    --python-version 3.13 --target build/ \
    -r requirements-app.txt
cp app_pip.py build/
cd build && zip -r ../lambda-app.zip . && cd ..

echo "✅ Layer created: powertools-layer.zip"
echo "✅ App package created: lambda-app.zip"

Cross-platform builds

Build packages for different Lambda architectures using platform-specific wheels:

#!/bin/bash

# Build for Lambda x86_64 (most common)
mkdir -p build-x86_64/
pip install --platform manylinux2014_x86_64 --only-binary=:all: \
    --python-version 3.13 --target build-x86_64/ \
    -r requirements.txt

# Build for Lambda ARM64 (Graviton2)
mkdir -p build-arm64/
pip install --platform manylinux2014_aarch64 --only-binary=:all: \
    --python-version 3.13 --target build-arm64/ \
    -r requirements.txt

# Copy application code to both builds
cp app_pip.py build-x86_64/
cp app_pip.py build-arm64/

# Create deployment packages
cd build-x86_64 && zip -r ../lambda-x86_64.zip . && cd ..
cd build-arm64 && zip -r ../lambda-arm64.zip . && cd ..

echo "✅ x86_64 package: lambda-x86_64.zip"
echo "✅ ARM64 package: lambda-arm64.zip"

Platform compatibility

| Platform Flag | Lambda Architecture | Use Case |
|---|---|---|
| manylinux2014_x86_64 | x86_64 | Standard Lambda functions |
| manylinux2014_aarch64 | arm64 | Graviton-based functions (lower cost) |
Architecture selection
  • x86_64: Broader package compatibility, more mature ecosystem
  • arm64: Up to 20% better price-performance, newer architecture (verify wheel availability with the check below)
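
Before committing to arm64, you can verify that every pinned dependency ships a compatible wheel. A minimal sketch using pip download, which accepts the same platform flags as pip install:

# Resolve-only check: fails if any dependency lacks an arm64 manylinux wheel
pip download --platform manylinux2014_aarch64 --only-binary=:all: \
    --python-version 3.13 --dest /tmp/wheel-check \
    -r requirements.txt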

uv

uv is an extremely fast Python package manager written in Rust, designed as a drop-in replacement for pip and pip-tools. It offers 10-100x faster dependency resolution and installation, making it ideal for CI/CD pipelines and performance-critical builds. Learn more at docs.astral.sh/uv/.

Cross-platform compatibility

Use uv pip install with --platform manylinux2014_x86_64 and --only-binary=:all: flags when building on non-Linux systems. This ensures Lambda-compatible wheels are downloaded instead of compiling from source.

Setup uv

[project]
name = "lambda-powertools-uv"
version = "0.1.0"
description = "Lambda function with Powertools using uv"
requires-python = ">=3.9"
dependencies = [
    "aws-lambda-powertools[all]>=3.18.0",
    "pydantic>=2.10.0",
    "requests>=2.32.0",
]

[project.optional-dependencies]
dev = [
    "pytest>=8.0.0",
    "black>=24.0.0",
    "mypy>=1.8.0",
]
from __future__ import annotations

from typing import Any

from aws_lambda_powertools import Logger, Metrics, Tracer
from aws_lambda_powertools.event_handler import APIGatewayRestResolver
from aws_lambda_powertools.utilities.typing import LambdaContext

logger = Logger()
tracer = Tracer()
metrics = Metrics()
app = APIGatewayRestResolver()


@app.get("/health")
def health_check():
    return {"status": "healthy", "service": "lambda-powertools-uv"}


@app.get("/metrics")
def get_metrics():
    metrics.add_metric(name="MetricsEndpointCalled", unit="Count", value=1)
    return {"message": "Metrics recorded"}


@logger.inject_lambda_context
@tracer.capture_lambda_handler
@metrics.log_metrics
def lambda_handler(event: dict[str, Any], context: LambdaContext):
    return app.resolve(event, context)
#!/bin/bash

# Create build directory
mkdir -p build/

# Resolve dependencies from pyproject.toml into a pinned requirements file
# (an editable install of the project would leave a broken path reference in the zip)
uv pip compile pyproject.toml -o requirements.txt

# Install dependencies with Lambda-compatible wheels
uv pip install --platform manylinux2014_x86_64 --only-binary=:all: \
    --python-version 3.13 --target build/ \
    -r requirements.txt

# Copy application code
cp app_uv.py build/

# Create deployment package
cd build && zip -r ../lambda-uv.zip . && cd ..

echo "✅ uv deployment package created: lambda-uv.zip"

uv with lock file for reproducible builds

Generate and use lock files to ensure exact dependency versions across all environments and team members.

#!/bin/bash

# Generate lock file for reproducible builds
uv lock

# Export to requirements.txt for Lambda
uv export --format requirements-txt --no-hashes > requirements.txt

# Create build directory
mkdir -p build/

# Install to build directory with Lambda-compatible wheels
uv pip install --platform manylinux2014_x86_64 --only-binary=:all: \
    --python-version 3.13 --target build/ \
    -r requirements.txt

# Copy application code
cp app_uv.py build/

# Create deployment package
cd build && zip -r ../lambda-uv-locked.zip . && cd ..

# Cleanup
rm requirements.txt

echo "✅ uv locked deployment package created: lambda-uv-locked.zip"

Cross-platform builds with uv

Build packages for different Lambda architectures using uv's platform-specific installation:

#!/bin/bash

# Resolve dependencies from pyproject.toml into a pinned requirements file
uv pip compile pyproject.toml -o requirements.txt

# Build for Lambda x86_64 (most common)
mkdir -p build-x86_64/
uv pip install --platform manylinux2014_x86_64 --only-binary=:all: \
    --python-version 3.13 --target build-x86_64/ \
    -r requirements.txt

# Build for Lambda ARM64 (Graviton2)
mkdir -p build-arm64/
uv pip install --platform manylinux2014_aarch64 --only-binary=:all: \
    --python-version 3.13 --target build-arm64/ \
    -r requirements.txt

# Copy application code to both builds
cp app_uv.py build-x86_64/
cp app_uv.py build-arm64/

# Create deployment packages
cd build-x86_64 && zip -r ../lambda-uv-x86_64.zip . && cd ..
cd build-arm64 && zip -r ../lambda-uv-arm64.zip . && cd ..

echo "✅ x86_64 package: lambda-uv-x86_64.zip"
echo "✅ ARM64 package: lambda-uv-arm64.zip"

uv performance advantages

| Feature | uv | pip | Benefit |
|---|---|---|---|
| Dependency resolution | Rust-based solver | Python-based | 10-100x faster |
| Parallel downloads | Built-in | Limited | Faster package installation |
| Lock file generation | uv lock | Requires pip-tools | Reproducible builds |
| Virtual environments | uv venv | Separate venv tool | Integrated workflow |
uv best practices for Lambda
  • Use uv lock for reproducible builds across environments
  • Leverage uv export to generate requirements.txt for deployment
  • Use the --frozen flag in CI/CD to ensure exact dependency versions (see the CI sketch below)
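
A minimal CI sketch of the last two points (assumes uv.lock is committed to the repository):

# Fail the build if uv.lock is out of date, then install exactly what it pins
uv sync --frozen

# Or export the locked set for the pip-style packaging shown above
uv export --frozen --format requirements-txt --no-hashes > requirements.txt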

Poetry

Poetry is a modern Python dependency manager that handles packaging, dependency resolution, and virtual environments. It uses lock files to ensure reproducible builds and provides excellent developer experience with semantic versioning.

Cross-platform compatibility

When building on non-Linux systems, use pip install with --platform manylinux2014_x86_64 and --only-binary=:all: flags after exporting requirements from Poetry. This ensures Lambda-compatible wheels are installed.

Setup Poetry

Prerequisites
  • Poetry 2.0+ required for optimal performance and latest features
  • Initialize a new project with poetry new my-lambda-project or poetry init in existing directory
  • Project name in pyproject.toml can be customized to match your preferences
  • See Poetry documentation for detailed project setup guide
[tool.poetry]
name = "lambda-powertools-app"
version = "0.1.0"
description = "Lambda function with Powertools"

[tool.poetry.dependencies]
python = "^3.10"
aws-lambda-powertools = {extras = ["all"], version = "^3.18.0"}
pydantic = "^2.10.0"
requests = "^2.32.0"

[tool.poetry.group.dev.dependencies]
pytest = "^8.0.0"
black = "^24.0.0"
mypy = "^1.8.0"

[tool.poetry.requires-plugins]
poetry-plugin-export = ">=1.8"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
from typing import Optional

from pydantic import BaseModel

from aws_lambda_powertools import Logger, Metrics, Tracer
from aws_lambda_powertools.event_handler import APIGatewayRestResolver
from aws_lambda_powertools.logging import correlation_paths
from aws_lambda_powertools.metrics import MetricUnit

logger = Logger()
tracer = Tracer()
metrics = Metrics()
app = APIGatewayRestResolver()


class UserModel(BaseModel):
    name: str
    email: str
    age: Optional[int] = None


@app.post("/users")
def create_user(user: UserModel):
    logger.info("Creating user", extra={"user": user.model_dump()})
    metrics.add_metric(name="UserCreated", unit=MetricUnit.Count, value=1)
    return {"message": f"User {user.name} created successfully", "user": user.model_dump()}


@app.get("/users")
def list_users():
    logger.info("Listing users")
    metrics.add_metric(name="UsersListed", unit=MetricUnit.Count, value=1)
    return {"users": [{"name": "John Doe", "email": "john@example.com", "age": 30}]}


@logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST)
@tracer.capture_lambda_handler
@metrics.log_metrics(capture_cold_start_metric=True)
def lambda_handler(event, context):
    return app.resolve(event, context)
#!/bin/bash

# Export requirements for Lambda
poetry export -f requirements.txt --output requirements.txt --without-hashes

# Create build directory
mkdir -p build/

# Install dependencies with Lambda-compatible wheels
pip install --platform manylinux2014_x86_64 --only-binary=:all: \
    --python-version 3.13 --target build/ \
    -r requirements.txt

# Copy application code
cp app_poetry.py build/

# Create deployment package
cd build && zip -r ../lambda-poetry.zip . && cd ..

# Cleanup
rm requirements.txt

echo "✅ Poetry deployment package created: lambda-poetry.zip"

For development or when cross-platform compatibility is not a concern:

#!/bin/bash

# Create build directory
mkdir -p build/

# Install dependencies directly to build directory using Poetry
# Note: This method may not handle cross-platform compatibility as well
poetry install --only=main --no-root

# Copy installed packages from virtual environment
VENV_PATH=$(poetry env info --path)
cp -r "$VENV_PATH/lib/python*/site-packages"/* build/

# Copy application code
cp app_poetry.py build/

# Create deployment package
cd build && zip -r ../lambda-poetry-native.zip . && cd ..

echo "✅ Poetry native deployment package created: lambda-poetry-native.zip"
echo "⚠️  Warning: This method may have cross-platform compatibility issues"

Cross-platform builds with Poetry

Build packages for different Lambda architectures by combining Poetry's dependency management with pip's platform-specific installation:

#!/bin/bash

# Export requirements for Lambda
poetry export -f requirements.txt --output requirements.txt --without-hashes

# Build for Lambda x86_64 (most common)
mkdir -p build-x86_64/
pip install --platform manylinux2014_x86_64 --only-binary=:all: \
    --python-version 3.13 --target build-x86_64/ \
    -r requirements.txt

# Build for Lambda ARM64 (Graviton2)
mkdir -p build-arm64/
pip install --platform manylinux2014_aarch64 --only-binary=:all: \
    --python-version 3.13 --target build-arm64/ \
    -r requirements.txt

# Copy application code to both builds
cp app_poetry.py build-x86_64/
cp app_poetry.py build-arm64/

# Create deployment packages
cd build-x86_64 && zip -r ../lambda-poetry-x86_64.zip . && cd ..
cd build-arm64 && zip -r ../lambda-poetry-arm64.zip . && cd ..

# Cleanup
rm requirements.txt

echo "✅ x86_64 package: lambda-poetry-x86_64.zip"
echo "✅ ARM64 package: lambda-poetry-arm64.zip"

Poetry build methods comparison

| Method | Cross-platform Safe | Speed | Reproducibility | Recommendation |
|---|---|---|---|---|
| Poetry + pip | ✅ Yes | Fast | High | ✅ Recommended |
| Poetry native | ❌ No | Fastest | Medium | ⚠️ Development only |
| Poetry + Docker | ✅ Yes | Slower | Highest | ✅ Complex dependencies |
Poetry best practices for Lambda
  • Always use poetry export to generate requirements.txt for deployment
  • Use the --without-hashes flag to avoid pip compatibility issues (a hash-checked alternative is sketched below)
  • Combine with pip install --platform for cross-platform builds
  • Keep poetry.lock in version control for reproducible builds
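
If you do want the hash-checked installs mentioned at the top of this guide, the export can keep its hashes and pip will verify them; a sketch:

# Keep hashes in the export and have pip enforce them during installation
poetry export -f requirements.txt --output requirements.txt
pip install --require-hashes --platform manylinux2014_x86_64 --only-binary=:all: \
    --python-version 3.13 --target build/ -r requirements.txt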

Poetry with Docker for consistent builds

Use Docker to ensure consistent builds across different development environments and avoid platform-specific dependency issues.

Dockerfile.poetry
# Public Lambda base image
FROM public.ecr.aws/lambda/python@sha256:7e7f098baa11a527fbe59f33f4ed032a36b6e87b22ea73da1175522095885f74

# Work from the Lambda task root so the extracted files match /var/task
WORKDIR ${LAMBDA_TASK_ROOT}

# Copy poetry files
COPY pyproject.toml poetry.lock ./

# Install poetry, export pinned dependencies, and install them into the task root
RUN pip install poetry && \
    poetry export -f requirements.txt --output /tmp/requirements.txt --without-hashes && \
    pip install -r /tmp/requirements.txt --target "${LAMBDA_TASK_ROOT}"

# Copy application code
COPY app_poetry.py ./

CMD ["app_poetry.lambda_handler"]
#!/bin/bash

# Build Docker image
docker build -t lambda-powertools-app -f Dockerfile.poetry .

# Create container and extract files
docker create --name temp-container lambda-powertools-app
docker cp temp-container:/var/task ./build
docker rm temp-container

# Create deployment package
cd build && zip -r ../lambda-docker.zip . && cd ..

echo "✅ Docker-based deployment package created: lambda-docker.zip"

SAM

AWS SAM (Serverless Application Model) is AWS's framework for building serverless applications using CloudFormation templates. It provides local testing capabilities, built-in best practices, and seamless integration with AWS services, making it the go-to choice for AWS-native serverless development.

SAM automatically resolves multi-architecture compatibility issues by building functions inside Lambda-compatible containers (--use-container flag), ensuring dependencies are installed with the correct architecture and glibc versions for the Lambda runtime environment. This eliminates the common problem of architecture mismatches when building on macOS/Windows.
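
The same container build also supports local testing before deployment; a short sketch (the event file path is illustrative):

# Build inside a Lambda-like container, then exercise the function locally
sam build --use-container
sam local invoke ApiFunction --event events/api-event.json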

Learn more at AWS SAM documentation.

SAM without Layers (All-in-one package)

Simple approach where all dependencies are packaged with the function code:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Globals:
  Function:
    Runtime: python3.13
    Timeout: 30
    MemorySize: 512
    Environment:
      Variables:
        POWERTOOLS_SERVICE_NAME: !Ref AWS::StackName
        POWERTOOLS_METRICS_NAMESPACE: MyApp
        POWERTOOLS_LOG_LEVEL: INFO

Resources:
  # Single Lambda Function with all dependencies included
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Handler: app_sam_no_layer.lambda_handler
      Events:
        ApiEvent:
          Type: Api
          Properties:
            Path: /{proxy+}
            Method: ANY
      Environment:
        Variables:
          POWERTOOLS_SERVICE_NAME: api-service

Outputs:
  ApiUrl:
    Description: API Gateway endpoint URL
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/"
aws-lambda-powertools[all]==3.18.0
pydantic==2.10.4
requests>=2.32.4
from typing import Optional

from pydantic import BaseModel

from aws_lambda_powertools import Logger, Metrics, Tracer
from aws_lambda_powertools.event_handler import APIGatewayRestResolver
from aws_lambda_powertools.logging import correlation_paths
from aws_lambda_powertools.metrics import MetricUnit

logger = Logger()
tracer = Tracer()
metrics = Metrics()
app = APIGatewayRestResolver()


class UserModel(BaseModel):
    name: str
    email: str
    age: Optional[int] = None


@app.get("/health")
def health_check():
    return {"status": "healthy", "service": "powertools-sam"}


@app.post("/users")
def create_user(user: UserModel):
    logger.info("Creating user", extra={"user": user.model_dump()})
    metrics.add_metric(name="UserCreated", unit=MetricUnit.Count, value=1)
    return {"message": f"User {user.name} created successfully"}


@logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST)
@tracer.capture_lambda_handler
@metrics.log_metrics(capture_cold_start_metric=True)
def lambda_handler(event, context):
    return app.resolve(event, context)
#!/bin/bash

echo "🏗️  Building SAM application without layers..."

# Build and deploy (SAM will handle dependency installation)
sam build --use-container
sam deploy --guided

echo "✅ SAM application deployed successfully (no layers)"

SAM with Layers (Optimized approach)

Optimized approach using Lambda Layers to separate dependencies from application code. This example demonstrates:

  • Public Powertools for AWS Lambda layer - Uses AWS-managed layer ARN for better performance and maintenance
  • Custom dependencies layer - Separates application-specific dependencies
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Globals:
  Function:
    Runtime: python3.13
    Timeout: 30
    MemorySize: 512
    Environment:
      Variables:
        POWERTOOLS_SERVICE_NAME: !Ref AWS::StackName
        POWERTOOLS_METRICS_NAMESPACE: MyApp
        POWERTOOLS_LOG_LEVEL: INFO

Resources:
  # Dependencies Layer (pydantic, requests, etc.)
  DependenciesLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: !Sub "${AWS::StackName}-dependencies"
      Description: Application dependencies
      ContentUri: layers/dependencies/
      CompatibleRuntimes:
        - python3.13
      RetentionPolicy: Delete

  # API Lambda Function (lightweight - only app code)
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/app/
      Handler: app_sam_layer.lambda_handler
      Layers:
        - arn:aws:lambda:us-east-1:017000801446:layer:AWSLambdaPowertoolsPythonV3-python313-x86_64:21
        - !Ref DependenciesLayer
      Events:
        ApiEvent:
          Type: Api
          Properties:
            Path: /{proxy+}
            Method: ANY
      Environment:
        Variables:
          POWERTOOLS_SERVICE_NAME: api-service

  # Background Worker Function
  WorkerFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/worker/
      Handler: worker_sam_layer.lambda_handler
      Layers:
        - arn:aws:lambda:us-east-1:017000801446:layer:AWSLambdaPowertoolsPythonV3-python313-x86_64:21
        - !Ref DependenciesLayer
      Events:
        SQSEvent:
          Type: SQS
          Properties:
            Queue: !GetAtt WorkerQueue.Arn
            BatchSize: 10
            FunctionResponseTypes:
              - ReportBatchItemFailures
      Environment:
        Variables:
          POWERTOOLS_SERVICE_NAME: worker-service

  # SQS Queue for worker
  WorkerQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: !Sub "${AWS::StackName}-worker-queue"
      VisibilityTimeout: 180

Outputs:
  ApiUrl:
    Description: API Gateway endpoint URL
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/"

  WorkerQueueUrl:
    Description: SQS Queue URL for worker
    Value: !Ref WorkerQueue
pydantic==2.10.4
requests>=2.32.4
from typing import Optional

import requests
from pydantic import BaseModel

from aws_lambda_powertools import Logger, Metrics, Tracer
from aws_lambda_powertools.event_handler import APIGatewayRestResolver
from aws_lambda_powertools.logging import correlation_paths
from aws_lambda_powertools.metrics import MetricUnit

logger = Logger()
tracer = Tracer()
metrics = Metrics()
app = APIGatewayRestResolver()


class UserModel(BaseModel):
    name: str
    email: str
    age: Optional[int] = None


@app.get("/health")
def health_check():
    return {"status": "healthy", "service": "powertools-sam-layers"}


@app.post("/users")
def create_user(user: UserModel):
    logger.info("Creating user", extra={"user": user.model_dump()})
    metrics.add_metric(name="UserCreated", unit=MetricUnit.Count, value=1)
    return {"message": f"User {user.name} created successfully"}


@app.get("/external")
@tracer.capture_method
def fetch_external_data():
    """Example using requests from dependencies layer"""
    response = requests.get("https://httpbin.org/json")
    data = response.json()

    metrics.add_metric(name="ExternalApiCalled", unit=MetricUnit.Count, value=1)
    return {"external_data": data}


@logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST)
@tracer.capture_lambda_handler
@metrics.log_metrics(capture_cold_start_metric=True)
def lambda_handler(event, context):
    return app.resolve(event, context)
from __future__ import annotations

import json
from typing import Any

from pydantic import BaseModel, ValidationError

from aws_lambda_powertools import Logger, Metrics, Tracer
from aws_lambda_powertools.utilities.batch import BatchProcessor, EventType, process_partial_response
from aws_lambda_powertools.utilities.typing import LambdaContext

logger = Logger()
tracer = Tracer()
metrics = Metrics()

# Initialize batch processor for SQS
processor = BatchProcessor(event_type=EventType.SQS)


class WorkerMessage(BaseModel):
    task_id: str
    task_type: str
    payload: dict


@tracer.capture_method
def record_handler(record):
    """Process individual SQS record"""
    try:
        # Parse and validate message
        message_data = json.loads(record.body)
        worker_message = WorkerMessage(**message_data)

        logger.info("Processing task", extra={"task_id": worker_message.task_id, "task_type": worker_message.task_type})

        # Simulate work based on task type
        if worker_message.task_type == "email":
            # Process email task
            logger.info("Sending email", extra={"task_id": worker_message.task_id})
        elif worker_message.task_type == "report":
            # Process report task
            logger.info("Generating report", extra={"task_id": worker_message.task_id})
        else:
            logger.warning("Unknown task type", extra={"task_type": worker_message.task_type})

        metrics.add_metric(name="TaskProcessed", unit="Count", value=1)
        metrics.add_metadata(key="task_type", value=worker_message.task_type)

        return {"status": "success", "task_id": worker_message.task_id}

    except ValidationError as e:
        logger.error("Invalid message format", extra={"error": str(e)})
        metrics.add_metric(name="TaskFailed", unit="Count", value=1)
        raise
    except Exception as e:
        logger.error("Task processing failed", extra={"error": str(e)})
        metrics.add_metric(name="TaskFailed", unit="Count", value=1)
        raise


@logger.inject_lambda_context
@tracer.capture_lambda_handler
@metrics.log_metrics
def lambda_handler(event: dict[str, Any], context: LambdaContext):
    """Process SQS messages using BatchProcessor"""

    return process_partial_response(
        event=event,
        record_handler=record_handler,
        processor=processor,
        context=context,
    )
version = 0.1

[default.global.parameters]
stack_name = "powertools-lambda-app"

[default.build.parameters]
cached = true
parallel = true

[default.deploy.parameters]
capabilities = "CAPABILITY_IAM"
confirm_changeset = true
resolve_s3 = true
region = "us-east-1"

[default.package.parameters]
resolve_s3 = true

[default.sync.parameters]
watch = true

[default.local_start_api.parameters]
warm_containers = "EAGER"

[default.local_start_lambda.parameters]
warm_containers = "EAGER"
#!/bin/bash

echo "🏗️  Building SAM application with layers..."

# Build Dependencies layer (Powertools uses public layer ARN)
echo "Building Dependencies layer..."
mkdir -p layers/dependencies/python
pip install pydantic requests -t layers/dependencies/python/

# Optimize layers (remove unnecessary files)
echo "Optimizing layers..."
find layers/ -name "*.pyc" -delete
find layers/ -name "__pycache__" -type d -exec rm -rf {} + 2>/dev/null || true
find layers/ -name "tests" -type d -exec rm -rf {} + 2>/dev/null || true
find layers/ -name "*.dist-info" -type d -exec rm -rf {} + 2>/dev/null || true

# Build and deploy
sam build --use-container
sam deploy --guided

echo "✅ SAM application with layers deployed successfully"

# Show layer sizes
echo ""
echo "📊 Layer sizes:"
echo "Powertools: Using public layer ARN (no local build needed)"
du -sh layers/dependencies/

Comparison: with vs without Layers

| Aspect | Without Layers | With Layers |
|---|---|---|
| Deployment Speed | Slower (uploads all deps each time) | Faster (layers cached, only app code changes) |
| Package Size | Larger function packages | Smaller function packages |
| Cold Start | Slightly faster (everything in one place) | Slightly slower (layer loading overhead) |
| Reusability | No sharing between functions | Layers shared across functions |
| Complexity | Simple, single package | More complex, multiple components |
| Best For | Single function, simple apps | Multiple functions, shared dependencies |

Advanced SAM with multiple environments

Configure different environments (dev, staging, prod) with environment-specific settings and layer references. This example demonstrates how to use parameters, mappings, and conditions to create flexible, multi-environment deployments.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Parameters:
  Environment:
    Type: String
    Default: dev
    AllowedValues: [dev, staging, prod]
    Description: Environment name

  LogLevel:
    Type: String
    Default: INFO
    AllowedValues: [DEBUG, INFO, WARNING, ERROR]
    Description: Log level for Lambda functions

Mappings:
  EnvironmentMap:
    dev:
      MemorySize: 256
      Timeout: 30
    staging:
      MemorySize: 512
      Timeout: 60
    prod:
      MemorySize: 1024
      Timeout: 120

Globals:
  Function:
    Runtime: python3.13
    MemorySize: !FindInMap [EnvironmentMap, !Ref Environment, MemorySize]
    Timeout: !FindInMap [EnvironmentMap, !Ref Environment, Timeout]
    Environment:
      Variables:
        ENVIRONMENT: !Ref Environment
        POWERTOOLS_SERVICE_NAME: !Sub "${AWS::StackName}-${Environment}"
        POWERTOOLS_METRICS_NAMESPACE: !Sub "MyApp/${Environment}"
        POWERTOOLS_LOG_LEVEL: !Ref LogLevel
        POWERTOOLS_DEV: !If [IsDev, "true", "false"]

Conditions:
  IsDev: !Equals [!Ref Environment, "dev"]
  IsProd: !Equals [!Ref Environment, "prod"]

Resources:
  # Dependencies Layer for application dependencies
  DependenciesLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: !Sub "${AWS::StackName}-${Environment}-dependencies"
      Description: !Sub "Application dependencies for ${Environment}"
      ContentUri: layers/dependencies/
      CompatibleRuntimes:
        - python3.13
      RetentionPolicy: Delete

  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Handler: app.lambda_handler
      Layers:
        - arn:aws:lambda:us-east-1:017000801446:layer:AWSLambdaPowertoolsPythonV3-python313-x86_64:1
        - !Ref DependenciesLayer
      Events:
        ApiEvent:
          Type: Api
          Properties:
            Path: /{proxy+}
            Method: ANY
      Environment:
        Variables:
          TABLE_NAME: !Ref DynamoTable

  DynamoTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: !Sub "${AWS::StackName}-${Environment}-data"
      BillingMode: !If [IsProd, "PROVISIONED", "PAY_PER_REQUEST"]
      AttributeDefinitions:
        - AttributeName: pk
          AttributeType: S
      KeySchema:
        - AttributeName: pk
          KeyType: HASH
      ProvisionedThroughput: !If
        - IsProd
        - ReadCapacityUnits: 5
          WriteCapacityUnits: 5
        - !Ref AWS::NoValue
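
With these parameters, one template serves every environment; a sketch (stack names and values are illustrative):

# Deploy the shared template once per environment
sam build --use-container
sam deploy --stack-name myapp-dev --parameter-overrides Environment=dev LogLevel=DEBUG
sam deploy --stack-name myapp-prod --parameter-overrides Environment=prod LogLevel=INFO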

CDK

The AWS CDK (Cloud Development Kit) allows you to define cloud infrastructure using familiar programming languages like Python, TypeScript, or Java. It provides type safety, IDE support, and the ability to create reusable constructs, making it perfect for complex infrastructure requirements and teams that prefer code over YAML.

Learn more at AWS CDK documentation.

Basic CDK setup with Python

CDK uses the concept of Apps, Stacks, and Constructs to organize infrastructure. A CDK app contains one or more stacks, and each stack contains constructs that represent AWS resources.

Project structure

my-lambda-cdk/
├── app.py                 # CDK app entry point
├── cdk.json              # CDK configuration
├── requirements.txt      # CDK dependencies
├── src/
│   └── lambda_function.py # Lambda function code
└── stacks/
    └── lambda_stack.py   # Stack definition (optional)

Key CDK concepts for Lambda

| Concept | Description | Lambda Usage |
|---|---|---|
| App | Root construct, contains stacks | Entry point for your Lambda infrastructure |
| Stack | Unit of deployment | Groups related Lambda functions and resources |
| Construct | Reusable cloud component | Lambda function, API Gateway, DynamoDB table |
| Asset | Local files bundled with deployment | Lambda function code, layers |

Prerequisites

Before starting, ensure you have:

#!/bin/bash
# Install AWS CDK CLI
npm install -g aws-cdk

# Verify installation
cdk --version

# Bootstrap CDK in your AWS account (one-time setup)
cdk bootstrap aws://ACCOUNT-ID/REGION

Basic implementation

#!/usr/bin/env python3
import aws_cdk as cdk
from aws_cdk import (
    Duration,
    Stack,
)
from aws_cdk import (
    aws_apigateway as apigateway,
)
from aws_cdk import (
    aws_lambda as _lambda,
)
from aws_cdk import (
    aws_logs as logs,
)
from constructs import Construct


class PowertoolsLambdaStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Use public Powertools layer
        powertools_layer = _lambda.LayerVersion.from_layer_version_arn(
            self,
            "PowertoolsLayer",
            layer_version_arn="arn:aws:lambda:us-east-1:017000801446:layer:AWSLambdaPowertoolsPythonV3-python313-x86_64:1",
        )

        # Lambda Function
        api_function = _lambda.Function(
            self,
            "ApiFunction",
            runtime=_lambda.Runtime.PYTHON_3_13,
            handler="lambda_function.lambda_handler",
            code=_lambda.Code.from_asset("src"),
            layers=[powertools_layer],
            timeout=Duration.seconds(30),
            memory_size=512,
            environment={
                "POWERTOOLS_SERVICE_NAME": "api-service",
                "POWERTOOLS_METRICS_NAMESPACE": "MyApp",
                "POWERTOOLS_LOG_LEVEL": "INFO",
            },
            log_retention=logs.RetentionDays.ONE_WEEK,
        )

        # API Gateway
        api = apigateway.RestApi(
            self,
            "ApiGateway",
            rest_api_name="Powertools API",
            description="API powered by Lambda with Powertools",
        )

        # API Integration
        integration = apigateway.LambdaIntegration(api_function)
        api.root.add_proxy(
            default_integration=integration,
            any_method=True,
        )

        # Outputs
        cdk.CfnOutput(
            self,
            "ApiUrl",
            value=api.url,
            description="API Gateway URL",
        )


app = cdk.App()
PowertoolsLambdaStack(app, "PowertoolsLambdaStack")
app.synth()
{
  "app": "python app.py",
  "watch": {
    "include": [
      "**"
    ],
    "exclude": [
      "README.md",
      "cdk*.json",
      "requirements*.txt",
      "source.bat",
      "**/__pycache__",
      "**/.venv"
    ]
  },
  "context": {
    "@aws-cdk/aws-lambda:recognizeLayerVersion": true,
    "@aws-cdk/core:checkSecretUsage": true,
    "@aws-cdk/core:target-partitions": ["aws", "aws-cn"],
    "@aws-cdk-containers/ecs-service-extensions:enableDefaultLogDriver": true,
    "@aws-cdk/aws-ec2:uniqueImdsv2TemplateName": true,
    "@aws-cdk/aws-ecs:arnFormatIncludesClusterName": true,
    "@aws-cdk/core:validateSnapshotRemovalPolicy": true,
    "@aws-cdk/aws-codepipeline:crossAccountKeyAliasStackSafeResourceName": true,
    "@aws-cdk/aws-s3:createDefaultLoggingPolicy": true,
    "@aws-cdk/aws-sns-subscriptions:restrictSqsDescryption": true,
    "@aws-cdk/aws-apigateway:disableCloudWatchRole": true,
    "@aws-cdk/core:enablePartitionLiterals": true,
    "@aws-cdk/aws-events:eventsTargetQueueSameAccount": true,
    "@aws-cdk/aws-iam:minimizePolicies": true,
    "@aws-cdk/core:validateSnapshotRemovalPolicy": true,
    "@aws-cdk/aws-codepipeline:crossAccountKeysDefaultValueToFalse": true,
    "@aws-cdk/aws-s3:serverAccessLogsUseBucketPolicy": true,
    "@aws-cdk/aws-route53-patters:useCertificate": true,
    "@aws-cdk/customresources:installLatestAwsSdkDefault": false
  }
}
aws-cdk-lib>=2.100.0
constructs>=10.0.0
from aws_lambda_powertools import Logger, Metrics, Tracer
from aws_lambda_powertools.event_handler import APIGatewayRestResolver
from aws_lambda_powertools.logging import correlation_paths
from aws_lambda_powertools.metrics import MetricUnit

logger = Logger()
tracer = Tracer()
metrics = Metrics()
app = APIGatewayRestResolver()


@app.get("/health")
def health_check():
    return {"status": "healthy", "service": "powertools-cdk"}


@app.get("/metrics")
def get_metrics():
    metrics.add_metric(name="MetricsEndpointCalled", unit=MetricUnit.Count, value=1)
    return {"message": "Metrics recorded"}


@logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST)
@tracer.capture_lambda_handler
@metrics.log_metrics(capture_cold_start_metric=True)
def lambda_handler(event, context):
    return app.resolve(event, context)
#!/bin/bash

echo "🏗️  Building CDK application..."

# Install CDK dependencies
pip install -r requirements.txt

# Bootstrap CDK (first time only)
# cdk bootstrap

# Deploy stack
cdk deploy --require-approval never

echo "✅ CDK application deployed successfully"

CDK bundling options

CDK provides several ways to handle Lambda function dependencies:

| Method | Description | Best For |
|---|---|---|
| Inline bundling | CDK bundles dependencies automatically | Simple functions with few dependencies |
| Docker bundling | Uses Docker for consistent builds | Complex dependencies, cross-platform builds |
| Pre-built assets | Upload pre-packaged ZIP files | Custom build processes, CI/CD integration |
| Lambda Layers | Separate dependencies from code | Shared dependencies across functions |
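
For the Docker bundling row, CDK can install dependencies inside a Lambda-compatible image at synth time. A minimal sketch (assumes src/ contains a requirements.txt next to the handler):

from aws_cdk import BundlingOptions
from aws_cdk import aws_lambda as _lambda

# Install dependencies inside the runtime's bundling image; whatever lands in
# /asset-output becomes the function's code asset
bundled_code = _lambda.Code.from_asset(
    "src",
    bundling=BundlingOptions(
        image=_lambda.Runtime.PYTHON_3_13.bundling_image,
        command=[
            "bash",
            "-c",
            "pip install -r requirements.txt -t /asset-output && cp -au . /asset-output",
        ],
    ),
)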

Common CDK commands

#!/bin/bash
# Install Python dependencies
pip install -r requirements.txt

# Synthesize CloudFormation template
cdk synth

# Deploy stack
cdk deploy

# Deploy specific stack
cdk deploy MyLambdaStack

# Destroy stack
cdk destroy

# List all stacks
cdk list

# Compare deployed stack with current state
cdk diff

Advanced CDK with multiple stacks

Multi-environment CDK setup with separate stacks, DynamoDB integration, and SQS message processing using BatchProcessor.

from aws_cdk import (
    Duration,
    RemovalPolicy,
    Stack,
)
from aws_cdk import (
    aws_apigateway as apigateway,
)
from aws_cdk import (
    aws_dynamodb as dynamodb,
)
from aws_cdk import (
    aws_lambda as _lambda,
)
from aws_cdk import (
    aws_lambda_event_sources as lambda_event_sources,
)
from aws_cdk import (
    aws_sqs as sqs,
)
from constructs import Construct


class PowertoolsStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, environment: str = "dev", **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        self.env = environment

        # Shared Powertools Layer (using public layer)
        self.powertools_layer = self._create_powertools_layer()

        # DynamoDB Table
        self.table = self._create_dynamodb_table()

        # SQS Queue
        self.queue = self._create_sqs_queue()

        # Lambda Functions
        self.api_function = self._create_api_function()
        self.worker_function = self._create_worker_function()

        # API Gateway
        self.api = self._create_api_gateway()

    def _create_powertools_layer(self) -> _lambda.ILayerVersion:
        return _lambda.LayerVersion.from_layer_version_arn(
            self,
            "PowertoolsLayer",
            layer_version_arn="arn:aws:lambda:us-east-1:017000801446:layer:AWSLambdaPowertoolsPythonV3-python313-x86_64:1",
        )

    def _create_dynamodb_table(self) -> dynamodb.Table:
        return dynamodb.Table(
            self,
            "DataTable",
            table_name=f"powertools-{self.env}-data",
            partition_key=dynamodb.Attribute(name="pk", type=dynamodb.AttributeType.STRING),
            billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST,
            removal_policy=RemovalPolicy.DESTROY if self.env != "prod" else RemovalPolicy.RETAIN,
        )

    def _create_sqs_queue(self) -> sqs.Queue:
        return sqs.Queue(
            self,
            "WorkerQueue",
            queue_name=f"powertools-{self.env}-worker",
            visibility_timeout=Duration.seconds(180),
        )

    def _create_api_function(self) -> _lambda.Function:
        function = _lambda.Function(
            self,
            "ApiFunction",
            runtime=_lambda.Runtime.PYTHON_3_13,
            handler="app.lambda_handler",
            code=_lambda.Code.from_asset("src/app"),
            layers=[self.powertools_layer],
            timeout=Duration.seconds(30),
            memory_size=512 if self.env == "prod" else 256,
            environment={
                "ENVIRONMENT": self.env,
                "POWERTOOLS_SERVICE_NAME": f"app-{self.env}",
                "POWERTOOLS_METRICS_NAMESPACE": f"MyApp/{self.env}",
                "POWERTOOLS_LOG_LEVEL": "INFO" if self.env == "prod" else "DEBUG",
                "TABLE_NAME": self.table.table_name,
                "QUEUE_URL": self.queue.queue_url,
            },
        )

        # Grant permissions
        self.table.grant_read_write_data(function)
        self.queue.grant_send_messages(function)

        return function

    def _create_worker_function(self) -> _lambda.Function:
        function = _lambda.Function(
            self,
            "WorkerFunction",
            runtime=_lambda.Runtime.PYTHON_3_13,
            handler="worker.lambda_handler",
            code=_lambda.Code.from_asset("src/worker"),
            layers=[self.powertools_layer],
            timeout=Duration.seconds(120),
            memory_size=1024 if self.env == "prod" else 512,
            environment={
                "ENVIRONMENT": self.env,
                "POWERTOOLS_SERVICE_NAME": f"worker-{self.env}",
                "POWERTOOLS_METRICS_NAMESPACE": f"MyApp/{self.env}",
                "POWERTOOLS_LOG_LEVEL": "INFO" if self.env == "prod" else "DEBUG",
                "TABLE_NAME": self.table.table_name,
            },
        )

        # Add SQS event source with partial failure support
        function.add_event_source(
            lambda_event_sources.SqsEventSource(
                self.queue,
                batch_size=10,
                report_batch_item_failures=True,
            ),
        )

        # Grant permissions
        self.table.grant_read_write_data(function)

        return function

    def _create_api_gateway(self) -> apigateway.RestApi:
        api = apigateway.RestApi(
            self,
            "ApiGateway",
            rest_api_name=f"Powertools API - {self.env}",
            description=f"API for {self.env} environment",
        )

        integration = apigateway.LambdaIntegration(self.api_function)
        api.root.add_proxy(
            default_integration=integration,
            any_method=True,
        )

        return api
{
  "app": "python app_multi_stack.py",
  "watch": {
    "include": [
      "**"
    ],
    "exclude": [
      "README.md",
      "cdk*.json",
      "requirements*.txt",
      "source.bat",
      "**/__pycache__",
      "**/.venv"
    ]
  },
  "context": {
    "@aws-cdk/aws-lambda:recognizeLayerVersion": true,
    "@aws-cdk/core:checkSecretUsage": true,
    "@aws-cdk/core:target-partitions": ["aws", "aws-cn"],
    "@aws-cdk-containers/ecs-service-extensions:enableDefaultLogDriver": true,
    "@aws-cdk/aws-ec2:uniqueImdsv2TemplateName": true,
    "@aws-cdk/aws-ecs:arnFormatIncludesClusterName": true,
    "@aws-cdk/core:validateSnapshotRemovalPolicy": true,
    "@aws-cdk/aws-codepipeline:crossAccountKeyAliasStackSafeResourceName": true,
    "@aws-cdk/aws-s3:createDefaultLoggingPolicy": true,
    "@aws-cdk/aws-sns-subscriptions:restrictSqsDescryption": true,
    "@aws-cdk/aws-apigateway:disableCloudWatchRole": true,
    "@aws-cdk/core:enablePartitionLiterals": true,
    "@aws-cdk/aws-events:eventsTargetQueueSameAccount": true,
    "@aws-cdk/aws-iam:minimizePolicies": true,
    "@aws-cdk/core:validateSnapshotRemovalPolicy": true,
    "@aws-cdk/aws-codepipeline:crossAccountKeysDefaultValueToFalse": true,
    "@aws-cdk/aws-s3:serverAccessLogsUseBucketPolicy": true,
    "@aws-cdk/aws-route53-patters:useCertificate": true,
    "@aws-cdk/customresources:installLatestAwsSdkDefault": false
  }
}
#!/usr/bin/env python3
import aws_cdk as cdk
from stacks.powertools_cdk_stack import PowertoolsStack

app = cdk.App()

# Get environment from context or default to dev
environment = app.node.try_get_context("environment") or "dev"

# Create stack for the specified environment
PowertoolsStack(
    app,
    f"PowertoolsStack-{environment}",
    environment=environment,
    env=cdk.Environment(
        account=app.node.try_get_context("account"),
        region=app.node.try_get_context("region") or "us-east-1",
    ),
)

app.synth()
import os

import boto3

from aws_lambda_powertools import Logger, Metrics, Tracer
from aws_lambda_powertools.event_handler import APIGatewayRestResolver
from aws_lambda_powertools.logging import correlation_paths
from aws_lambda_powertools.metrics import MetricUnit

logger = Logger()
tracer = Tracer()
metrics = Metrics()
app = APIGatewayRestResolver()

# Initialize AWS clients
dynamodb = boto3.resource("dynamodb")
sqs = boto3.client("sqs")

table = dynamodb.Table(os.environ["TABLE_NAME"])
queue_url = os.environ["QUEUE_URL"]


@app.get("/health")
def health_check():
    return {"status": "healthy", "service": "powertools-cdk-api"}


@app.post("/tasks")
@tracer.capture_method
def create_task():
    task_data = app.current_event.json_body

    # Store in DynamoDB
    table.put_item(Item={"pk": task_data["task_id"], "task_type": task_data["task_type"], "status": "pending"})

    # Send to SQS for processing
    sqs.send_message(QueueUrl=queue_url, MessageBody=app.current_event.body)

    metrics.add_metric(name="TaskCreated", unit=MetricUnit.Count, value=1)
    logger.info("Task created", extra={"task_id": task_data["task_id"]})

    return {"message": "Task created successfully", "task_id": task_data["task_id"]}


@logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST)
@tracer.capture_lambda_handler
@metrics.log_metrics(capture_cold_start_metric=True)
def lambda_handler(event, context):
    return app.resolve(event, context)
from __future__ import annotations

import json
import os
from typing import Any

import boto3

from aws_lambda_powertools import Logger, Metrics, Tracer
from aws_lambda_powertools.utilities.batch import BatchProcessor, EventType, process_partial_response
from aws_lambda_powertools.utilities.typing import LambdaContext

logger = Logger()
tracer = Tracer()
metrics = Metrics()

# Initialize batch processor for SQS
processor = BatchProcessor(event_type=EventType.SQS)

# Initialize AWS clients
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ["TABLE_NAME"])


@tracer.capture_method
def record_handler(record):
    """Process individual SQS record"""
    try:
        # Parse message
        message_data = json.loads(record.body)
        task_id = message_data["task_id"]
        task_type = message_data["task_type"]

        logger.info("Processing task", extra={"task_id": task_id, "task_type": task_type})

        # Update task status in DynamoDB
        table.update_item(
            Key={"pk": task_id},
            UpdateExpression="SET #status = :status",
            ExpressionAttributeNames={"#status": "status"},
            ExpressionAttributeValues={":status": "processing"},
        )

        # Simulate work based on task type
        if task_type == "email":
            logger.info("Sending email", extra={"task_id": task_id})
        elif task_type == "report":
            logger.info("Generating report", extra={"task_id": task_id})
        else:
            logger.warning("Unknown task type", extra={"task_type": task_type})

        # Mark as completed
        table.update_item(
            Key={"pk": task_id},
            UpdateExpression="SET #status = :status",
            ExpressionAttributeNames={"#status": "status"},
            ExpressionAttributeValues={":status": "completed"},
        )

        metrics.add_metric(name="TaskProcessed", unit="Count", value=1)
        metrics.add_metadata(key="task_type", value=task_type)

        return {"status": "success", "task_id": task_id}

    except Exception as e:
        logger.error("Task processing failed", extra={"error": str(e)})
        metrics.add_metric(name="TaskFailed", unit="Count", value=1)
        raise


@logger.inject_lambda_context
@tracer.capture_lambda_handler
@metrics.log_metrics
def lambda_handler(event: dict[str, Any], context: LambdaContext):
    """Process SQS messages using BatchProcessor"""

    return process_partial_response(
        event=event,
        record_handler=record_handler,
        processor=processor,
        context=context,
    )
#!/bin/bash

# Deploy to different environments
environments=("dev" "staging" "prod")

for env in "${environments[@]}"; do
    echo "🚀 Deploying to $env environment..."

    cdk deploy PowertoolsStack-$env \
        --context environment=$env \
        --require-approval never

    echo "✅ $env deployment completed"
done

Pants

Pants is a powerful build system designed for large codebases and monorepos. It provides incremental builds, dependency inference, and advanced caching mechanisms. Ideal for organizations with complex Python projects that need fine-grained build control and optimization.

Setup

[GLOBAL]
pants_version = "2.21.0"
backend_packages = [
    "pants.backend.python",
    "pants.backend.python.lint.black",
    "pants.backend.python.lint.flake8",
    "pants.backend.python.typecheck.mypy",
]

[python]
interpreter_constraints = [">=3.9,<3.14"]

[python-infer]
use_rust_parser = true

[source]
root_patterns = ["/"]
python_sources(
    name="lambda_sources",
    sources=["*.py"],
)

python_requirement(
    name="aws-lambda-powertools",
    requirements=["aws-lambda-powertools[all]==3.18.0"],
)

python_requirement(
    name="pydantic",
    requirements=["pydantic==2.10.4"],
)

python_requirement(
    name="requests",
    requirements=["requests>=2.32.4"],
)

pex_binary(
    name="lambda_function",
    entry_point="app.py:lambda_handler",
    dependencies=[
        ":lambda_sources",
        ":aws-lambda-powertools",
        ":pydantic",
        ":requests",
    ],
    platforms=["linux_x86_64-cp-39-cp39"],
    # Bundle the PEX tools so the build script can unpack the PEX into a venv
    include_tools=True,
)
from __future__ import annotations

from typing import Any

import requests
from pydantic import BaseModel

from aws_lambda_powertools import Logger, Metrics, Tracer
from aws_lambda_powertools.event_handler import APIGatewayRestResolver
from aws_lambda_powertools.utilities.typing import LambdaContext

logger = Logger()
tracer = Tracer()
metrics = Metrics()
app = APIGatewayRestResolver()


class TodoItem(BaseModel):
    id: int
    title: str
    completed: bool = False
    user_id: int | None = None


@app.get("/todos")
@tracer.capture_method
def get_todos() -> TodoItem:
    """Fetch todos from external API"""
    logger.info("Fetching todos from external API")

    response = requests.get("https://jsonplaceholder.typicode.com/todos")
    response.raise_for_status()

    return response.json()[0]


@logger.inject_lambda_context
@tracer.capture_lambda_handler
@metrics.log_metrics
def lambda_handler(event: dict[str, Any], context: LambdaContext):
    return app.resolve(event, context)
#!/bin/bash

# Build the PEX binary
pants package :lambda_function

# The PEX file is created in dist/
# Rename it to a more descriptive name
mv dist/lambda_function.pex lambda-pants.pex

# Unpack the PEX into a venv using the bundled PEX tools
# (requires include_tools=True on the pex_binary target)
PEX_TOOLS=1 python lambda-pants.pex venv build-venv/

# Package the venv's site-packages, which contain both sources and dependencies
cd build-venv/lib/python*/site-packages
zip -r ../../../../lambda-pants.zip .
cd - > /dev/null

echo "✅ Pants deployment package created: lambda-pants.zip"
echo "✅ Pants PEX binary created: lambda-pants.pex"

Advanced Pants with multiple targets

Pants excels at managing complex projects with multiple Lambda functions that share dependencies. This approach provides significant benefits for monorepo architectures and microservices.

# Shared dependencies
python_requirement(
    name="powertools",
    requirements=["aws-lambda-powertools[all]==3.18.0"],
)

# API Lambda function
python_sources(
    name="api_sources",
    sources=["api/*.py"],
)

pex_binary(
    name="api_lambda",
    entry_point="api/handler.py:lambda_handler",
    dependencies=[":api_sources", ":powertools"],
    platforms=["linux_x86_64-cp-39-cp39"],
    # Bundle the PEX tools so the build script can unpack the PEX into a venv
    include_tools=True,
)

# Worker Lambda function
python_sources(
    name="worker_sources",
    sources=["worker/*.py"],
)

pex_binary(
    name="worker_lambda",
    entry_point="worker/handler.py:lambda_handler",
    dependencies=[":worker_sources", ":powertools"],
    platforms=["linux_x86_64-cp-39-cp39"],
    include_tools=True,
)
from aws_lambda_powertools import Logger, Metrics, Tracer
from aws_lambda_powertools.event_handler import APIGatewayRestResolver
from aws_lambda_powertools.logging import correlation_paths
from aws_lambda_powertools.metrics import MetricUnit

logger = Logger()
tracer = Tracer()
metrics = Metrics()
app = APIGatewayRestResolver()


@app.get("/health")
def health_check():
    return {"status": "healthy", "service": "powertools-pants-api"}


@app.get("/metrics")
def get_metrics():
    metrics.add_metric(name="MetricsEndpointCalled", unit=MetricUnit.Count, value=1)
    return {"message": "Metrics recorded"}


@app.post("/tasks")
def create_task():
    task_data = app.current_event.json_body
    logger.info("Task created", extra={"task": task_data})
    metrics.add_metric(name="TaskCreated", unit=MetricUnit.Count, value=1)
    return {"message": "Task created successfully", "task_id": task_data.get("id")}


@logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST)
@tracer.capture_lambda_handler
@metrics.log_metrics(capture_cold_start_metric=True)
def lambda_handler(event, context):
    return app.resolve(event, context)
from __future__ import annotations

import json
from typing import Any

from aws_lambda_powertools import Logger, Metrics, Tracer
from aws_lambda_powertools.utilities.batch import BatchProcessor, EventType, process_partial_response
from aws_lambda_powertools.utilities.typing import LambdaContext

logger = Logger()
tracer = Tracer()
metrics = Metrics()

# Initialize batch processor for SQS
processor = BatchProcessor(event_type=EventType.SQS)


@tracer.capture_method
def record_handler(record):
    """Process individual SQS record"""
    try:
        # Parse message
        message_data = json.loads(record.body)
        task_id = message_data.get("task_id", "unknown")
        task_type = message_data.get("task_type", "default")

        logger.info("Processing task", extra={"task_id": task_id, "task_type": task_type})

        # Simulate work based on task type
        if task_type == "email":
            logger.info("Sending email", extra={"task_id": task_id})
        elif task_type == "report":
            logger.info("Generating report", extra={"task_id": task_id})
        else:
            logger.info("Processing default task", extra={"task_id": task_id})

        metrics.add_metric(name="TaskProcessed", unit="Count", value=1)
        metrics.add_metadata(key="task_type", value=task_type)

        return {"status": "success", "task_id": task_id}

    except Exception as e:
        logger.error("Task processing failed", extra={"error": str(e)})
        metrics.add_metric(name="TaskFailed", unit="Count", value=1)
        raise


@logger.inject_lambda_context
@tracer.capture_lambda_handler
@metrics.log_metrics
def lambda_handler(event: dict[str, Any], context: LambdaContext):
    """Process SQS messages using BatchProcessor"""

    return process_partial_response(
        event=event,
        record_handler=record_handler,
        processor=processor,
        context=context,
    )
#!/bin/bash

# Build all Lambda functions
pants package ::

# Process each Lambda function
for pex_file in dist/*.pex; do
    base_name=$(basename "$pex_file" .pex)

    # Unpack the PEX into a venv using the bundled PEX tools
    # (requires include_tools=True on the pex_binary targets)
    PEX_TOOLS=1 python "$pex_file" venv "build/$base_name"

    # Package the venv's site-packages as the deployment zip
    (cd build/"$base_name"/lib/python*/site-packages && zip -r "$OLDPWD/$base_name.zip" .)

    echo "✅ Created: $base_name.zip"
done