# Powertools for AWS Lambda (.NET)

Powertools for AWS Lambda (.NET) is a developer toolkit to implement Serverless best practices and increase developer velocity. It provides a suite of utilities for AWS Lambda functions that makes tracing with AWS X-Ray, structured logging, and creating custom metrics asynchronously easier.

# Project Overview

Powertools for AWS Lambda (.NET) (referred to from here on as Powertools) is a suite of utilities for [AWS Lambda](https://aws.amazon.com/lambda/) functions that eases the adoption of best practices such as tracing, structured logging, custom metrics, and more.

Info

**Supports .NET 6 and .NET 8 runtimes**

Tip

Powertools is also available for [Python](https://docs.powertools.aws.dev/lambda/python/), [Java](https://docs.powertools.aws.dev/lambda/java/), and [TypeScript](https://docs.powertools.aws.dev/lambda/typescript/latest/).

Support this project by becoming a reference customer or sharing your work:

1. [**Become a reference customer**](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/new?assignees=&labels=customer-reference&template=support_powertools.yml&title=%5BSupport+Lambda+Powertools%5D%3A+%3Cyour+organization+name%3E). This gives us permission to list your company in our documentation.
2. [**Share your work**](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/new?assignees=&labels=community-content&template=share_your_work.yml&title=%5BI+Made+This%5D%3A+%3CTITLE%3E). Blog posts, videos, and sample projects where you used Powertools!

## Features

Core utilities such as Tracing, Logging, and Metrics are available across all Powertools for AWS Lambda languages. Additional utilities are specific to each language ecosystem and customer demand.
| Utility | Description |
| --- | --- |
| [Tracing](core/tracing/) | Decorators and utilities to trace Lambda function handlers, and both synchronous and asynchronous functions |
| [Logger](core/logging/) | Structured logging made easier, and a decorator to enrich structured logging with key Lambda context details |
| [Metrics](core/metrics/) | Custom AWS metrics created asynchronously via CloudWatch Embedded Metric Format (EMF) |
| [Parameters](./utilities/parameters/) | High-level functionality to retrieve one or multiple parameter values from [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html), [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/), or [Amazon DynamoDB](https://aws.amazon.com/dynamodb/). We also provide extensibility to bring your own providers. |
| [Idempotency](./utilities/idempotency/) | Converts your Lambda functions into idempotent operations that are safe to retry |
| [Batch Processing](./utilities/batch-processing/) | Handles partial failures when processing batches from Amazon SQS, Amazon Kinesis Data Streams, and Amazon DynamoDB Streams |

## Install

Powertools for AWS Lambda (.NET) is available as NuGet packages. You can install the packages from the NuGet Gallery or from the Visual Studio editor. Search `AWS.Lambda.Powertools*` to see the various utilities available.
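After installing the packages, the core utilities are typically wired up with attributes on the Lambda handler. A minimal sketch of combining Tracing, Logging, and Metrics follows; the namespace, service, and metric names are illustrative placeholders, not values from any template:

```csharp
using Amazon.Lambda.Core;               // ILambdaContext
using AWS.Lambda.Powertools.Logging;    // [Logging] attribute and static Logger
using AWS.Lambda.Powertools.Metrics;    // [Metrics] attribute and static Metrics
using AWS.Lambda.Powertools.Tracing;    // [Tracing] attribute

public class Function
{
    // "ExampleApp" / "ExampleService" are placeholder names for this sketch.
    [Logging(LogEvent = true)]                                      // structured JSON logs; also logs the incoming event
    [Metrics(Namespace = "ExampleApp", Service = "ExampleService")] // EMF metrics flushed at the end of the invocation
    [Tracing]                                                       // X-Ray subsegment around the handler
    public string FunctionHandler(string input, ILambdaContext context)
    {
        Logger.LogInformation("Processing input of length {Length}", input.Length);
        Metrics.AddMetric("SuccessfulInvocation", 1, MetricUnit.Count);
        return input.ToUpperInvariant();
    }
}
```

Each attribute is independent, so you only need the packages for the utilities you actually use.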
- [AWS.Lambda.Powertools.Tracing](https://www.nuget.org/packages?q=AWS.Lambda.Powertools.Tracing): `dotnet add package AWS.Lambda.Powertools.Tracing`
- [AWS.Lambda.Powertools.Logging](https://www.nuget.org/packages?q=AWS.Lambda.Powertools.Logging): `dotnet add package AWS.Lambda.Powertools.Logging`
- [AWS.Lambda.Powertools.Metrics](https://www.nuget.org/packages?q=AWS.Lambda.Powertools.Metrics): `dotnet add package AWS.Lambda.Powertools.Metrics`
- [AWS.Lambda.Powertools.Parameters](https://www.nuget.org/packages?q=AWS.Lambda.Powertools.Parameters): `dotnet add package AWS.Lambda.Powertools.Parameters`
- [AWS.Lambda.Powertools.Idempotency](https://www.nuget.org/packages?q=AWS.Lambda.Powertools.Idempotency): `dotnet add package AWS.Lambda.Powertools.Idempotency`
- [AWS.Lambda.Powertools.BatchProcessing](https://www.nuget.org/packages?q=AWS.Lambda.Powertools.BatchProcessing): `dotnet add package AWS.Lambda.Powertools.BatchProcessing`

### Using the SAM CLI template

We provide a custom template for the AWS Serverless Application Model (AWS SAM) command-line interface (CLI). It generates a starter project and lets you interactively choose which Powertools for AWS Lambda (.NET) features to include in your project.

To use the SAM CLI, you need the following tools:

- SAM CLI - [Install the SAM CLI](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html)
- .NET 6.0 (LTS) - [Install .NET 6.0](https://www.microsoft.com/net/download)
- Docker - [Install Docker community edition](https://hub.docker.com/search/?type=edition&offering=community)

Once you have the SAM CLI installed, follow these steps to initialize a .NET 6 project using Powertools for AWS Lambda (.NET):

1. Run the following command in your command line:

```
sam init -r dotnet6
```

2. Select option 1 as your template source:

```
Which template source would you like to use?
1 - AWS Quick Start Templates
2 - Custom Template Location
```

3. Select the `Hello World Example with Powertools for AWS Lambda` template:

```
Choose an AWS Quick Start application template
1 - Hello World Example
2 - Data processing
3 - Hello World Example with Powertools for AWS Lambda
4 - Multi-step workflow
5 - Scheduled task
6 - Standalone function
7 - Serverless API

Template: 3
```

4. Follow the rest of the prompts and give your project a name.

Voilà! You now have a SAM application pre-configured with Powertools!

## Examples

We have provided a few examples that show you how to use each of the core Powertools for AWS Lambda (.NET) features:

- [Tracing](https://github.com/aws-powertools/powertools-lambda-dotnet/tree/main/examples/Tracing)
- [Logging](https://github.com/aws-powertools/powertools-lambda-dotnet/tree/main/examples/Logging/)
- [Metrics](https://github.com/aws-powertools/powertools-lambda-dotnet/tree/main/examples/Metrics/)
- [Serverless API](https://github.com/aws-powertools/powertools-lambda-dotnet/tree/main/examples/ServerlessApi/)
- [Parameters](https://github.com/aws-powertools/powertools-lambda-dotnet/tree/main/examples/Parameters/)
- [Idempotency](https://github.com/aws-powertools/powertools-lambda-dotnet/tree/main/examples/Idempotency/)
- [Batch Processing](https://github.com/aws-powertools/powertools-lambda-dotnet/tree/main/examples/BatchProcessing/)

## Connect

- **Powertools for AWS Lambda (.NET) on Discord**: `#dotnet` - **[Invite link](https://discord.gg/B8zZKbbyET)**
- **Email**: aws-powertools-maintainers@amazon.com

## Support Powertools for AWS Lambda (.NET)

There are many ways you can help us gain future investments to improve everyone's experience:

- **Become a public reference**

  Add your company name and logo on our [landing page](https://powertools.aws.dev).

  [GitHub Issue template](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/new?assignees=&labels=customer-reference&template=support_powertools.yml&title=%5BSupport+Lambda+Powertools%5D%3A+%3Cyour+organization+name%3E)

- **Share your work**

  Blog posts, videos, and sample projects about Powertools for AWS Lambda.

  [GitHub Issue template](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/new?assignees=&labels=community-content&template=share_your_work.yml&title=%5BI+Made+This%5D%3A+%3CTITLE%3E)

- **Join the community**

  Connect, ask questions, and share what features you use.

  [Discord invite](https://discord.gg/B8zZKbbyET)

### Becoming a reference customer

Knowing which companies are using this library is important to help prioritize the project internally. The following companies, among others, use Powertools:

- [**Caylent**](https://caylent.com/)
- [**Pushpay**](https://pushpay.com/)

## Tenets

These are our core principles to guide our decision making.

- **AWS Lambda only**. We optimize for AWS Lambda function environments and supported runtimes only. Utilities might work with web frameworks and non-Lambda environments, though they are not officially supported.
- **Eases the adoption of best practices**. The main priority of the utilities is to facilitate best practices adoption, as defined in the AWS Well-Architected Serverless Lens; all other functionality is optional.
- **Keep it lean**. Additional dependencies are carefully considered for security and ease of maintenance, and must not negatively impact startup time.
- **We strive for backwards compatibility**. New features and changes should keep backwards compatibility. If a breaking change cannot be avoided, the deprecation and migration process should be clearly defined.
- **We work backwards from the community**. We aim to strike a balance of what would work best for 80% of customers. Emerging practices are considered and discussed via Requests for Comment (RFCs).
- **Idiomatic**. Utilities follow programming language idioms and language-specific best practices.

# Changelog

All notable changes to this project will be documented in this file. See [Conventional Commits](https://conventionalcommits.org) for commit guidelines.

## [1.40](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.30...1.40) - 2025-04-08

## Bug Fixes

- **build:** update ProjectReference condition to always include AWS.Lambda.Powertools.Common project
- **tests:** update AWS_EXECUTION_ENV version in assertions to 1.0.0

## Code Refactoring

- enhance log buffer management to discard oversized entries and improve entry tracking
- update logger factory and builder to support log output configuration
- update parameter names and improve documentation in logging configuration classes
- improve logging buffer management and configuration handling
- replace SystemWrapper with ConsoleWrapper in tests and update logging methods. revert systemwrapper, revert lambda.core to 2.5.0
- replace SystemWrapper with ConsoleWrapper in tests and update logging methods. revert systemwrapper, revert lambda.core to 2.5.0
- enhance logger configuration and output handling. Fix tests
- update log buffering options and improve serializer handling
- clean up whitespace and improve logger configuration handling
- change Logger class to static and enhance logging capabilities
- **logging:** enhance IsEnabled method for improved log level handling

## Features

- **console:** enhance ConsoleWrapper for test mode and output management
- **lifecycle:** add LambdaLifecycleTracker to manage cold start state and initialization type
- **logger:** enhance random number generation and improve regex match timeout
- **logging:** introduce custom logger output and enhance configuration options
- **logging:** add GetLogOutput method and CompositeJsonTypeInfoResolver for enhanced logging capabilities
- **workflows:** update .NET version setup to support multiple versions and improve package handling
- **workflows:** add examples tests and publish packages workflow; remove redundant test step

## Maintenance

- update Microsoft.Extensions.DependencyInjection to version 8.0.1
- **deps:** bump actions/setup-node from 4.2.0 to 4.3.0
- **deps:** bump actions/setup-dotnet from 4.3.0 to 4.3.1
- **deps:** update AWS Lambda Powertools packages to latest versions

## Pull Requests

- Merge pull request [#844](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/844) from hjgraca/fix/revert-common-setup
- Merge pull request [#843](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/843) from hjgraca/fix/batch-example-nuget-update
- Merge pull request [#842](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/842) from hjgraca/fix/update-example-nuget
- Merge pull request [#841](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/841) from hjgraca/fix/execution-env-ignore-version
- Merge pull request [#840](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/840) from hjgraca/fix/execution-env-version
- Merge pull request [#832](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/832) from hjgraca/feature/logger-ilogger-instance
- Merge pull request [#821](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/821) from aws-powertools/dependabot/github_actions/actions/setup-node-4.3.0
- Merge pull request [#820](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/820) from aws-powertools/dependabot/github_actions/actions/setup-dotnet-4.3.1
- Merge pull request [#835](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/835) from hjgraca/fix/override-lambda-console
- Merge pull request [#834](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/834) from hjgraca/feature/coldstart-provisioned-concurrency
- Merge pull request [#814](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/814) from hjgraca/chore/update-examples-130
- Merge pull request [#813](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/813) from hjgraca/chore/update-examples-130

## [1.30](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.20...1.30) - 2025-03-07

## Bug Fixes

- **build:** simplify dependency installation step in CI configuration
- **build:** pass target framework properties during restore, build, and test steps
- **build:** update test commands and project configurations for .NET frameworks
- **build:** add SkipInvalidProjects property to build properties for .NET frameworks
- **build:** add /tl option to dotnet build command in build.yml
- **build:** update .NET setup step to use matrix variable for versioning
- **ci:** Permissions ([#782](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/782))
- **ci:** Permissions and dependencies
- **ci:** add write for issues
- **ci:** Add permissions to read issues and pull requests
- **ci:** label PRs
- **ci:** Workflow permissions ([#774](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/774))
- **ci:** Indentation issue
- **metrics:** add null checks and unit tests for MetricsAspect and MetricsAttribute
- **metrics:** rename variable for default dimensions in cold start handling
- **metrics:** ensure thread safety by locking metrics during cold start flag reset
- **tests:** correct command in e2e-tests.yml and remove unnecessary assertions in FunctionTests.cs
- **tests:** conditionally include project reference for net8.0 framework

## Code Refactoring

- **metrics:** simplify MetricsTests by removing unused variables and improving syntax
- **metrics:** standardize parameter names for clarity in metric methods
- **metrics:** standardize parameter names for metric methods to improve clarity

## Documentation

- **metrics:** document breaking changes in metrics output format and default dimensions

## Features

- **build:** increase verbosity for test and example runs in CI pipeline
- **build:** enhance CI configuration with multi-framework support for .NET 6.0 and 8.0
- **ci:** Permissions updates
- **metrics:** enhance cold start handling with default dimensions and add corresponding tests
- **metrics:** enhance WithFunctionName method to handle null or empty values and add corresponding unit tests
- **metrics:** update metrics to version 2.0.0, enhance cold start tracking, and improve documentation
- **metrics:** update default dimensions handling and increase maximum dimensions limit
- **metrics:** add Metrics.AspNetCore version to version.json
- **metrics:** add ColdStartTracker for tracking cold starts in ASP.NET Core applications
- **metrics:** enhance default dimensions handling and refactor metrics initialization. Adding default dimensions to cold start metrics
- **metrics:** implement IConsoleWrapper for abstracting console operations and enhance cold start metric capturing
- **metrics:** add unit tests for Metrics constructor and validation methods
- **metrics:** always set namespace and service, update tests for service handling
- **metrics:** add HandlerEmpty method and test for empty metrics exception handling
- **metrics:** add HandlerRaiseOnEmptyMetrics method and corresponding test for empty metrics exception
- **metrics:** enhance documentation for Cold Start Function Name dimension and update test classes
- **metrics:** add support for disabling metrics via environment variable
- **metrics:** add function name support for metrics dimensions
- **metrics:** add support for default dimensions in metrics handling
- **metrics:** introduce MetricsOptions for configurable metrics setup and refactor initialization logic
- **metrics:** add ASP.NET Core metrics package with cold start tracking and middleware support for aspnetcore. Docs
- **metrics:** enhance MetricsBuilder with detailed configuration options and improve documentation
- **metrics:** add MetricsBuilder for fluent configuration of metrics options and enhance default dimensions handling
- **metrics:** update TargetFramework to net8.0 and adjust MaxDimensions limit
- **tests:** add unit tests for ConsoleWrapper and Metrics middleware extensions
- **version:** update Metrics version to 2.0.0 in version.json

## Maintenance

- Add openssf scorecard badge to readme ([#790](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/790))
- **deps:** bump jinja2 from 3.1.5 to 3.1.6
- **deps:** bump jinja2 from 3.1.5 to 3.1.6 in /docs
- **deps:** bump squidfunk/mkdocs-material in /docs
- **deps:** bump codecov/codecov-action from 5.3.1 to 5.4.0
- **deps:** bump github/codeql-action from 3.28.9 to 3.28.10
- **deps:** bump ossf/scorecard-action from 2.4.0 to 2.4.1
- **deps:** bump actions/upload-artifact from 4.6.0 to 4.6.1
- **deps:** bump squidfunk/mkdocs-material in /docs
- **deps:** bump zgosalvez/github-actions-ensure-sha-pinned-actions
- **deps:** bump squidfunk/mkdocs-material in /docs

## Pull Requests

- Merge pull request [#811](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/811) from aws-powertools/chore/update-version
- Merge pull request [#810](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/810) from aws-powertools/fix-release-drafter
- Merge pull request [#807](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/807) from hjgraca/fix/metrics-namespace-service-not-present
- Merge pull request [#805](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/805) from aws-powertools/dependabot/pip/jinja2-3.1.6
- Merge pull request [#804](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/804) from aws-powertools/dependabot/pip/docs/jinja2-3.1.6
- Merge pull request [#802](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/802) from hjgraca/fix/metrics-e2e-tests
- Merge pull request [#801](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/801) from aws-powertools/dependabot/docker/docs/squidfunk/mkdocs-material-047452c6641137c9caa3647d050ddb7fa67b59ed48cc67ec3a4995f3d360ab32
- Merge pull request [#800](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/800) from hjgraca/fix/low-hanging-fruit-metrics-v2
- Merge pull request [#799](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/799) from aws-powertools/maintenance/workflow-branch-develop
- Merge pull request [#797](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/797) from aws-powertools/fix-version-comma
- Merge pull request [#793](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/793) from aws-powertools/dependabot/github_actions/codecov/codecov-action-5.4.0
- Merge pull request [#791](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/791) from gregsinclair42/CheckForValidLambdaContext
- Merge pull request [#786](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/786) from hjgraca/feature/metrics-disabled
- Merge pull request [#785](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/785) from hjgraca/feature/metrics-function-name
- Merge pull request [#780](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/780) from hjgraca/feature/metrics-single-default-dimensions
- Merge pull request [#775](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/775) from hjgraca/feature/metrics-aspnetcore
- Merge pull request [#771](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/771) from hjgraca/feature/metrics-default-dimensions-coldstart
- Merge pull request [#789](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/789) from aws-powertools/permissions
- Merge pull request [#788](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/788) from aws-powertools/pr_merge
- Merge pull request [#787](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/787) from aws-powertools/indentation
- Merge pull request [#767](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/767) from aws-powertools/maintenance/sitemap
- Merge pull request [#778](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/778) from aws-powertools/dependabot/github_actions/github/codeql-action-3.28.10
- Merge pull request [#777](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/777) from aws-powertools/dependabot/github_actions/ossf/scorecard-action-2.4.1
- Merge pull request [#776](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/776) from aws-powertools/dependabot/github_actions/actions/upload-artifact-4.6.1
- Merge pull request [#770](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/770) from aws-powertools/dependabot/docker/docs/squidfunk/mkdocs-material-26153027ff0b192d3dbea828f2fe2dd1bf6ff753c58dd542b3ddfe866b08bf60
- Merge pull request [#666](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/666) from hjgraca/fix(metrics)-dimessions-with-missing-array
- Merge pull request [#768](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/768) from aws-powertools/dependabot/github_actions/zgosalvez/github-actions-ensure-sha-pinned-actions-3.0.22
- Merge pull request [#764](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/764) from aws-powertools/dependabot/docker/docs/squidfunk/mkdocs-material-f5bcec4e71c138bcb89c0dccb633c830f54a0218e1aefedaade952b61b908d00

## [1.20](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.19...1.20) - 2025-02-11

## Features

- **idempotency:** add support for custom key prefixes in IdempotencyHandler and related tests
- **tests:** add unit tests for IdempotencySerializer and update JSON options handling

## Maintenance

- add openssf scorecard workflow
- **deps:** bump squidfunk/mkdocs-material in /docs
- **deps:** bump squidfunk/mkdocs-material in /docs
- **deps:** bump actions/upload-artifact from 4.5.0 to 4.6.0
- **deps:** bump github/codeql-action from 3.28.8 to 3.28.9
- **deps:** bump zgosalvez/github-actions-ensure-sha-pinned-actions
- **deps:** bump aws-actions/configure-aws-credentials
- **deps:** bump squidfunk/mkdocs-material in /docs
- **deps:** bump github/codeql-action from 3.27.9 to 3.28.9
- **deps:** bump github/codeql-action from 3.28.6 to 3.28.8
- **deps:** bump actions/setup-dotnet from 4.2.0 to 4.3.0
- **deps:** bump github/codeql-action from 3.28.5 to 3.28.6
- **deps:** bump actions/setup-python from 5.3.0 to 5.4.0
- **deps:** bump aws-actions/configure-aws-credentials
- **deps:** bump pygments from 2.13.0 to 2.15.0

## Pull Requests

- Merge pull request [#755](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/755) from aws-powertools/dependabot/github_actions/aws-actions/configure-aws-credentials-4.1.0
- Merge pull request [#754](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/754) from aws-powertools/dependabot/github_actions/actions/upload-artifact-4.6.0
- Merge pull request [#753](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/753) from aws-powertools/dependabot/github_actions/github/codeql-action-3.28.9
- Merge pull request [#757](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/757) from hjgraca/docs/roadmap-2025-update
- Merge pull request [#758](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/758) from aws-powertools/docs/idempotency-prefix
- Merge pull request [#743](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/743) from aws-powertools/release(1.20)-update-versions
- Merge pull request [#355](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/355) from aws-powertools/dependabot/pip/pygments-2.15.0
- Merge pull request [#751](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/751) from aws-powertools/dependabot/github_actions/github/codeql-action-3.28.9
- Merge pull request [#750](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/750) from aws-powertools/dependabot/github_actions/zgosalvez/github-actions-ensure-sha-pinned-actions-3.0.21
- Merge pull request [#748](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/748) from aws-powertools/dependabot/docker/docs/squidfunk/mkdocs-material-c62453b1ba229982c6325a71165c1a3007c11bd3dd470e7a1446c5783bd145b4
- Merge pull request [#745](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/745) from hjgraca/feature/idempotency-key-prefix
- Merge pull request [#747](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/747) from aws-powertools/mkdocs/privacy-plugin
- Merge pull request [#653](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/653) from hjgraca/aot(idempotency|jmespath)-aot-support
- Merge pull request [#744](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/744) from aws-powertools/dependabot/docker/docs/squidfunk/mkdocs-material-7e841df1cfb6c8c4ff0968f2cfe55127fb1a2f5614e1c9bc23cbc11fe4c96644
- Merge pull request [#738](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/738) from hjgraca/feat(e2e)-idempotency-e2e-tests
- Merge pull request [#741](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/741) from hjgraca/fix(tracing)-invalid-sement-name
- Merge pull request [#739](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/739) from aws-powertools/dependabot/docker/docs/squidfunk/mkdocs-material-471695f3e611d9858788ac04e4daa9af961ccab73f1c0f545e90f8cc5d4268b8
- Merge pull request [#736](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/736) from aws-powertools/dependabot/github_actions/actions/setup-dotnet-4.3.0
- Merge pull request [#737](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/737) from aws-powertools/dependabot/github_actions/github/codeql-action-3.28.8
- Merge pull request [#734](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/734) from aws-powertools/fix-apidocs-build
- Merge pull request [#727](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/727) from aws-powertools/dependabot/github_actions/github/codeql-action-3.28.6
- Merge pull request [#725](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/725) from aws-powertools/dependabot/github_actions/aws-actions/configure-aws-credentials-4.0.3
- Merge pull request [#726](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/726) from aws-powertools/dependabot/github_actions/actions/setup-python-5.4.0
- Merge pull request [#731](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/731) from aws-powertools/patch-do-not-pack-tests

## [1.19](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.18...1.19) - 2025-01-28

## Maintenance

- **deps:** bump codecov/codecov-action from 5.3.0 to 5.3.1
- **deps:** bump github/codeql-action from 3.28.4 to 3.28.5
- **deps:** bump actions/upload-artifact from 4.5.0 to 4.6.0
- **deps:** bump actions/checkout from 4.1.7 to 4.2.2
- **deps:** bump zgosalvez/github-actions-ensure-sha-pinned-actions
- **deps:** bump release-drafter/release-drafter from 5.21.1 to 6.1.0
- **deps:** bump codecov/codecov-action from 4.5.0 to 5.3.0
- **deps:** bump actions/github-script from 6 to 7
- **deps:** bump github/codeql-action from 2.1.18 to 3.28.4
- **deps:** bump actions/upload-artifact from 3 to 4
- **deps:** bump aws-actions/configure-aws-credentials
- **deps:** bump actions/setup-dotnet from 3.0.3 to 4.2.0

## Pull Requests

- Merge pull request [#728](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/728) from aws-powertools/hjgraca-docs-service
- Merge pull request [#724](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/724) from aws-powertools/release(1.19)-update-versions
- Merge pull request [#704](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/704) from hjgraca/fix(logging)-service-name-override
- Merge pull request [#722](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/722) from aws-powertools/dependabot/github_actions/codecov/codecov-action-5.3.1
- Merge pull request [#721](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/721) from aws-powertools/dependabot/github_actions/github/codeql-action-3.28.5
- Merge pull request [#714](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/714) from aws-powertools/dependabot/github_actions/codecov/codecov-action-5.3.0
- Merge pull request [#715](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/715) from aws-powertools/dependabot/github_actions/release-drafter/release-drafter-6.1.0
- Merge pull request [#716](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/716) from aws-powertools/dependabot/github_actions/zgosalvez/github-actions-ensure-sha-pinned-actions-3.0.20
- Merge pull request [#717](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/717) from aws-powertools/dependabot/github_actions/actions/checkout-4.2.2
- Merge pull request [#720](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/720) from aws-powertools/chore/e2e-libraries-path
- Merge pull request [#718](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/718) from aws-powertools/dependabot/github_actions/actions/upload-artifact-4.6.0
- Merge pull request [#713](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/713) from aws-powertools/chore(e2e)-concurrency
- Merge pull request [#707](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/707) from aws-powertools/dependabot/github_actions/actions/setup-dotnet-4.2.0
- Merge pull request [#708](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/708) from aws-powertools/dependabot/github_actions/aws-actions/configure-aws-credentials-4.0.2
- Merge pull request [#711](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/711) from aws-powertools/dependabot/github_actions/actions/github-script-7
- Merge pull request [#710](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/710) from aws-powertools/dependabot/github_actions/github/codeql-action-3.28.4
- Merge pull request [#709](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/709) from aws-powertools/dependabot/github_actions/actions/upload-artifact-4
- Merge pull request [#706](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/706) from aws-powertools/ci/dependabot
- Merge pull request [#700](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/700) from hjgraca/hjgraca-e2e-aot
- Merge pull request [#679](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/679) from hjgraca/dep(examples)-update-examples-dep
- Merge pull request [#682](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/682) from aws-powertools/dependabot/pip/jinja2-3.1.5
- Merge pull request [#699](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/699) from hjgraca/aot-e2e-tests
- Merge pull request [#698](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/698) from ankitdhaka07/issue-697

## [1.18](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.17...1.18) - 2025-01-14

## Pull Requests

- Merge pull request [#695](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/695) from aws-powertools/update-versio-release118
- Merge pull request [#692](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/692) from hjgraca/feature/e2etests
- Merge pull request [#691](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/691) from aws-powertools/hjgraca-patch-e2e-6
- Merge pull request [#690](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/690) from aws-powertools/hjgraca-patch-e2e-5
- Merge pull request [#689](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/689) from aws-powertools/hjgraca-patch-e2e-4
- Merge pull request [#688](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/688) from aws-powertools/hjgraca-patch-e2e-3
- Merge pull request [#687](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/687) from aws-powertools/hjgraca-patch-e2e-2
- Merge pull request [#686](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/686) from aws-powertools/hjgraca-patch-e2e
- Merge pull request [#685](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/685) from hjgraca/feat-e2e
- Merge pull request [#684](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/684) from hjgraca/feature/e2etests
- Merge pull request [#681](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/681) from hjgraca/feat(logging)-inner-exception

## [1.17](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.16...1.17) - 2024-11-12

## Pull Requests

- Merge pull request [#675](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/675) from hjgraca/fix(tracing)-aot-void-task-and-serialization

## [1.16](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.15...1.16) - 2024-10-22

## Pull Requests

- Merge pull request [#672](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/672) from aws-powertools/hjgraca-logging-release115
- Merge pull request [#670](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/670) from hjgraca/fix(logging)-enum-serialization
- Merge pull request [#664](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/664) from hjgraca/fix(metrics)-multiple-dimension-array

## [1.15](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.14...1.15) - 2024-10-05

## Pull Requests

- Merge pull request [#660](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/660) from hjgraca/fix(tracing)-revert-imethodaspecthander-removal
- Merge pull request [#657](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/657) from hjgraca/fix(logging)-typeinforesolver-non-aot
- Merge pull request [#646](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/646) from lachriz-aws/feature/throw-on-full-batch-failure-option
- Merge pull request [#652](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/652) from hjgraca/chore(dependencies)-update-logging-examples

## [1.14](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.13...1.14) - 2024-09-24

## Pull Requests

- Merge pull request [#649](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/649) from hjgraca/(docs)-update-logging-aot
- Merge pull request [#628](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/628) from hjgraca/aot(logging)-support-logging
- Merge pull request [#645](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/645) from aws-powertools/chore(examples)Update-examples-release-1.13
- Merge pull request [#643](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/643) from hjgraca/fix(dependencies)-Fix-Common-dependency
- Merge pull request [#641](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/641) from hjgraca/fix(references)-build-targets-common

## [1.13](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.12...1.13) - 2024-08-29

## Maintenance

- **docs:** load self hosted mermaid.js
- **docs:** load self hosted mermaid.js
- **docs:** Caylent customer reference

## Pull Requests

- Merge pull request [#639](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/639) from
aws-powertools/fix(docs)-missing-closing-tag - Merge pull request [#638](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/638) from aws-powertools/release(1.13)-update-versions - Merge pull request [#622](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/622) from aws-powertools/fix-typo-tracing-docs - Merge pull request [#632](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/632) from hjgraca/fix(tracing)-batch-handler-result-null-reference - Merge pull request [#633](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/633) from hjgraca/publicref/pushpay - Merge pull request [#627](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/627) from hjgraca/fix-idempotency-jmespath-dependency - Merge pull request [#625](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/625) from hjgraca/docs(public_reference)-add-Caylent-as-a-public-reference - Merge pull request [#623](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/623) from hjgraca/chore-update-tracing-examples-150 ## [1.12](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.11.1...1.12) - 2024-07-24 ## Maintenance - **deps-dev:** bump zipp from 3.11.0 to 3.19.1 ## Pull Requests - Merge pull request [#607](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/607) from hjgraca/aot-tracing-support - Merge pull request [#610](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/610) from aws-powertools/dependabot/pip/zipp-3.19.1 - Merge pull request [#617](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/617) from hjgraca/example-update-release-1.11.1 ## [1.11.1](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.11...1.11.1) - 2024-07-12 ## Pull Requests - Merge pull request [#613](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/613) from hjgraca/fix-metrics-resolution-context ## 
[1.11](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.10.2...1.11) - 2024-07-09 ## [1.10.2](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.10.1...1.10.2) - 2024-07-09 ## Maintenance - **deps:** bump jinja2 from 3.1.3 to 3.1.4 ## Pull Requests - Merge pull request [#579](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/579) from aws-powertools/dependabot/pip/jinja2-3.1.4 - Merge pull request [#602](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/602) from hjgraca/aot-metrics-support - Merge pull request [#605](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/605) from aws-powertools/hjgraca-codecov - Merge pull request [#600](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/600) from aws-powertools/hjgraca-examples-1.10.1 ## [1.10.1](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.10.0...1.10.1) - 2024-05-22 ## Pull Requests - Merge pull request [#596](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/596) from aws-powertools/hjgraca-update-version-1.10.1 - Merge pull request [#594](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/594) from hjgraca/metrics-thread-safety-bug - Merge pull request [#589](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/589) from aws-powertools/hjgraca-idempotency-examples - Merge pull request [#590](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/590) from hjgraca/fix-jmespath-dep ## [1.10.0](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.9.2...1.10.0) - 2024-05-09 ## [1.9.2](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.9.1...1.9.2) - 2024-05-09 ## Documentation - add link to Powertools for AWS Lambda workshop ## Pull Requests - Merge pull request [#586](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/586) from aws-powertools/hjgraca-version-release-1-10 - Merge pull request 
[#578](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/578) from hjgraca/feature/jmespath-powertools - Merge pull request [#584](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/584) from aws-powertools/hjgraca-build-pipeline - Merge pull request [#581](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/581) from dreamorosi/docs/link_workshop ## [1.9.1](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.9.0...1.9.1) - 2024-03-21 ## Pull Requests - Merge pull request [#575](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/575) from aws-powertools/release-191 - Merge pull request [#572](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/572) from hjgraca/fix-tracing-duplicate-generic-method-decorator - Merge pull request [#569](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/569) from aws-powertools/hjgraca-update-docs-dotnet8 ## [1.9.0](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.8.5...1.9.0) - 2024-03-11 ## Pull Requests - Merge pull request [#565](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/565) from aws-powertools/update-nuget-examples - Merge pull request [#564](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/564) from amirkaws/update-nuget-versions-for-examples - Merge pull request [#563](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/563) from amirkaws/release-version-1.9.0 - Merge pull request [#561](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/561) from amirkaws/update-nuget-versions - Merge pull request [#555](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/555) from aws-powertools/hjgraca-update-examples-185 - Merge pull request [#559](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/559) from amirkaws/add-configuration-parameter-provider ## 
[1.8.5](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.8.4...1.8.5) - 2024-02-16 ## Documentation - updated we made this section with video series from Rahul and workshops ## Maintenance - **deps:** bump jinja2 from 3.1.2 to 3.1.3 - **deps:** bump gitpython from 3.1.37 to 3.1.41 ## Pull Requests - Merge pull request [#552](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/552) from aws-powertools/hjgraca-update-version-185 - Merge pull request [#538](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/538) from hjgraca/hendle-exception-logger - Merge pull request [#547](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/547) from aws-powertools/hjgraca-batch-docs - Merge pull request [#548](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/548) from H1Gdev/doc - Merge pull request [#542](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/542) from hjgraca/dotnet8-support - Merge pull request [#539](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/539) from aws-powertools/dependabot/pip/gitpython-3.1.41 - Merge pull request [#540](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/540) from aws-powertools/dependabot/pip/jinja2-3.1.3 - Merge pull request [#544](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/544) from aws-powertools/hjgraca-docs-auto-disable-tracing - Merge pull request [#536](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/536) from sliedig/develop ## [1.8.4](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.8.3...1.8.4) - 2023-12-12 ## Pull Requests - Merge pull request [#532](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/532) from aws-powertools/hjgraca-update-batch-ga - Merge pull request [#528](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/528) from aws-powertools/idempotency-183-examples ## 
[1.8.3](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.8.2...1.8.3) - 2023-11-21 ## Pull Requests - Merge pull request [#525](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/525) from aws-powertools/idempotency-ga - Merge pull request [#523](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/523) from hjgraca/update-examples-182 - Merge pull request [#513](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/513) from hjgraca/idempotency-method-e2e-test - Merge pull request [#521](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/521) from hjgraca/182-fix-examples-logging-batch ## [1.8.2](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.8.1...1.8.2) - 2023-11-16 ## Pull Requests - Merge pull request [#518](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/518) from aws-powertools/hjgraca-version-1.8.2 - Merge pull request [#516](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/516) from hjgraca/lambda-log-level - Merge pull request [#510](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/510) from aws-powertools/hjgraca-examples-1.8.1 ## [1.8.1](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.8.0...1.8.1) - 2023-10-30 ## Maintenance - **deps:** bump gitpython from 3.1.35 to 3.1.37 ## Pull Requests - Merge pull request [#507](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/507) from aws-powertools/hjgraca-release-1.8.1 - Merge pull request [#505](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/505) from hjgraca/fix-exception-addmetadata - Merge pull request [#499](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/499) from hjgraca/metrics-decorator-exception - Merge pull request [#503](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/503) from hjgraca/dateonly-converter - Merge pull request 
[#502](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/502) from aws-powertools/dependabot/pip/gitpython-3.1.37 - Merge pull request [#495](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/495) from hjgraca/update-projects-readme - Merge pull request [#493](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/493) from hjgraca/release1.8.0-example-updates - Merge pull request [#492](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/492) from aws-powertools/update-changelog-6248167844 ## [1.8.0](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.7.1...1.8.0) - 2023-09-20 ## Documentation - add kinesis and dynamodb ## Pull Requests - Merge pull request [#489](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/489) from amirkaws/release-version-1.8.0 - Merge pull request [#337](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/337) from lachriz-aws/feature/batch-processing ## [1.7.1](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.7.0...1.7.1) - 2023-09-19 ## Pull Requests - Merge pull request [#486](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/486) from amirkaws/update-examples-nuget-versions - Merge pull request [#484](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/484) from aws-powertools/hjgraca-release-1.7.1 - Merge pull request [#482](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/482) from aws-powertools/hjgraca-release-1.7.1 - Merge pull request [#480](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/480) from hjgraca/bug-revert-aspectinjector - Merge pull request [#479](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/479) from aws-powertools/hjgraca-update-examples-1.7.0 - Merge pull request [#477](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/477) from aws-powertools/hjgraca-delete-dependabot ## 
[1.7.0](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.6.0...1.7.0) - 2023-09-14 ## Maintenance - **deps:** bump gitpython from 3.1.30 to 3.1.35 ## Pull Requests - Merge pull request [#475](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/475) from aws-powertools/hjgraca-disable-dependabot - Merge pull request [#464](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/464) from aws-powertools/hjgraca-release-1.7.0 - Merge pull request [#451](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/451) from aws-powertools/dependabot/pip/gitpython-3.1.35 - Merge pull request [#340](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/340) from hjgraca/fix-common-dependency - Merge pull request [#437](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/437) from aws-powertools/update-changelog-6143683791 - Merge pull request [#435](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/435) from amirkaws/release-version-1.6.0-update-examples ## [1.6.0](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.5.0...1.6.0) - 2023-09-07 ## Pull Requests - Merge pull request [#433](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/433) from amirkaws/release-version-1.6.0-in-preview-fix - Merge pull request [#432](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/432) from amirkaws/release-version-1.6.0-document-fix - Merge pull request [#430](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/430) from amirkaws/release-version-1.6.0 - Merge pull request [#428](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/428) from amirkaws/automatic-xray-register - Merge pull request [#400](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/400) from aws-powertools/hjgraca-fix-idempotency-docs - Merge pull request [#391](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/391) from 
aws-powertools/hjgraca-patch-dependabot ## [1.5.0](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.4.2...1.5.0) - 2023-08-29 ## Pull Requests - Merge pull request [#397](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/397) from aws-powertools/hjgraca-version-1.5.0 - Merge pull request [#363](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/363) from hjgraca/idempotency-inprogressexpiration ## [1.4.2](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.4.1...1.4.2) - 2023-08-22 ## Maintenance - **deps:** bump AWSSDK.SecretsManager in /libraries - **deps:** bump xunit from 2.4.1 to 2.4.2 in /libraries - **deps:** bump Moq from 4.18.1 to 4.18.4 in /libraries - **deps:** bump AWSSDK.DynamoDBv2 in /libraries - **deps:** bump Testcontainers from 3.2.0 to 3.3.0 in /libraries - **deps:** bump AWSXRayRecorder.Core in /libraries ## Pull Requests - Merge pull request [#388](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/388) from amirkaws/update-samples-nuget-versions - Merge pull request [#383](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/383) from aws-powertools/hjgraca-patch-version-1.4.2 - Merge pull request [#381](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/381) from aws-powertools/hjgraca-patch-dependabot - Merge pull request [#375](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/375) from amirkaws/custom-log-formatter - Merge pull request [#357](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/357) from hjgraca/fix-capture-stacktrace - Merge pull request [#343](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/343) from aws-powertools/update-changelog-5462167625 - Merge pull request [#370](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/370) from amirkaws/replace-moq-with-nsubstitute - Merge pull request 
[#359](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/359) from aws-powertools/dependabot/nuget/libraries/develop/AWSSDK.SecretsManager-3.7.200.3 - Merge pull request [#342](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/342) from hjgraca/idempotency-simpler-example - Merge pull request [#347](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/347) from hjgraca/docs-clarify-xray-over-adot - Merge pull request [#338](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/338) from aws-powertools/hjgraca-patch-boring-cyborg - Merge pull request [#349](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/349) from hossambarakat/feature/idempotent-function - Merge pull request [#328](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/328) from aws-powertools/dependabot/nuget/libraries/develop/xunit-2.4.2 - Merge pull request [#327](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/327) from aws-powertools/dependabot/nuget/libraries/develop/AWSSDK.DynamoDBv2-3.7.104.1 - Merge pull request [#325](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/325) from aws-powertools/dependabot/nuget/libraries/develop/AWSXRayRecorder.Core-2.14.0 - Merge pull request [#326](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/326) from aws-powertools/dependabot/nuget/libraries/develop/Testcontainers-3.3.0 - Merge pull request [#329](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/329) from aws-powertools/dependabot/nuget/libraries/develop/Moq-4.18.4 ## [1.4.1](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.4.0...1.4.1) - 2023-06-29 ## Maintenance - remove GH pages ## Pull Requests - Merge pull request [#318](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/318) from aws-powertools/hjgraca-dependabot-location - Merge pull request 
[#324](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/324) from amirkaws/release-version-1.4.1 - Merge pull request [#320](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/320) from aws-powertools/hjgraca-idempotency-example-fix - Merge pull request [#319](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/319) from aws-powertools/hjgraca-idempotency-example-DynamoDBv2-version - Merge pull request [#312](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/312) from hjgraca/metrics-prevent-exceed-100-datapoint - Merge pull request [#317](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/317) from aws-powertools/hjgraca-add-dependabot - Merge pull request [#300](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/300) from hossambarakat/feature/idempotency-example - Merge pull request [#298](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/298) from amirkaws/parameters-example - Merge pull request [#315](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/315) from aws-powertools/url-updates - Merge pull request [#314](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/314) from aws-powertools/readme-updates - Merge pull request [#313](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/313) from hjgraca/metrics-addmetric-raceconditiom - Merge pull request [#287](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/287) from swimming-potato/develop - Merge pull request [#303](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/303) from aws-powertools/remove-gh-pages - Merge pull request [#308](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/308) from aws-powertools/update-changelog-5332675096 ## [1.4.0](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.3.0...1.4.0) - 2023-06-21 ## Bug Fixes - update team name - update references - updated codeowners - 
reapplying some lins that got screwed up in the merge ## Documentation - adding permission ## Features - **docs:** Start S3 Docs ## Maintenance - updated code owners - Change repo URL to the new location - rename project to Powertools for AWS Lambda (.NET) - **ci:** updated links to new repo - **ci:** removed unnecessary areas - **docs:** fix we made this link - **docs:** update docs homepage with additional features, fixed dot cli commands, new SAM cli instructions - **docs:** updated readme with idempotency package and examples for parameters and idempotency - **docs:** move idempotency ## Pull Requests - Merge pull request [#305](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/305) from aws-powertools/version-bump-1.4 - Merge pull request [#301](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/301) from sliedig/sliedig-docs - Merge pull request [#302](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/302) from aws-powertools/rename-part2 - Merge pull request [#291](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/291) from aws-powertools/doc-updates-roadmap - Merge pull request [#285](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/285) from aws-powertools/url-rename - Merge pull request [#293](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/293) from glynn1211/develop - Merge pull request [#163](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/163) from hossambarakat/feature/idempotency - Merge pull request [#282](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/282) from awslabs/rename - Merge pull request [#277](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/277) from awslabs/update-changelog-4981653012 - Merge pull request [#274](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/274) from awslabs/dependabot/pip/pymdown-extensions-10.0 - Merge pull request 
[#276](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/276) from awslabs/pymdown-extension-fix - Merge pull request [#278](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/278) from awslabs/s3-docs - Merge pull request [#273](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/273) from leandrodamascena/parameters/docs - Merge pull request [#271](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/271) from amirkaws/amirkaws-fix-parameters-nuget-icon ## [1.3.0](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.2.0...1.3.0) - 2023-05-12 ## Documentation - fixed formatting and updated content ## Features - add package readme ## Maintenance - **ci:** skip analytics on forks ## Pull Requests - Merge pull request [#268](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/268) from amirkaws/amirkaws-release-version-1.3.0 - Merge pull request [#167](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/167) from amirkaws/amirkaws-feature-parameters - Merge pull request [#1](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/1) from sliedig/amirkaws-feature-parameters - Merge pull request [#264](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/264) from awslabs/chore(ci)-skip-analytics-on-forks - Merge pull request [#262](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/262) from awslabs/chorebump-version-1.2.0-release ## [1.2.0](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/1.1.0...1.2.0) - 2023-05-05 ## Pull Requests - Merge pull request [#258](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/258) from amirkaws/amirkaws-release-version-1.2.0 - Merge pull request [#226](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/226) from hjgraca/feat_support_high_resolution_metrics ## [1.1.0](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/v1.0.1...1.1.0) 
- 2023-05-05 ## Maintenance - add Lambda Powertools for Python in issue templates - add workflow to dispatch analytics fetching - **ci:** add workflow to dispatch analytics fetching ## Pull Requests - Merge pull request [#255](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/255) from awslabs/fix-remove-real-env-tests - Merge pull request [#253](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/253) from amirkaws/amirkaws-release-v1.1.0 - Merge pull request [#246](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/246) from hjgraca/feat_set-execution-context - Merge pull request [#251](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/251) from leandrodamascena/issues-templates/python - Merge pull request [#241](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/241) from awslabs/update-changelog-4691350388 - Merge pull request [#237](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/237) from hjgraca/changelog-update-pipeline - Merge pull request [#235](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/235) from amirkaws/amirkaws-update-examples-nuget-references-release-v1.0.1 ## [v1.0.1](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/v1.0.0...v1.0.1) - 2023-04-06 ## Pull Requests - Merge pull request [#232](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/232) from amirkaws/amirkaws-release-v1.0.1 - Merge pull request [#227](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/227) from hjgraca/chore_fix_changelog_build - Merge pull request [#225](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/225) from srcsakthivel/develop - Merge pull request [#223](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/223) from hjgraca/fix_tracing_on_exception_thrown - Merge pull request [#218](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/218) from 
amirkaws/amirkaws-update-examples-nuget-references-release ## [v1.0.0](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/v0.0.2-preview...v1.0.0) - 2023-02-24 ## Bug Fixes - removing manual trigger on docs wf ## Documentation - **home:** update powertools definition ## Maintenance - **ci:** api docs build update ([#188](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/188)) - **ci:** changing trigger to run manually - **ci:** updated api docs implementation - **ci:** updated bug report template - **deps:** updates sample deps - **docs:** incorrect crefs ## Pull Requests - Merge pull request [#215](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/215) from amirkaws/amirkaws-release-v1.0.0 - Merge pull request [#208](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/208) from amirkaws/amirkaws-cold-start-capture-warning-bug-fix - Merge pull request [#210](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/210) from awslabs/powertools-definition-update - Merge pull request [#209](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/209) from amirkaws/amirkaws-metrics-timestamp-fix - Merge pull request [#199](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/199) from awslabs/sliedig-ci-reviewers - Merge pull request [#193](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/193) from hjgraca/maintenance-new-issue-template - Merge pull request [#202](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/202) from hjgraca/fix-test-json-escaping - Merge pull request [#195](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/195) from hjgraca/update-dotnet-sdk-6.0.405 - Merge pull request [#194](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/194) from hjgraca/patch-1 - Merge pull request [#192](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/192) from hjgraca/fix-incorrect-crefs - Merge 
pull request [#189](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/189) from sliedig/develop - Merge pull request [#185](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/185) from amirkaws/amirkaws-update-examples-nuget-references - Merge pull request [#183](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/183) from awslabs/develop - Merge pull request [#150](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/150) from awslabs/develop - Merge pull request [#140](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/140) from awslabs/develop ## [v0.0.2-preview](https://github.com/aws-powertools/powertools-lambda-dotnet/compare/v0.0.1-preview.1...v0.0.2-preview) - 2023-01-18 ## Bug Fixes - removed duplicate template issue - updated logger casing env vars in samples ## Documentation - typo in metrics README ## Features - **ci:** codeql static code analysis ([#148](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/148)) ## Maintenance - updated setup-dotnet[@v1](https://github.com/v1) to [@v3](https://github.com/v3) - updated packages ([#172](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/172)) - **ci:** bumped version - **ci:** minor updates, licensing - **deps:** bump gitpython from 3.1.29 to 3.1.30 - **docs:** updated documentation ([#175](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/175)) - **docs:** add discord invitation link ## Pull Requests - Merge pull request [#182](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/182) from sliedig/develop - Merge pull request [#181](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/181) from awslabs/dependabot/pip/gitpython-3.1.30 - Merge pull request [#179](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/179) from sliedig/sliedig-ci - Merge pull request [#174](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/174) from 
sliedig/sliedig-ci - Merge pull request [#173](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/173) from sliedig/sliedig-samples - Merge pull request [#157](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/157) from kenfdev/fix-readme-title-typo - Merge pull request [#170](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/170) from nCubed/develop - Merge pull request [#152](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/152) from amirkaws/amirkaws-custom-exception-json-converter - Merge pull request [#155](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/155) from sthuber90/add-discord-link-154 - Merge pull request [#147](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/147) from amirkaws/amirkaws-fix-doc-links ## v0.0.1-preview.1 - 2022-08-01 ## Bug Fixes - force directy rename - making test function compile. - updating issue templates with correct extention - updated auto assign - skip duplicate nuget packages publish - added missing runtimes - updated Logging template description - updated documentation and doc generation ([#96](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/96)) - removed optional doc file paths - explicitly adding doc files for build configurations - resolving dependecy alert CVE-2020-8116 - added missing codecov packages for test projects - fixed build - forcing rename - fixed powertolls spelling in docs - replaced PackageIconUrl which is being depreciated with PackageIcon - added missing include to pack README files - fixing node vulnerabilites for docs - resolving merge conflict - update packages to resolve vulnerabilities. Switched to yarn package manager - intermin fix to resolve vulnerability issues. 
- proj references - fixed spelling in libraries folder name - **ci:** lockdown gh-pages workflow to sha ## Documentation - fixed nav for roadmap - updated link to feature request; added roadmap - library readme updates and minor updates ([#117](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/117)) - spell check with US-English dictionary ([#115](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/115)) - docs review ([#112](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/112)) - Reviewing documentation ([#68](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/68)) - adding auto-generated API Reference to docs ([#87](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/87)) - alternative brew installation ([#86](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/86)) - homebrew installation ([#85](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/85)) - merging api generation tasks ([#84](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/84)) - fixing docfx path ([#83](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/83)) - fix docfx path ([#82](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/82)) - fix api docs generator installation ([#81](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/81)) - update github actions to publish api docs ([#80](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/80)) - API docs generation ([#79](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/79)) ## Features - added security.md - updated record_pr action - PR Labeler GitHub actions ([#119](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/119)) - Logger output case attributes docs and unit testing ([#100](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/100)) - add extra fields to the logger methods 
([#98](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/98)) - updated examples to include managed runtime configuration as well as docker. Made updates to Tracing implementation - added Tracing example - added init Metrics sample - added Logging example - added serialisation options to force dictionary keys to camel case - added build tools to generate nuget packages - updated project packaging properties - added package README files for core utilities - update make and doc dep to build docs - added docs template ## Maintenance - set global .net version - temporarily removing docs while content is being developed - added example project - refctored PowerTools to Powertools - added github files and templates - initial folder structure - interim resolution of docs package vulnerabilities - migrated AWS.Lambda.PowerTools to AWS.Lambda.PowerTools.Common namespace. fix: resolved incorrect namespace for Tracing fix: resolved dependencies in example project - refactored to new namespace - updated readme - updated github templates - added customer 404 page - removing unecessary buildtools - update pr template - updating type documentation and file headers ([#56](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/56)) - moved solution file into libraries. 
Need a separate solution for examples - added docs - added gitignore and updated licence - updating doc build workflow - updated issue templates - updated Label PR based on title action - deleted unnecessay publishing action - updated stale action - removed unnecessary pr labeling - updated PR labeling path - updated list of assignees - updated docker builds to use Amazon.Lambda.Tools ([#118](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/118)) - bumped .net version in global to 6.0.301 ([#120](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/120)) - bumped .net version in global to 6.0.301 - adding missed copyright info - added copyright to examples - removed SimpleLambda from examples - cleaned up logging and metrics functions - **ci:** add on_merge_pr workflow to notify releases - **ci:** lockdown untrusted workflows to sha - **ci:** add missing scripts - **ci:** updated bug report template ([#144](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/144)) - **ci:** add workflow to detect missing related issue - **ci:** enable concurrency group for docs workflow - **ci:** upudated wording in PR template checklist - **ci:** added codeowners - **ci:** added Maintainers doc - **ci:** updated PR workflows and scripts - **ci:** upgrade setup-python to v4 - **ci:** upgrade checkout action to v3 - **ci:** use untrusted workflows with sha - **ci:** add reusable export pr workflow dependency - **deps:** bump hosted-git-info from 2.8.8 to 2.8.9 in /docs - **deps:** bump object-path from 0.11.4 to 0.11.5 in /docs - **deps:** bump ssri from 6.0.1 to 6.0.2 in /docs - **deps:** bump elliptic from 6.5.3 to 6.5.4 in /docs - **deps:** bump socket.io from 2.3.0 to 2.4.1 in /docs - **deps:** bump ini from 1.3.5 to 1.3.8 in /docs - **deps:** bump ua-parser-js from 0.7.23 to 0.7.28 in /docs - **deps:** bump underscore from 1.12.0 to 1.13.1 in /docs - **deps:** bump url-parse from 1.4.7 to 1.5.1 in /docs - **deps:** bump prismjs from 
1.20.0 to 1.21.0 in /docs - **deps:** updates sample deps ([#142](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/142)) - **deps:** bump mkdocs from 1.2.2 to 1.2.3 ([#29](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/29)) - **governance:** render debug logs with csharp syntax - **governance:** typo in pending release label name ## Pull Requests - Merge pull request [#139](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/139) from awslabs/amirkaws-resolve-conflicts-2 - Merge pull request [#135](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/135) from awslabs/amirkaws-update-versions - Merge pull request [#134](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/134) from heitorlessa/chore/lockdown-gh-pages-workflow - Merge pull request [#132](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/132) from heitorlessa/chore/github-concurrency-docs - Merge pull request [#130](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/130) from heitorlessa/chore/enforce-github-actions-sha - Merge pull request [#128](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/128) from sliedig/sliedig-ci - Merge pull request [#127](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/127) from sliedig/sliedig-ci - Merge pull request [#126](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/126) from sliedig/develop - Merge pull request [#123](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/123) from sliedig/develop - Merge pull request [#1](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/1) from sliedig/sliedig/develop - Merge pull request [#121](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/121) from awslabs/amirkaws-update-versions - Merge pull request [#116](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/116) from 
awslabs/amirkaws/add-di-support-for-logging - Merge pull request [#113](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/113) from awslabs/amirkaws/update-doc-1 - Merge pull request [#111](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/111) from awslabs/amirkaws/add-env-vars-docs - Merge pull request [#102](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/102) from sliedig/sliedig/examples - Merge pull request [#103](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/103) from awslabs/amirkaws/fix-example-issues - Merge pull request [#97](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/97) from sliedig/sliedig/examples - Merge pull request [#95](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/95) from awslabs/pr/91 - Merge pull request [#90](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/90) from sliedig/develop - Merge pull request [#89](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/89) from awslabs/api-docs-template - Merge pull request [#74](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/74) from awslabs/amirkaws/disable-tracing-outside-lambda-env - Merge pull request [#66](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/66) from sliedig/develop - Merge pull request [#59](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/59) from sliedig/develop - Merge pull request [#58](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/58) from sliedig/sliedig/nuget - Merge pull request [#32](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/32) from awslabs/amirkaws/metrics-1 - Merge pull request [#31](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/31) from t1agob/develop - Merge pull request [#2](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/2) from awslabs/develop - Merge pull request 
[#25](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/25) from t1agob/develop - Merge pull request [#1](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/1) from t1agob/sourcegenerators - Merge pull request [#24](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/24) from sliedig/develop - Merge pull request [#23](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/23) from sliedig/develop - Merge pull request [#22](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/22) from sliedig/develop - Merge pull request [#21](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/21) from t1agob/develop - Merge pull request [#19](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/19) from awslabs/dependabot/npm_and_yarn/docs/url-parse-1.5.1 - Merge pull request [#18](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/18) from awslabs/dependabot/npm_and_yarn/docs/hosted-git-info-2.8.9 - Merge pull request [#17](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/17) from awslabs/dependabot/npm_and_yarn/docs/ua-parser-js-0.7.28 - Merge pull request [#16](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/16) from awslabs/dependabot/npm_and_yarn/docs/underscore-1.13.1 - Merge pull request [#15](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/15) from awslabs/dependabot/npm_and_yarn/docs/ssri-6.0.2 - Merge pull request [#14](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/14) from awslabs/dependabot/npm_and_yarn/docs/elliptic-6.5.4 - Merge pull request [#13](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/13) from sliedig/develop - Merge pull request [#12](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/12) from sliedig/develop - Merge pull request [#11](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/11) from 
awslabs/dependabot/npm_and_yarn/docs/socket.io-2.4.1 - Merge pull request [#10](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/10) from awslabs/dependabot/npm_and_yarn/docs/ini-1.3.8 - Merge pull request [#9](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/9) from awslabs/dependabot/npm_and_yarn/docs/object-path-0.11.5 - Merge pull request [#7](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/7) from t1agob/develop - Merge pull request [#8](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/8) from sliedig/develop - Merge pull request [#5](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/5) from sliedig/develop - Merge pull request [#4](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/4) from sliedig/develop - Merge pull request [#3](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/3) from sliedig/develop - Merge pull request [#2](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/2) from awslabs/dependabot/npm_and_yarn/docs/prismjs-1.21.0 - Merge pull request [#1](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/1) from sliedig/develop

## Overview

Our public roadmap outlines the high-level direction we are working towards. We update this document when our priorities change: security and stability are our top priority. For the most up-to-date information, see our [board of activities](https://github.com/orgs/aws-powertools/projects/6/views/14?query=is%3Aopen+sort%3Aupdated-desc).

### Key areas

Security and operational excellence take precedence above all else. This means bug fixing, stability, customer support, and internal compliance may delay one or more key areas below.
**Missing something or want us to prioritize an existing area?** You can help us prioritize by [upvoting existing feature requests](https://github.com/aws-powertools/powertools-lambda-dotnet/issues?q=is%3Aissue%20state%3Aopen%20label%3Afeature-request), leaving a comment on the use cases it could unblock for you, and by joining our discussions on Discord.

### Core Utilities (P0)

#### Logging V2

Modernizing our logging capabilities to align with .NET practices and improve developer experience.

- Logger buffer implementation
- New .NET-friendly API design with ILogger and LoggerFactory support
- Filtering and JMESPath expression support
- Message templates

#### Metrics V2

Updating the metrics implementation to support the latest EMF specifications and improve performance.

- Update to latest EMF specifications
- Breaking changes implementation for multiple dimensions
- Add support for default dimensions on ColdStart metric
- API updates - missing functionality that is present in the Python implementation (e.g. flush_metrics)

### Security and Production Readiness (P1)

Ensuring enterprise-grade security and compatibility with the latest .NET developments.

- .NET 10 support from day one
- Deprecation path for .NET 6
- Scorecard implementation
- Security compliance checks on our pipeline
- All utilities with end-to-end tests in our pipeline

### Feature Parity and ASP.NET Support (P2)

#### Feature Parity

Implementing key features to achieve parity with other Powertools implementations.

- Data masking
- Feature Flags
- S3 Streaming support

#### ASP.NET Support

Adding first-class support for ASP.NET Core in Lambda with performance considerations.

- AspNetCoreServer.Hosting - [Tracking issue](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/360)
- Minimal APIs support
- ASP.NET Core integration
- Documentation for cold start impacts
- Clear guidance on Middleware vs. Decorators usage

#### Improve operational excellence

We continue to work on increasing operational excellence to remove as much undifferentiated heavy lifting for maintainers as possible, so that we can focus on delivering features that help you. This means improving our automation workflows, project management, and test coverage.

## Roadmap status definition

```
graph LR
    Ideas --> Backlog --> Work["Working on it"] --> Merged["Coming soon"] --> Shipped
```

*Visual representation*

Within our [public board](https://github.com/orgs/aws-powertools/projects/6/views/4?query=is%3Aopen+sort%3Aupdated-desc), you'll see the following values in the `Status` column:

- **Ideas**. Incoming and existing feature requests that are not being actively considered yet. These will be reviewed when bandwidth permits and based on demand.
- **Backlog**. Accepted feature requests or enhancements that we want to work on.
- **Working on it**. Features or enhancements we're currently either researching or implementing.
- **Coming soon**. Any feature, enhancement, or bug fix that has been merged and is coming in the next release.
- **Shipped**. Features or enhancements that are now available in the most recent release.
- **On hold**. Features or items that are currently blocked until further notice.
- **Pending review**. Features whose implementation is mostly complete but still needs review and some additional iterations.

> Tasks or issues with an empty `Status` will be categorized in upcoming review cycles.
## Process

```
graph LR
    PFR[Feature request] --> Triage{Need RFC?}
    Triage --> |Complex/major change or new utility?| RFC[Ask or write RFC] --> Approval{Approved?}
    Triage --> |Minor feature or enhancement?| NoRFC[No RFC required] --> Approval
    Approval --> |Yes| Backlog
    Approval --> |No | Reject["Inform next steps"]
    Backlog --> |Prioritized| Implementation
    Backlog --> |Defer| WelcomeContributions["help-wanted label"]
```

*Visual representation*

Our end-to-end mechanism follows four major steps:

- **Feature Request**. Ideas start with a [feature request](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/new?assignees=&labels=feature-request%2Ctriage&projects=&template=feature_request.yml&title=Feature+request%3A+TITLE) to outline their use case at a high level. For complex use cases, maintainers might ask for or write an RFC.
- Maintainers review requests based on [project tenets](../#tenets), customer reactions (👍), and use cases.
- **Request-for-comments (RFC)**. Design proposals use our [RFC issue template](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/new?assignees=&labels=RFC%2Ctriage&projects=&template=rfc.yml&title=RFC%3A+TITLE) to describe the proposed implementation, challenges, developer experience, dependencies, and alternative solutions.
- This helps refine the initial idea with community feedback before a decision is made.
- **Decision**. After carefully reviewing and discussing a proposal, maintainers make a final decision on whether to start implementation, defer, or reject it, and update everyone with the next steps.
- **Implementation**. For approved features, maintainers give priority to the original authors for implementation unless it is a sensitive task that is best handled by maintainers.

See the [Maintainers](https://github.com/aws-powertools/powertools-lambda-dotnet/blob/develop/MAINTAINERS.md) document to understand how we triage issues and pull requests, apply labels, and handle governance.
## Disclaimer

The Powertools for AWS Lambda team values feedback and guidance from its community of users, although final decisions on inclusion into the project will be made by AWS. We determine the high-level direction for our open roadmap based on customer feedback and popularity (👍🏽 and comments), security and operational impacts, and business value. Where features don't meet our goals and longer-term strategy, we will communicate that clearly and openly as quickly as possible with an explanation of why the decision was made.

## FAQs

**Q: Why did you build this?**

A: We know that our customers are making decisions and plans based on what we are developing, and we want to provide our customers with the insights they need to plan.

**Q: Why are there no dates on your roadmap?**

A: Because job zero is security and operational stability, we can't provide specific target dates for features. The roadmap is subject to change at any time, and roadmap issues in this repository do not guarantee that a feature will be launched as proposed.

**Q: How can I provide feedback or ask for more information?**

A: For existing features, you can comment directly on issues. For anything else, please open an issue.

# Core Utilities

The logging utility provides a Lambda-optimized logger with output structured as JSON.
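To give a quick flavor of the developer experience, here is a minimal, illustrative sketch (the handler shape, the `payment` service name, and the exact set of output keys shown in the comment are assumptions; real output depends on your configuration):

```csharp
// Illustrative sketch only: assumes the AWS.Lambda.Powertools.Logging and
// Amazon.Lambda.Core packages are referenced by the project.
using Amazon.Lambda.Core;
using AWS.Lambda.Powertools.Logging;

public class Function
{
    // The Logging attribute enriches every log statement with Lambda context
    // details (cold start, function name, request ID, ...).
    [Logging(Service = "payment")]
    public void FunctionHandler(object input, ILambdaContext context)
    {
        // Emitted to CloudWatch as a single JSON object, roughly:
        // {"level":"Information","message":"Collecting payment","service":"payment","cold_start":true,...}
        Logger.LogInformation("Collecting payment");
    }
}
```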
## Key features

- Captures key fields from Lambda context and cold start, and structures logging output as JSON
- Logs the incoming Lambda event when instructed (disabled by default)
- Log sampling enables DEBUG log level for a percentage of requests (disabled by default)
- Appends additional keys to the structured log at any point in time
- Ahead-of-Time native compilation support ([AOT](https://docs.aws.amazon.com/lambda/latest/dg/dotnet-native-aot.html))
- Custom log formatter to override the default log structure
- Support for [AWS Lambda Advanced Logging Controls (ALC)](https://docs.aws.amazon.com/lambda/latest/dg/monitoring-cloudwatchlogs-advanced.html)
- Support for Microsoft.Extensions.Logging and the [ILogger](https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.logging.ilogger?view=dotnet-plat-ext-7.0) interface
- Support for the [ILoggerFactory](https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.logging.iloggerfactory?view=dotnet-plat-ext-7.0) interface
- Support for message templates `{}` and `{@}` for structured logging

## Breaking changes from v1 (dependency updates)

Info
Looking for v1-specific documentation? Go to [Logging v1](/lambda/dotnet/core/logging-v1).

| Change | Before (v1.x) | After (v2.0) | Migration Action |
| --- | --- | --- | --- |
| Amazon.Lambda.Core | 2.2.0 | 2.5.0 | dotnet add package Amazon.Lambda.Core |
| Amazon.Lambda.Serialization.SystemTextJson | 2.4.3 | 2.4.4 | dotnet add package Amazon.Lambda.Serialization.SystemTextJson |
| Microsoft.Extensions.DependencyInjection | 8.0.0 | 8.0.1 | dotnet add package Microsoft.Extensions.DependencyInjection |

#### Extra keys - Breaking change

In v1.x, extra keys were added to the log entry as a dictionary. In v2.x, extra keys are added to the log entry as a JSON object. There is no longer a method that accepts extra keys as the first argument.
```
public class User
{
    public string Name { get; set; }
    public int Age { get; set; }
}

Logger.LogInformation(user, "{Name} is {Age} years old", new object[] { user.Name, user.Age });

var scopeKeys = new
{
    PropOne = "Value 1",
    PropTwo = "Value 2"
};

Logger.LogInformation(scopeKeys, "message");
```

```
public class User
{
    public string Name { get; set; }
    public int Age { get; set; }

    public override string ToString()
    {
        return $"{Name} is {Age} years old";
    }
}

// It uses the ToString() method of the object to log the message;
// the extra keys are added because of the {@} in the message template
Logger.LogInformation("{@user}", user);

var scopeKeys = new
{
    PropOne = "Value 1",
    PropTwo = "Value 2"
};

// There is no longer a method that accepts extra keys as the first argument
Logger.LogInformation("{@keys}", scopeKeys);
```

This change was made to improve the performance of the logger and to make it easier to work with extra keys.

## Installation

Powertools for AWS Lambda (.NET) is available as NuGet packages. You can install the packages from the [NuGet Gallery](https://www.nuget.org/packages?q=AWS+Lambda+Powertools*) or from the Visual Studio editor by searching `AWS.Lambda.Powertools*` to see the various utilities available.
- [AWS.Lambda.Powertools.Logging](https://www.nuget.org/packages?q=AWS.Lambda.Powertools.Logging): `dotnet add package AWS.Lambda.Powertools.Logging`

## Getting started

Info AOT Support
If you are looking for AOT-specific configuration, navigate to the [AOT section](#aot-support).

Logging requires two settings:

| Setting | Description | Environment variable | Attribute parameter |
| --- | --- | --- | --- |
| **Service** | Sets **Service** key that will be present across all log statements | `POWERTOOLS_SERVICE_NAME` | `Service` |
| **Logging level** | Sets how verbose Logger should be (Information, by default) | `POWERTOOLS_LOG_LEVEL` | `LogLevel` |

### Full list of environment variables

| Environment variable | Description | Default |
| --- | --- | --- |
| **POWERTOOLS_SERVICE_NAME** | Sets service name used for tracing namespace, metrics dimension and structured logging | `"service_undefined"` |
| **POWERTOOLS_LOG_LEVEL** | Sets logging level | `Information` |
| **POWERTOOLS_LOGGER_CASE** | Override the default casing for log keys | `SnakeCase` |
| **POWERTOOLS_LOGGER_LOG_EVENT** | Logs incoming event | `false` |
| **POWERTOOLS_LOGGER_SAMPLE_RATE** | Debug log sampling | `0` |

### Setting up the logger

You can set up the logger in different ways. The most common way is to use the `Logging` attribute on your Lambda handler. You can also use the `ILogger` interface to log messages. This interface is part of Microsoft.Extensions.Logging.

```
/**
 * Handler for requests to Lambda function.
 */
public class Function
{
    [Logging(Service = "payment", LogLevel = LogLevel.Debug)]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        Logger.LogInformation("Collecting payment");
        ...
    }
}
```

```
/**
 * Handler for requests to Lambda function.
 */
public class Function
{
    private readonly ILogger _logger;

    public Function()
    {
        _logger = LoggerFactory.Create(builder =>
        {
            builder.AddPowertoolsLogger(config =>
            {
                config.Service = "TestService";
                config.LoggerOutputCase = LoggerOutputCase.PascalCase;
            });
        }).CreatePowertoolsLogger();
    }

    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        _logger.LogInformation("Collecting payment");
        ...
    }
}
```

```
/**
 * Handler for requests to Lambda function.
 */
public class Function
{
    private readonly ILogger _logger;

    public Function(ILogger logger)
    {
        _logger = logger ?? new PowertoolsLoggerBuilder()
            .WithService("TestService")
            .WithOutputCase(LoggerOutputCase.PascalCase)
            .Build();
    }

    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        _logger.LogInformation("Collecting payment");
        ...
    }
}
```

### Customizing the logger

You can customize the logger by setting the following properties in the `Logger.Configure` method:

| Property | Description |
| --- | --- |
| `Service` | The name of the service. This is used to identify the service in the logs. |
| `MinimumLogLevel` | The minimum log level to log. This is used to filter out logs below the specified level. |
| `LogFormatter` | The log formatter to use. This is used to customize the structure of the log entries. |
| `JsonOptions` | The JSON options to use. This is used to customize the serialization of logs. |
| `LogBuffering` | The log buffering options. This is used to configure log buffering. |
| `TimestampFormat` | The format of the timestamp. This is used to customize the format of the timestamp in the logs. |
| `SamplingRate` | Sets a percentage (0.0 to 1.0) of logs that will be dynamically elevated to DEBUG level. |
| `LoggerOutputCase` | The output casing of the logger. This is used to customize the casing of the log entries. |
| `LogOutput` | Specifies the console output wrapper used for writing logs. This property allows redirecting log output for testing or specialized handling scenarios. |

### Configuration

You can configure the Powertools Logger using the static `Logger` class. This class is a singleton and is created when the Lambda function is initialized. You can configure the logger using the `Logger.Configure` method.

```
public class Function
{
    public Function()
    {
        Logger.Configure(options =>
        {
            options.MinimumLogLevel = LogLevel.Information;
            options.LoggerOutputCase = LoggerOutputCase.CamelCase;
        });
    }

    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        Logger.LogInformation("Collecting payment");
        ...
    }
}
```

### ILogger

You can also use the `ILogger` interface to log messages. This interface is part of Microsoft.Extensions.Logging. With this approach you get more flexibility and testability using dependency injection (DI).

```
public class Function
{
    private readonly ILogger _logger;

    public Function(ILogger logger)
    {
        _logger = logger ?? LoggerFactory.Create(builder =>
        {
            builder.AddPowertoolsLogger(config =>
            {
                config.Service = "TestService";
                config.LoggerOutputCase = LoggerOutputCase.PascalCase;
            });
        }).CreatePowertoolsLogger();
    }

    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        _logger.LogInformation("Collecting payment");
        ...
    }
}
```

## Standard structured keys

Your logs will always include the following keys in your structured logging:

| Key | Type | Example | Description |
| --- | --- | --- | --- |
| **Level** | string | "Information" | Logging level |
| **Message** | string | "Collecting payment" | Log statement value. Unserializable JSON values will be cast to string |
| **Timestamp** | string | "2020-05-24 18:17:33,774" | Timestamp of actual log statement |
| **Service** | string | "payment" | Service name defined. "service_undefined" will be used if unknown |
| **ColdStart** | bool | true | ColdStart value. |
| | **FunctionName** | string | "example-powertools-HelloWorldFunction-1P1Z6B39FLU73" | | | **FunctionMemorySize** | string | "128" | | | **FunctionArn** | string | "arn:aws:lambda:eu-west-1:012345678910:function:example-powertools-HelloWorldFunction-1P1Z6B39FLU73" | | | **FunctionRequestId** | string | "899856cb-83d1-40d7-8611-9e78f15f32f4" | AWS Request ID from lambda context | | **FunctionVersion** | string | "12" | | | **XRayTraceId** | string | "1-5759e988-bd862e3fe1be46a994272793" | X-Ray Trace ID when Lambda function has enabled Tracing | | **Name** | string | "Powertools for AWS Lambda (.NET) Logger" | Logger name | | **SamplingRate** | int | 0.1 | Debug logging sampling rate in percentage e.g. 10% in this case | | **Customer Keys** | | | | Warning If you emit a log message with a key that matches one of `level`, `message`, `name`, `service`, or `timestamp`, the Logger will ignore the key. ## Message templates You can use message templates to extract properties from your objects and log them as structured data. Info Override the `ToString()` method of your object to return a meaningful string representation of the object. This is especially important when using `{}` to log the object as a string. ``` public class User { public string FirstName { get; set; } public string LastName { get; set; } public int Age { get; set; } public override string ToString() { return $"{LastName}, {FirstName} ({Age})"; } } ``` If you want to log the object as a JSON object, use `{@}`. This will serialize the object and log it as a JSON object. ``` public class Function { [Logging(Service = "user-service", LogLevel = LogLevel.Information)] public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { var user = new User { FirstName = "John", LastName = "Doe", Age = 42 }; logger.LogInformation("User object: {@user}", user); ... 
} } ``` ``` { "level": "Information", "message": "User object: Doe, John (42)", "timestamp": "2025-04-07 09:06:30.708", "service": "user-service", "coldStart": true, "name": "AWS.Lambda.Powertools.Logging.Logger", "user": { "firstName": "John", "lastName": "Doe", "age": 42 }, ... } ``` If you want to log the object as a string, use `{}`. This will call the `ToString()` method of the object and log it as a string. ``` public class Function { [Logging(Service = "user", LogLevel = LogLevel.Information)] public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { var user = new User { FirstName = "John", LastName = "Doe", Age = 42 }; Logger.LogInformation("User data: {user}", user); // Also works with numbers, dates, etc. Logger.LogInformation("Price: {price:0.00}", 123.4567); // will respect decimal places Logger.LogInformation("Percentage: {percent:0.0%}", 0.1234); ... } } ``` ``` { "level": "Information", "message": "User data: Doe, John (42)", "timestamp": "2025-04-07 09:06:30.689", "service": "user", "cold_start": true, "name": "AWS.Lambda.Powertools.Logging.Logger", "user": "Doe, John (42)" } { "level": "Information", "message": "Price: 123.46", "timestamp": "2025-04-07 09:23:01.235", "service": "user", "cold_start": true, "name": "AWS.Lambda.Powertools.Logging.Logger", "price": 123.46 } { "level": "Information", "message": "Percentage: 12.3%", "timestamp": "2025-04-07 09:23:01.260", "service": "user", "cold_start": true, "name": "AWS.Lambda.Powertools.Logging.Logger", "percent": "12.3%" } ``` ## Logging incoming event When debugging in non-production environments, you can instruct Logger to log the incoming event with the `LogEvent` parameter or via the `POWERTOOLS_LOGGER_LOG_EVENT` environment variable. Warning Log event is disabled by default to prevent sensitive info being logged. ``` /** * Handler for requests to Lambda function. 
*/ public class Function { [Logging(LogEvent = true)] public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { ... } } ``` ## Setting a Correlation ID You can set a Correlation ID using the `CorrelationIdPath` parameter by passing a [JSON Pointer expression](https://datatracker.ietf.org/doc/html/draft-ietf-appsawg-json-pointer-03). Attention The JSON Pointer expression is case sensitive. In the example below, `/headers/my_request_id_header` would work but `/Headers/my_request_id_header` would not find the element. ``` /** * Handler for requests to Lambda function. */ public class Function { [Logging(CorrelationIdPath = "/headers/my_request_id_header")] public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { ... } } ``` ``` { "headers": { "my_request_id_header": "correlation_id_value" } } ``` ``` { "level": "Information", "message": "Collecting payment", "timestamp": "2021-12-13T20:32:22.5774262Z", "service": "lambda-example", "cold_start": true, "function_name": "test", "function_memory_size": 128, "function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test", "function_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72", "function_version": "$LATEST", "xray_trace_id": "1-61b7add4-66532bb81441e1b060389429", "name": "AWS.Lambda.Powertools.Logging.Logger", "sampling_rate": 0.7, "correlation_id": "correlation_id_value" } ``` We provide [built-in JSON Pointer expressions](https://datatracker.ietf.org/doc/html/draft-ietf-appsawg-json-pointer-03) for known event sources, where either a request ID or X-Ray Trace ID are present. ``` /** * Handler for requests to Lambda function. */ public class Function { [Logging(CorrelationIdPath = CorrelationIdPaths.ApiGatewayRest)] public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { ... 
} } ``` ``` { "RequestContext": { "RequestId": "correlation_id_value" } } ``` ``` { "level": "Information", "message": "Collecting payment", "timestamp": "2021-12-13T20:32:22.5774262Z", "service": "lambda-example", "cold_start": true, "function_name": "test", "function_memory_size": 128, "function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test", "function_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72", "function_version": "$LATEST", "xray_trace_id": "1-61b7add4-66532bb81441e1b060389429", "name": "AWS.Lambda.Powertools.Logging.Logger", "sampling_rate": 0.7, "correlation_id": "correlation_id_value" } ``` ## Appending additional keys Custom keys are persisted across warm invocations Always set additional keys as part of your handler to ensure they have the latest value, or explicitly clear them with [`ClearState=true`](#clearing-all-state). You can append your own keys to your existing logs via `AppendKeys`. Typically this value would be passed into the function via the event. Appended keys are added to all subsequent log entries in the current execution from the point the logger method is called. To ensure the key is added to all log entries, call this method as early as possible in the Lambda handler. ``` /** * Handler for requests to Lambda function. */ public class Function { [Logging(LogEvent = true)] public async Task FunctionHandler(APIGatewayProxyRequest apigwProxyEvent, ILambdaContext context) { var requestContextRequestId = apigwProxyEvent.RequestContext.RequestId; var lookupInfo = new Dictionary<string, object>() { {"LookupInfo", new Dictionary<string, object>{{ "LookupId", requestContextRequestId }}} }; // Appended keys are added to all subsequent log entries in the current execution. // Call this method as early as possible in the Lambda handler. // Typically this value would be passed into the function via the event. 
// Set ClearState = true on the [Logging] attribute to remove the keys across invocations. Logger.AppendKeys(lookupInfo); Logger.LogInformation("Getting ip address from external service"); } ``` ``` { "level": "Information", "message": "Getting ip address from external service", "timestamp": "2022-03-14T07:25:20.9418065Z", "service": "powertools-dotnet-logging-sample", "cold_start": false, "function_name": "PowertoolsLoggingSample-HelloWorldFunction-hm1r10VT3lCy", "function_memory_size": 256, "function_arn": "arn:aws:lambda:function:PowertoolsLoggingSample-HelloWorldFunction-hm1r10VT3lCy", "function_request_id": "96570b2c-f00e-471c-94ad-b25e95ba7347", "function_version": "$LATEST", "xray_trace_id": "1-622eede0-647960c56a91f3b071a9fff1", "name": "AWS.Lambda.Powertools.Logging.Logger", "lookup_info": { "lookup_id": "4c50eace-8b1e-43d3-92ba-0efacf5d1625" } } ``` ### Removing additional keys You can remove any additional key from the log entry using `Logger.RemoveKeys()`. ``` /** * Handler for requests to Lambda function. */ public class Function { [Logging(LogEvent = true)] public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { ... Logger.AppendKey("test", "willBeLogged"); ... var customKeys = new Dictionary<string, string> { {"test1", "value1"}, {"test2", "value2"} }; Logger.AppendKeys(customKeys); ... Logger.RemoveKeys("test"); Logger.RemoveKeys("test1", "test2"); ... } } ``` ## Extra Keys Extra keys allow you to append additional keys to a single log entry. Unlike `AppendKeys`, extra keys only apply to the current log entry. The extra keys argument is available on all log level methods, as implemented in the standard logging library - e.g. `Logger.LogInformation`, `Logger.LogWarning`. It accepts any dictionary, and all entries will be added as part of the root structure of the log for that log statement. Info Any key added using extra keys will not be persisted for subsequent messages. ``` /** * Handler for requests to Lambda function. 
*/ public class Function { [Logging(LogEvent = true)] public async Task FunctionHandler(APIGatewayProxyRequest apigwProxyEvent, ILambdaContext context) { var requestContextRequestId = apigwProxyEvent.RequestContext.RequestId; var lookupId = new Dictionary<string, object>() { { "LookupId", requestContextRequestId } }; // Extra keys are added to this log entry only and are not persisted // for subsequent log messages. Logger.LogInformation(lookupId, "Getting ip address from external service"); } ``` ### Clearing all state Logger is commonly initialized in the global scope. Due to [Lambda Execution Context reuse](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-context.html), this means that custom keys can be persisted across invocations. If you want all custom keys to be deleted, you can set `ClearState = true` on the `[Logging]` attribute. ``` /** * Handler for requests to Lambda function. */ public class Function { [Logging(ClearState = true)] public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { ... if (apigProxyEvent.Headers.ContainsKey("SomeSpecialHeader")) { Logger.AppendKey("SpecialKey", "value"); } Logger.LogInformation("Collecting payment"); ... 
} } ``` ``` { "level": "Information", "message": "Collecting payment", "timestamp": "2021-12-13T20:32:22.5774262Z", "service": "payment", "cold_start": true, "function_name": "test", "function_memory_size": 128, "function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test", "function_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72", "special_key": "value" } ``` ``` { "level": "Information", "message": "Collecting payment", "timestamp": "2021-12-13T20:32:22.5774262Z", "service": "payment", "cold_start": true, "function_name": "test", "function_memory_size": 128, "function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test", "function_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72" } ``` ## Sampling debug logs You can dynamically set a percentage of your logs to **DEBUG** level via the `POWERTOOLS_LOGGER_SAMPLE_RATE` environment variable or via the `SamplingRate` parameter on the attribute. Info The environment variable takes precedence over the sampling rate configured on the attribute, provided its value is in the valid range (0.0 to 1.0). ``` /** * Handler for requests to Lambda function. */ public class Function { [Logging(SamplingRate = 0.5)] public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { ... } } ``` ``` Resources: HelloWorldFunction: Type: AWS::Serverless::Function Properties: ... Environment: Variables: POWERTOOLS_LOGGER_SAMPLE_RATE: 0.5 ``` ## Configure Log Output Casing By default, Powertools for AWS Lambda (.NET) outputs logging keys using **snake case** (e.g. *"function_memory_size": 128*). This allows developers using different Powertools for AWS Lambda runtimes to search logs across services written in languages such as Python or TypeScript. If you want to override the default behavior you can either set the desired casing through attributes, as described in the example below, or by setting the `POWERTOOLS_LOGGER_CASE` environment variable on your AWS Lambda function. 
Allowed values are: `CamelCase`, `PascalCase` and `SnakeCase`. ``` /** * Handler for requests to Lambda function. */ public class Function { [Logging(LoggerOutputCase = LoggerOutputCase.CamelCase)] public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { ... } } ``` Below are some output examples for different casing. ``` { "level": "Information", "message": "Collecting payment", "timestamp": "2021-12-13T20:32:22.5774262Z", "service": "payment", "coldStart": true, "functionName": "test", "functionMemorySize": 128, "functionArn": "arn:aws:lambda:eu-west-1:12345678910:function:test", "functionRequestId": "52fdfc07-2182-154f-163f-5f0f9a621d72" } ``` ``` { "Level": "Information", "Message": "Collecting payment", "Timestamp": "2021-12-13T20:32:22.5774262Z", "Service": "payment", "ColdStart": true, "FunctionName": "test", "FunctionMemorySize": 128, "FunctionArn": "arn:aws:lambda:eu-west-1:12345678910:function:test", "FunctionRequestId": "52fdfc07-2182-154f-163f-5f0f9a621d72" } ``` ``` { "level": "Information", "message": "Collecting payment", "timestamp": "2021-12-13T20:32:22.5774262Z", "service": "payment", "cold_start": true, "function_name": "test", "function_memory_size": 128, "function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test", "function_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72" } ``` ## Advanced ### Log Levels The default log level is `Information` and can be set using the `MinimumLogLevel` property option or by using the `POWERTOOLS_LOG_LEVEL` environment variable. We support the following log levels: | Level | Numeric value | Lambda Level | | --- | --- | --- | | `Trace` | 0 | `trace` | | `Debug` | 1 | `debug` | | `Information` | 2 | `info` | | `Warning` | 3 | `warn` | | `Error` | 4 | `error` | | `Critical` | 5 | `fatal` | | `None` | 6 | | ### Using AWS Lambda Advanced Logging Controls (ALC) When is it useful? 
When you want to set a logging policy to drop informational or verbose logs for one or all AWS Lambda functions, regardless of runtime and logger used. With [AWS Lambda Advanced Logging Controls (ALC)](https://docs.aws.amazon.com/lambda/latest/dg/monitoring-cloudwatchlogs.html#monitoring-cloudwatchlogs-advanced), you can enforce a minimum log level that Lambda will accept from your application code. When enabled, you should keep `Logger` and ALC log level in sync to avoid data loss. When using AWS Lambda Advanced Logging Controls (ALC) - When Powertools Logger output is set to `PascalCase`, the **`Level`** property name is replaced by **`LogLevel`**. - ALC takes precedence over **`POWERTOOLS_LOG_LEVEL`** and over the log level set in code using **`[Logging(LogLevel = ...)]`** Here's a sequence diagram demonstrating how ALC drops both `Information` and `Debug` logs emitted from `Logger`, when the ALC log level is stricter than `Logger`'s. ``` sequenceDiagram title Lambda ALC allows WARN logs only participant Lambda service participant Lambda function participant Application Logger Note over Lambda service: AWS_LAMBDA_LOG_LEVEL="WARN" Note over Application Logger: POWERTOOLS_LOG_LEVEL="DEBUG" Lambda service->>Lambda function: Invoke (event) Lambda function->>Lambda function: Calls handler Lambda function->>Application Logger: Logger.Warning("Something happened") Lambda function-->>Application Logger: Logger.Debug("Something happened") Lambda function-->>Application Logger: Logger.Information("Something happened") Lambda service->>Lambda service: DROP INFO and DEBUG logs Lambda service->>CloudWatch Logs: Ingest error logs ``` **Priority of log level settings in Powertools for AWS Lambda** We prioritise log level settings in this order: 1. AWS_LAMBDA_LOG_LEVEL environment variable 1. Setting the log level in code using `[Logging(LogLevel = ...)]` 1. 
POWERTOOLS_LOG_LEVEL environment variable If you set the `Logger` level lower than ALC, we will emit a warning informing you that your messages will be discarded by Lambda. > **NOTE** With ALC enabled, we are unable to increase the minimum log level below the `AWS_LAMBDA_LOG_LEVEL` environment variable value, see the [AWS Lambda service documentation](https://docs.aws.amazon.com/lambda/latest/dg/monitoring-cloudwatchlogs.html#monitoring-cloudwatchlogs-log-level) for more details. ### Using JsonSerializerOptions Powertools supports customizing the serialization and deserialization of Lambda JSON events and your own types using `JsonSerializerOptions`. You can do this by creating a custom `JsonSerializerOptions` and passing it to the `JsonOptions` property of the Powertools Logger. `JsonOptions` supports the `TypeInfoResolver` and `DictionaryKeyPolicy` options. These two options are the most common ones used to customize the serialization of Powertools Logger. - `TypeInfoResolver`: This option allows you to specify a custom `JsonSerializerContext` that contains the types you want to serialize and deserialize. This is especially useful when using AOT compilation, as it allows you to specify the types that should be included in the generated assembly. - `DictionaryKeyPolicy`: This option allows you to specify a custom naming policy for the keys in the JSON output. This is useful when you want to change the casing of the key names or use a different naming convention. Info If you want to preserve the original casing of the property names (keys), you can set the `DictionaryKeyPolicy` to `null`. 
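To make the effect of `DictionaryKeyPolicy` concrete, here is a minimal, self-contained `System.Text.Json` sketch, independent of Powertools. The dictionary contents are made up for illustration; the point is that the policy rewrites dictionary keys on output, while a `null` policy leaves them untouched:

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

class DictionaryKeyPolicyDemo
{
    static void Main()
    {
        // Hypothetical log-entry-like payload built as a dictionary
        var entry = new Dictionary<string, object>
        {
            ["ColdStart"] = true,
            ["FunctionName"] = "test"
        };

        // CamelCase policy rewrites dictionary keys on serialization
        var camel = JsonSerializer.Serialize(entry,
            new JsonSerializerOptions { DictionaryKeyPolicy = JsonNamingPolicy.CamelCase });
        Console.WriteLine(camel); // keys become "coldStart", "functionName"

        // A null (default) policy preserves the keys exactly as written
        var preserved = JsonSerializer.Serialize(entry, new JsonSerializerOptions());
        Console.WriteLine(preserved); // keys stay "ColdStart", "FunctionName"
    }
}
```

Note that `DictionaryKeyPolicy` only affects dictionary keys; property names of regular objects are governed by `PropertyNamingPolicy` instead.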
``` builder.Logging.AddPowertoolsLogger(options => { options.JsonOptions = new JsonSerializerOptions { DictionaryKeyPolicy = JsonNamingPolicy.CamelCase, // Override output casing TypeInfoResolver = MyCustomJsonSerializerContext.Default // Your custom JsonSerializerContext }; }); ``` ### Custom Log formatter (Bring Your Own Formatter) You can customize the structure (keys and values) of your log entries by implementing a custom log formatter and overriding the default log formatter via the `LogFormatter` property in the `Configure` options. You can implement a custom log formatter by implementing the `ILogFormatter` interface and its `object FormatLogEntry(LogEntry logEntry)` method. ``` /** * Handler for requests to Lambda function. */ public class Function { /// <summary> /// Function constructor /// </summary> public Function() { Logger.Configure(options => { options.LogFormatter = new CustomLogFormatter(); }); } [Logging(CorrelationIdPath = "/headers/my_request_id_header", SamplingRate = 0.7)] public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { ... 
} } ``` ``` public class CustomLogFormatter : ILogFormatter { public object FormatLogEntry(LogEntry logEntry) { return new { Message = logEntry.Message, Service = logEntry.Service, CorrelationIds = new { AwsRequestId = logEntry.LambdaContext?.AwsRequestId, XRayTraceId = logEntry.XRayTraceId, CorrelationId = logEntry.CorrelationId }, LambdaFunction = new { Name = logEntry.LambdaContext?.FunctionName, Arn = logEntry.LambdaContext?.InvokedFunctionArn, MemoryLimitInMB = logEntry.LambdaContext?.MemoryLimitInMB, Version = logEntry.LambdaContext?.FunctionVersion, ColdStart = logEntry.ColdStart, }, Level = logEntry.Level.ToString(), Timestamp = logEntry.Timestamp.ToString("o"), Logger = new { Name = logEntry.Name, SampleRate = logEntry.SamplingRate }, }; } } ``` ``` { "Message": "Test Message", "Service": "lambda-example", "CorrelationIds": { "AwsRequestId": "52fdfc07-2182-154f-163f-5f0f9a621d72", "XRayTraceId": "1-61b7add4-66532bb81441e1b060389429", "CorrelationId": "correlation_id_value" }, "LambdaFunction": { "Name": "test", "Arn": "arn:aws:lambda:eu-west-1:12345678910:function:test", "MemoryLimitInMB": 128, "Version": "$LATEST", "ColdStart": true }, "Level": "Information", "Timestamp": "2021-12-13T20:32:22.5774262Z", "Logger": { "Name": "AWS.Lambda.Powertools.Logging.Logger", "SampleRate": 0.7 } } ``` ### Buffering logs Log buffering enables you to buffer logs for a specific request or invocation. Enable log buffering by passing `LogBufferingOptions` when configuring a Logger instance. You can buffer logs at the `Warning`, `Information`, `Debug` or `Trace` level, and flush them automatically on error or manually as needed. This is useful when you want to reduce the number of log messages emitted while still having detailed logs when needed, such as when troubleshooting issues. 
``` public class Function { public Function() { Logger.Configure(logger => { logger.Service = "MyServiceName"; logger.LogBuffering = new LogBufferingOptions { BufferAtLogLevel = LogLevel.Debug, MaxBytes = 20480, // Default is 20KB (20480 bytes) FlushOnErrorLog = true // default true }; }); Logger.LogDebug("This is a debug message"); // This is NOT buffered } [Logging] public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { Logger.LogDebug("This is a debug message"); // This is buffered Logger.LogInformation("This is an info message"); // your business logic here Logger.LogError("This is an error message"); // This also flushes the buffer } } ``` #### Configuring the buffer When configuring the buffer, you can set the following options to fine-tune how logs are captured, stored, and emitted. You can configure the following options on `LogBufferingOptions`: | Parameter | Description | Configuration | Default | | --- | --- | --- | --- | | `MaxBytes` | Maximum size of the log buffer in bytes | `number` | `20480` | | `BufferAtLogLevel` | Minimum log level to buffer | `Trace`, `Debug`, `Information`, `Warning` | `Debug` | | `FlushOnErrorLog` | Automatically flush buffer when logging an error | `True`, `False` | `True` | ``` public class Function { public Function() { Logger.Configure(logger => { logger.Service = "MyServiceName"; logger.LogBuffering = new LogBufferingOptions { BufferAtLogLevel = LogLevel.Warning }; }); } [Logging] public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { // All logs below are buffered Logger.LogDebug("This is a debug message"); Logger.LogInformation("This is an info message"); Logger.LogWarning("This is a warn message"); Logger.ClearBuffer(); // This will clear the buffer without emitting the logs } } ``` 1. 
Setting `BufferAtLogLevel = LogLevel.Warning` configures log buffering for `Warning` and all lower severity levels like `Information`, `Debug`, and `Trace`. 1. Calling `Logger.ClearBuffer()` will clear the buffer without emitting the logs. ``` public class Function { public Function() { Logger.Configure(logger => { logger.Service = "MyServiceName"; logger.LogBuffering = new LogBufferingOptions { FlushOnErrorLog = false }; }); } [Logging] public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { Logger.LogDebug("This is a debug message"); // this is buffered try { throw new Exception(); } catch (Exception e) { Logger.LogError(e.Message); // this does NOT flush the buffer } Logger.LogDebug("Debug!!"); // this is buffered try { throw new Exception(); } catch (Exception e) { Logger.LogError(e.Message); // this does NOT flush the buffer Logger.FlushBuffer(); // Manually flush } } } ``` 1. Disabling `FlushOnErrorLog` will not flush the buffer when logging an error. This is useful when you want to control when the buffer is flushed by calling the `Logger.FlushBuffer()` method. #### Flushing on errors When using the `Logger` decorator, you can configure the logger to automatically flush the buffer when an error occurs. This is done by setting the `FlushBufferOnUncaughtError` option to `true` in the decorator. 
``` public class Function { public Function() { Logger.Configure(logger => { logger.Service = "MyServiceName"; logger.LogBuffering = new LogBufferingOptions { BufferAtLogLevel = LogLevel.Debug }; }); } [Logging(FlushBufferOnUncaughtError = true)] public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { Logger.LogDebug("This is a debug message"); throw new Exception(); // This causes the buffer to be flushed } } ``` #### Buffering workflows ##### Manual flush ``` sequenceDiagram participant Client participant Lambda participant Logger participant CloudWatch Client->>Lambda: Invoke Lambda Lambda->>Logger: Initialize with DEBUG level buffering Logger-->>Lambda: Logger buffer ready Lambda->>Logger: Logger.LogDebug("First debug log") Logger-->>Logger: Buffer first debug log Lambda->>Logger: Logger.LogInformation("Info log") Logger->>CloudWatch: Directly log info message Lambda->>Logger: Logger.LogDebug("Second debug log") Logger-->>Logger: Buffer second debug log Lambda->>Logger: Logger.FlushBuffer() Logger->>CloudWatch: Emit buffered logs to stdout Lambda->>Client: Return execution result ``` *Flushing buffer manually* ##### Flushing when logging an error ``` sequenceDiagram participant Client participant Lambda participant Logger participant CloudWatch Client->>Lambda: Invoke Lambda Lambda->>Logger: Initialize with DEBUG level buffering Logger-->>Lambda: Logger buffer ready Lambda->>Logger: Logger.LogDebug("First log") Logger-->>Logger: Buffer first debug log Lambda->>Logger: Logger.LogDebug("Second log") Logger-->>Logger: Buffer second debug log Lambda->>Logger: Logger.LogDebug("Third log") Logger-->>Logger: Buffer third debug log Lambda->>Lambda: Exception occurs Lambda->>Logger: Logger.LogError("Error details") Logger->>CloudWatch: Emit buffered debug logs Logger->>CloudWatch: Emit error log Lambda->>Client: Raise exception ``` *Flushing buffer when an error happens* ##### Flushing on error This works only when using the 
`Logger` decorator. You can configure the logger to automatically flush the buffer when an error occurs by setting the `FlushBufferOnUncaughtError` option to `true` in the decorator. ``` sequenceDiagram participant Client participant Lambda participant Logger participant CloudWatch Client->>Lambda: Invoke Lambda Lambda->>Logger: Using decorator Logger-->>Lambda: Logger context injected Lambda->>Logger: Logger.LogDebug("First log") Logger-->>Logger: Buffer first debug log Lambda->>Logger: Logger.LogDebug("Second log") Logger-->>Logger: Buffer second debug log Lambda->>Lambda: Uncaught Exception Lambda->>CloudWatch: Automatically emit buffered debug logs Lambda->>Client: Raise uncaught exception ``` *Flushing buffer when an uncaught exception happens* #### Buffering FAQs 1. **Does the buffer persist across Lambda invocations?** No, each Lambda invocation has its own buffer. The buffer is initialized when the Lambda function is invoked and is cleared after the function execution completes or when flushed manually. 1. **Are my logs buffered during cold starts?** No, we never buffer logs during cold starts. This is because we want to ensure that logs emitted during this phase are always available for debugging and monitoring purposes. The buffer is only used during the execution of the Lambda function. 1. **How can I prevent log buffering from consuming excessive memory?** You can limit the size of the buffer by setting the `MaxBytes` option in the `LogBufferingOptions` constructor parameter. This will ensure that the buffer does not grow indefinitely and consume excessive memory. 1. **What happens if the log buffer reaches its maximum size?** Older logs are removed from the buffer to make room for new logs. This means that if the buffer is full, you may lose some logs if they are not flushed before the buffer reaches its maximum size. When this happens, we emit a warning when flushing the buffer to indicate that some logs have been dropped. 1. 
**How is the log size of a log line calculated?** The log size is calculated based on the size of the serialized log line in bytes. This includes the size of the log message, the size of any additional keys, and the size of the timestamp. 1. **What timestamp is used when I flush the logs?** The timestamp preserves the original time when the log record was created. If you create a log record at 11:00:10 and flush it at 11:00:25, the log line will retain its original timestamp of 11:00:10. 1. **What happens if I try to add a log line that is bigger than max buffer size?** The log will be emitted directly to standard output and not buffered. When this happens, we emit a warning to indicate that the log line was too big to be buffered. 1. **What happens if Lambda times out without flushing the buffer?** Logs that are still in the buffer will be lost. If you are using the log buffer to log asynchronously, you should ensure that the buffer is flushed before the Lambda function times out. You can do this by calling the `Logger.FlushBuffer()` method at the end of your Lambda function. ### Timestamp formatting You can customize the timestamp format by setting the `TimestampFormat` property in the `Logger.Configure` method. The default format is `o`, which is the ISO 8601 format. You can use any valid [DateTime format string](https://docs.microsoft.com/en-us/dotnet/standard/base-types/custom-date-and-time-format-strings) to customize the timestamp format. For example, to use the `yyyy-MM-dd HH:mm:ss` format, you can do the following: ``` Logger.Configure(logger => { logger.TimestampFormat = "yyyy-MM-dd HH:mm:ss"; }); ``` This will output the timestamp in the following format: ``` { "level": "Information", "message": "Test Message", "timestamp": "2021-12-13 20:32:22", "service": "lambda-example", ... 
} ``` ## AOT Support Info If you want to use the `LogEvent` or `Custom Log Formatter` features, or serialize your own types when logging events, you need to either pass a `JsonSerializerContext` or make changes in your Lambda `Main` method. Info Starting from version 1.6.0, it is required to update the Amazon.Lambda.Serialization.SystemTextJson NuGet package to version 2.4.3 in your csproj. ### Using JsonSerializerOptions To serialize your own types, you need to pass your `JsonSerializerContext` to the `TypeInfoResolver` in the `Logger.Configure` method. ``` Logger.Configure(logger => { logger.JsonOptions = new JsonSerializerOptions { TypeInfoResolver = YourJsonSerializerContext.Default }; }); ``` ### Using PowertoolsSourceGeneratorSerializer Replace `SourceGeneratorLambdaJsonSerializer` with `PowertoolsSourceGeneratorSerializer`. This change enables Powertools to construct an instance of `JsonSerializerOptions` used to customize the serialization and deserialization of Lambda JSON events and your own types. ``` Func<APIGatewayHttpApiV2ProxyRequest, ILambdaContext, Task<APIGatewayHttpApiV2ProxyResponse>> handler = FunctionHandler; await LambdaBootstrapBuilder.Create(handler, new SourceGeneratorLambdaJsonSerializer<MyCustomJsonSerializerContext>()) .Build() .RunAsync(); ``` ``` Func<APIGatewayHttpApiV2ProxyRequest, ILambdaContext, Task<APIGatewayHttpApiV2ProxyResponse>> handler = FunctionHandler; await LambdaBootstrapBuilder.Create(handler, new PowertoolsSourceGeneratorSerializer<MyCustomJsonSerializerContext>()) .Build() .RunAsync(); ``` For example, when you have your own `Demo` type ``` public class Demo { public string Name { get; set; } public Headers Headers { get; set; } } ``` to serialize it in AOT you have to have your own `JsonSerializerContext` ``` [JsonSerializable(typeof(APIGatewayHttpApiV2ProxyRequest))] [JsonSerializable(typeof(APIGatewayHttpApiV2ProxyResponse))] [JsonSerializable(typeof(Demo))] public partial class MyCustomJsonSerializerContext : JsonSerializerContext { } ``` When you update your code to use `PowertoolsSourceGeneratorSerializer`, we combine your `JsonSerializerContext` with Powertools' `JsonSerializerContext`. 
This allows Powertools to serialize your types and Lambda events. ### Custom Log Formatter To use a custom log formatter with AOT, pass an instance of `ILogFormatter` to `PowertoolsSourceGeneratorSerializer` instead of using the static `Logger.UseFormatter` in the Function constructor as you do in non-AOT Lambdas. ``` Func<APIGatewayHttpApiV2ProxyRequest, ILambdaContext, Task<APIGatewayHttpApiV2ProxyResponse>> handler = FunctionHandler; await LambdaBootstrapBuilder.Create(handler, new PowertoolsSourceGeneratorSerializer<MyCustomJsonSerializerContext> ( new CustomLogFormatter() ) ) .Build() .RunAsync(); ``` ``` public class CustomLogFormatter : ILogFormatter { public object FormatLogEntry(LogEntry logEntry) { return new { Message = logEntry.Message, Service = logEntry.Service, CorrelationIds = new { AwsRequestId = logEntry.LambdaContext?.AwsRequestId, XRayTraceId = logEntry.XRayTraceId, CorrelationId = logEntry.CorrelationId }, LambdaFunction = new { Name = logEntry.LambdaContext?.FunctionName, Arn = logEntry.LambdaContext?.InvokedFunctionArn, MemoryLimitInMB = logEntry.LambdaContext?.MemoryLimitInMB, Version = logEntry.LambdaContext?.FunctionVersion, ColdStart = logEntry.ColdStart, }, Level = logEntry.Level.ToString(), Timestamp = logEntry.Timestamp.ToString("o"), Logger = new { Name = logEntry.Name, SampleRate = logEntry.SamplingRate }, }; } } ``` ### Anonymous types Note While we support anonymous type serialization by converting to a `Dictionary<string, object>`, this is **not** a best practice and is **not recommended** when using native AOT. We recommend using concrete classes and adding them to your `JsonSerializerContext`. ## Testing You can change where the `Logger` will output its logs by setting the `LogOutput` property. We also provide a helper class for tests, `TestLoggerOutput`, or you can provide your own implementation of `IConsoleWrapper`. 
```
Logger.Configure(options =>
{
    // Using TestLoggerOutput
    options.LogOutput = new TestLoggerOutput();
    // Custom console output for testing
    options.LogOutput = new TestConsoleWrapper();
});

// Example implementation for testing:
public class TestConsoleWrapper : IConsoleWrapper
{
    public List<string> CapturedOutput { get; } = new();

    public void WriteLine(string message)
    {
        CapturedOutput.Add(message);
    }
}
```

```
// Test example
[Fact]
public void When_Setting_Service_Should_Update_Key()
{
    // Arrange
    var consoleOut = new TestLoggerOutput();
    Logger.Configure(options =>
    {
        options.LogOutput = consoleOut;
    });

    // Act
    _testHandlers.HandlerService();

    // Assert
    var st = consoleOut.ToString();
    Assert.Contains("\"level\":\"Information\"", st);
    Assert.Contains("\"service\":\"test\"", st);
    Assert.Contains("\"name\":\"AWS.Lambda.Powertools.Logging.Logger\"", st);
    Assert.Contains("\"message\":\"test\"", st);
}
```

### ILogger

If you are using the `ILogger` interface, you can inject the logger through a dedicated constructor for your Lambda function, which lets you mock your `ILogger` instance in tests.

```
public class Function
{
    private readonly ILogger _logger;

    public Function()
    {
        _logger = LoggerFactory.Create(builder =>
        {
            builder.AddPowertoolsLogger(config =>
            {
                config.Service = "TestService";
                config.LoggerOutputCase = LoggerOutputCase.PascalCase;
            });
        }).CreatePowertoolsLogger();
    }

    // constructor used for tests - pass the mock ILogger
    public Function(ILogger logger)
    {
        _logger = logger ?? LoggerFactory.Create(builder =>
        {
            builder.AddPowertoolsLogger(config =>
            {
                config.Service = "TestService";
                config.LoggerOutputCase = LoggerOutputCase.PascalCase;
            });
        }).CreatePowertoolsLogger();
    }

    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        _logger.LogInformation("Collecting payment");
        ...
    }
}
```

Metrics creates custom metrics asynchronously by logging metrics to standard output following the [Amazon CloudWatch Embedded Metric Format (EMF)](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format.html). These metrics can be visualized through the [Amazon CloudWatch Console](https://aws.amazon.com/cloudwatch/).

## Key features

- Aggregate up to 100 metrics using a single [CloudWatch EMF](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format_Specification.html) object (large JSON blob)
- Validate your metrics against common metric definition mistakes (for example, metric unit, values, max dimensions, max metrics)
- Metrics are created asynchronously by the CloudWatch service. You do not need any custom stacks, and there is no impact to Lambda function latency
- Context manager to create a one-off metric with a different dimension
- Ahead-of-Time compilation to native code support [AOT](https://docs.aws.amazon.com/lambda/latest/dg/dotnet-native-aot.html) from version 1.7.0
- Support for AspNetCore middleware and filters to capture metrics for HTTP requests

## Breaking changes from V1

Info
Looking for v1-specific documentation? Please go to [Metrics v1](/lambda/dotnet/core/metrics-v1)

- **`Dimensions`** outputs as an array of arrays instead of an array of objects. Example: `Dimensions: [["service", "Environment"]]` instead of `Dimensions: ["service", "Environment"]`
- **`FunctionName`** is no longer added as a default dimension; it is only added to the cold start metric.
- **`Default Dimensions`** can now be included in cold start metrics. This is a potential breaking change if you were relying on the absence of default dimensions in cold start metrics when searching.

Metrics showcase - Metrics Explorer

## Installation

Powertools for AWS Lambda (.NET) is available as NuGet packages.
You can install the packages from [NuGet Gallery](https://www.nuget.org/packages?q=AWS+Lambda+Powertools*) or from the Visual Studio editor by searching `AWS.Lambda.Powertools*` to see the various utilities available.

- [AWS.Lambda.Powertools.Metrics](https://www.nuget.org/packages?q=AWS.Lambda.Powertools.Metrics): `dotnet add package AWS.Lambda.Powertools.Metrics`

## Terminologies

If you're new to Amazon CloudWatch, there are a few terms you must be aware of before using this utility:

- **Namespace**. It's the highest level container that will group multiple metrics from multiple services for a given application, for example `ServerlessEcommerce`.
- **Dimensions**. Metrics metadata in key-value format. They help you slice and dice metrics visualization, for example the `ColdStart` metric by Payment `service`.
- **Metric**. It's the name of the metric, for example: SuccessfulBooking or UpdatedBooking.
- **Unit**. It's a value representing the unit of measure for the corresponding metric, for example: Count or Seconds.
- **Resolution**. It's a value representing the storage resolution for the corresponding metric. Metrics can be either Standard or High resolution. Read more [here](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html#Resolution_definition).

Visit the AWS documentation for a complete explanation of [Amazon CloudWatch concepts](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html).

Metric terminology, visually explained

## Getting started

**`Metrics`** is implemented as a Singleton to keep track of your aggregate metrics in memory and make them accessible anywhere in your code. To guarantee that metrics are flushed properly, the **`MetricsAttribute`** must be added to the Lambda handler.

Metrics has three global settings that will be used across all metrics emitted.
Use your application or main service as the metric namespace to easily group all metrics:

| Setting | Description | Environment variable | Decorator parameter |
| --- | --- | --- | --- |
| **Metric namespace** | Logical container where all metrics will be placed e.g. `MyCompanyEcommerce` | `POWERTOOLS_METRICS_NAMESPACE` | `Namespace` |
| **Service** | Optionally, sets **Service** metric dimension across all metrics e.g. `payment` | `POWERTOOLS_SERVICE_NAME` | `Service` |
| **Disable Powertools Metrics** | Optionally, disables all Powertools metrics | `POWERTOOLS_METRICS_DISABLED` | N/A |

Info
`POWERTOOLS_METRICS_DISABLED` will not disable default metrics created by AWS services.

Autocomplete Metric Units

All parameters in the **`Metrics`** attribute are optional. The following rules apply:

- **Namespace:** **`Empty`** string by default. You can specify it either in code or via an environment variable. If not present before flushing metrics, a **`SchemaValidationException`** will be thrown.
- **Service:** **`service_undefined`** by default. You can specify it either in code or via an environment variable.
- **CaptureColdStart:** **`false`** by default.
- **RaiseOnEmptyMetrics:** **`false`** by default.

### Metrics object

#### Attribute

The **`MetricsAttribute`** is a class-level attribute that can be used to set the namespace and service for all metrics emitted by the lambda handler.

```
using AWS.Lambda.Powertools.Metrics;

[Metrics(Namespace = "ExampleApplication", Service = "Booking")]
public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
{
    ...
}
```

#### Methods

The **`Metrics`** class provides methods to add metrics, dimensions, and metadata to the metrics object.
```
using AWS.Lambda.Powertools.Metrics;

public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
{
    Metrics.AddMetric("SuccessfulBooking", 1, MetricUnit.Count);
    Metrics.AddDimension("Environment", "Prod");
    Metrics.AddMetadata("BookingId", "683EEB2D-B2F3-4075-96EE-788E6E2EED45");
    ...
}
```

#### Initialization

The **`Metrics`** object is initialized as a Singleton and can be accessed anywhere in your code. It can also be initialized with the `Configure` or `Builder` patterns in your Lambda constructor, which is the best option for testing.

Configure:

```
using AWS.Lambda.Powertools.Metrics;

public Function()
{
    Metrics.Configure(options =>
    {
        options.Namespace = "dotnet-powertools-test";
        options.Service = "testService";
        options.CaptureColdStart = true;
        options.DefaultDimensions = new Dictionary<string, string>
        {
            { "Environment", "Prod" },
            { "Another", "One" }
        };
    });
}

[Metrics]
public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
{
    Metrics.AddMetric("SuccessfulBooking", 1, MetricUnit.Count);
    ...
}
```

Builder:

```
using AWS.Lambda.Powertools.Metrics;

private readonly IMetrics _metrics;

public Function()
{
    _metrics = new MetricsBuilder()
        .WithCaptureColdStart(true)
        .WithService("testService")
        .WithNamespace("dotnet-powertools-test")
        .WithDefaultDimensions(new Dictionary<string, string>
        {
            { "Environment", "Prod1" },
            { "Another", "One" }
        }).Build();
}

[Metrics]
public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
{
    _metrics.AddMetric("SuccessfulBooking", 1, MetricUnit.Count);
    ...
}
```

### Creating metrics

You can create metrics using the **`AddMetric`** method, and you can create dimensions for all your aggregate metrics using the **`AddDimension`** method.
```
using AWS.Lambda.Powertools.Metrics;

public class Function
{
    [Metrics(Namespace = "ExampleApplication", Service = "Booking")]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        Metrics.AddMetric("SuccessfulBooking", 1, MetricUnit.Count);
    }
}
```

```
using AWS.Lambda.Powertools.Metrics;

public class Function
{
    [Metrics(Namespace = "ExampleApplication", Service = "Booking")]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        Metrics.AddDimension("Environment", "Prod");
        Metrics.AddMetric("SuccessfulBooking", 1, MetricUnit.Count);
    }
}
```

Autocomplete Metric Units
The `MetricUnit` enum facilitates finding a metric unit supported by CloudWatch.

Metrics overflow
CloudWatch EMF supports a max of 100 metrics per batch. The Metrics utility will flush all metrics when adding the 100th metric. Subsequent metrics, e.g. the 101st, will be aggregated into a new EMF object, for your convenience.

Metric value must be a positive number
Metric values must be a positive number, otherwise an `ArgumentException` will be thrown.

Do not create metrics or dimensions outside the handler
Metrics or dimensions added in the global scope will only be added during cold start. Disregard if that's the intended behavior.

### Adding high-resolution metrics

You can create [high-resolution metrics](https://aws.amazon.com/about-aws/whats-new/2023/02/amazon-cloudwatch-high-resolution-metric-extraction-structured-logs/) by passing `MetricResolution` as a parameter to `AddMetric`.

When is it useful?
High-resolution metrics are data with a granularity of one second and are very useful in several situations such as telemetry, time series, real-time incident management, and others.
```
using AWS.Lambda.Powertools.Metrics;

public class Function
{
    [Metrics(Namespace = "ExampleApplication", Service = "Booking")]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        // Publish a metric with standard resolution i.e. StorageResolution = 60
        Metrics.AddMetric("SuccessfulBooking", 1, MetricUnit.Count, MetricResolution.Standard);

        // Publish a metric with high resolution i.e. StorageResolution = 1
        Metrics.AddMetric("FailedBooking", 1, MetricUnit.Count, MetricResolution.High);

        // The last parameter (storage resolution) is optional
        Metrics.AddMetric("SuccessfulUpgrade", 1, MetricUnit.Count);
    }
}
```

Autocomplete Metric Resolutions
Use the `MetricResolution` enum to easily find a metric resolution supported by CloudWatch.

### Adding default dimensions

You can use the **`SetDefaultDimensions`** method to persist dimensions across Lambda invocations.

```
using AWS.Lambda.Powertools.Metrics;

public class Function
{
    private Dictionary<string, string> _defaultDimensions = new Dictionary<string, string>
    {
        {"Environment", "Prod"},
        {"Another", "One"}
    };

    [Metrics(Namespace = "ExampleApplication", Service = "Booking")]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        Metrics.SetDefaultDimensions(_defaultDimensions);
        Metrics.AddMetric("SuccessfulBooking", 1, MetricUnit.Count);
    }
}
```

### Adding default dimensions with cold start metric

You can use the Builder or Configure patterns in your Lambda class constructor to set default dimensions.
```
using AWS.Lambda.Powertools.Metrics;

public class Function
{
    private readonly IMetrics _metrics;

    public Function()
    {
        _metrics = new MetricsBuilder()
            .WithCaptureColdStart(true)
            .WithService("testService")
            .WithNamespace("dotnet-powertools-test")
            .WithDefaultDimensions(new Dictionary<string, string>
            {
                { "Environment", "Prod1" },
                { "Another", "One" }
            }).Build();
    }

    [Metrics]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        _metrics.AddMetric("SuccessfulBooking", 1, MetricUnit.Count);
        ...
}
```

```
using AWS.Lambda.Powertools.Metrics;

public class Function
{
    public Function()
    {
        Metrics.Configure(options =>
        {
            options.Namespace = "dotnet-powertools-test";
            options.Service = "testService";
            options.CaptureColdStart = true;
            options.DefaultDimensions = new Dictionary<string, string>
            {
                { "Environment", "Prod" },
                { "Another", "One" }
            };
        });
    }

    [Metrics]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        Metrics.AddMetric("SuccessfulBooking", 1, MetricUnit.Count);
        ...
}
```

### Adding dimensions

You can add dimensions to your metrics using the **`AddDimension`** method.

```
using AWS.Lambda.Powertools.Metrics;

public class Function
{
    [Metrics(Namespace = "ExampleApplication", Service = "Booking")]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        Metrics.AddDimension("Environment", "Prod");
        Metrics.AddMetric("SuccessfulBooking", 1, MetricUnit.Count);
    }
}
```

```
{
  "SuccessfulBooking": 1.0,
  "_aws": {
    "Timestamp": 1592234975665,
    "CloudWatchMetrics": [
      {
        "Namespace": "ExampleApplication",
        "Dimensions": [
          [
            "service",
            "Environment"
          ]
        ],
        "Metrics": [
          {
            "Name": "SuccessfulBooking",
            "Unit": "Count"
          }
        ]
      }
    ]
  },
  "service": "ExampleService",
  "Environment": "Prod"
}
```

### Flushing metrics

With the **`MetricsAttribute`**, all your metrics are validated, serialized and flushed to standard output when the lambda handler completes execution or when you add the 100th metric to memory.
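The automatic flush at the 100-metric boundary described above can be sketched as follows. This is an illustrative sketch only: the loop bound and the generated metric names are hypothetical, and the snippet assumes the same usings and Lambda event types as the examples on this page.

```csharp
using AWS.Lambda.Powertools.Metrics;

public class Function
{
    [Metrics(Namespace = "ExampleApplication", Service = "Booking")]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        // Adding the 100th metric triggers an automatic flush of the first
        // EMF object to standard output; metrics 101-150 are aggregated
        // into a new EMF object that is flushed when the handler completes.
        for (var i = 1; i <= 150; i++)
        {
            Metrics.AddMetric($"Operation{i}", 1, MetricUnit.Count);
        }
    }
}
```

Each flush emits one EMF JSON blob, so an invocation like this produces two blobs in CloudWatch Logs rather than one.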
You can also flush metrics manually by calling the **`Flush`** method. During metrics validation, if no metrics are provided then a warning will be logged, but no exception will be raised.

```
using AWS.Lambda.Powertools.Metrics;

public class Function
{
    [Metrics(Namespace = "ExampleApplication", Service = "Booking")]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        Metrics.AddMetric("SuccessfulBooking", 1, MetricUnit.Count);
        Metrics.Flush();
    }
}
```

```
{
  "BookingConfirmation": 1.0,
  "_aws": {
    "Timestamp": 1592234975665,
    "CloudWatchMetrics": [
      {
        "Namespace": "ExampleApplication",
        "Dimensions": [
          [
            "service"
          ]
        ],
        "Metrics": [
          {
            "Name": "BookingConfirmation",
            "Unit": "Count"
          }
        ]
      }
    ]
  },
  "service": "ExampleService"
}
```

Metric validation
If metrics are provided, and any of the following criteria are not met, a **`SchemaValidationException`** will be raised:

- Maximum of 30 dimensions
- Namespace is set
- Metric units must be [supported by CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_MetricDatum.html)

We do not emit 0 as a value for the ColdStart metric for cost reasons. [Let us know](https://github.com/aws-powertools/powertools-lambda-dotnet/issues/new?assignees=&labels=feature-request%2Ctriage&template=feature_request.yml&title=Feature+request%3A+TITLE) if you'd prefer a flag to override it

### Raising SchemaValidationException on empty metrics

If you want to ensure that at least one metric is emitted, you can pass **`RaiseOnEmptyMetrics`** to the Metrics attribute:

```
using AWS.Lambda.Powertools.Metrics;

public class Function
{
    [Metrics(RaiseOnEmptyMetrics = true)]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        ...
```

### Capturing cold start metric

You can optionally capture cold start metrics by setting the **`CaptureColdStart`** parameter to `true`.
```
using AWS.Lambda.Powertools.Metrics;

public class Function
{
    [Metrics(CaptureColdStart = true)]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        ...
```

```
using AWS.Lambda.Powertools.Metrics;

public class Function
{
    private readonly IMetrics _metrics;

    public Function()
    {
        _metrics = new MetricsBuilder()
            .WithCaptureColdStart(true)
            .WithService("testService")
            .WithNamespace("dotnet-powertools-test")
            .Build();
    }

    [Metrics]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        _metrics.AddMetric("SuccessfulBooking", 1, MetricUnit.Count);
        ...
}
```

```
using AWS.Lambda.Powertools.Metrics;

public class Function
{
    public Function()
    {
        Metrics.Configure(options =>
        {
            options.Namespace = "dotnet-powertools-test";
            options.Service = "testService";
            options.CaptureColdStart = true;
        });
    }

    [Metrics]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        Metrics.AddMetric("SuccessfulBooking", 1, MetricUnit.Count);
        ...
}
```

If it's a cold start invocation, this feature will:

- Create a separate EMF blob solely containing a metric named `ColdStart`
- Add `FunctionName` and `Service` dimensions

This has the advantage of keeping the cold start metric separate from your application metrics, where you might have unrelated dimensions.

## Advanced

### Adding metadata

You can add high-cardinality data as part of your Metrics log with the `AddMetadata` method. This is useful when you want to search highly contextual information along with your metrics in your logs.
Info
**This will not be available during metrics visualization** - Use **dimensions** for this purpose

Info
Adding metadata with a key that is the same as an existing metric will be ignored

```
using AWS.Lambda.Powertools.Metrics;

public class Function
{
    [Metrics(Namespace = "ExampleApplication", Service = "Booking")]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        Metrics.AddMetric("SuccessfulBooking", 1, MetricUnit.Count);
        Metrics.AddMetadata("BookingId", "683EEB2D-B2F3-4075-96EE-788E6E2EED45");
        ...
```

```
{
  "SuccessfulBooking": 1.0,
  "_aws": {
    "Timestamp": 1592234975665,
    "CloudWatchMetrics": [
      {
        "Namespace": "ExampleApplication",
        "Dimensions": [
          [
            "service"
          ]
        ],
        "Metrics": [
          {
            "Name": "SuccessfulBooking",
            "Unit": "Count"
          }
        ]
      }
    ]
  },
  "Service": "Booking",
  "BookingId": "683EEB2D-B2F3-4075-96EE-788E6E2EED45"
}
```

### Single metric with a different dimension

CloudWatch EMF uses the same dimensions across all your metrics. Use **`PushSingleMetric`** if you have a metric that should have different dimensions.

Info
Generally, this would be an edge case since you [pay for unique metrics](https://aws.amazon.com/cloudwatch/pricing). Keep the following formula in mind: **unique metric = (metric_name + dimension_name + dimension_value)**

```
using AWS.Lambda.Powertools.Metrics;

public class Function
{
    [Metrics(Namespace = "ExampleApplication", Service = "Booking")]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        Metrics.PushSingleMetric(
            name: "ColdStart",
            value: 1,
            unit: MetricUnit.Count,
            nameSpace: "ExampleApplication",
            service: "Booking");
        ...
```

By default it will skip all previously defined dimensions, including default dimensions. Use the `dimensions` argument if you want to reuse default dimensions or specify custom dimensions from a dictionary.
- `Metrics.DefaultDimensions`: Reuse default dimensions when using static Metrics
- `Options.DefaultDimensions`: Reuse default dimensions when using the Builder or Configure patterns

```
using AWS.Lambda.Powertools.Metrics;

public class Function
{
    [Metrics(Namespace = "ExampleApplication", Service = "Booking")]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        Metrics.PushSingleMetric(
            name: "ColdStart",
            value: 1,
            unit: MetricUnit.Count,
            nameSpace: "ExampleApplication",
            service: "Booking",
            dimensions: new Dictionary<string, string>
            {
                {"FunctionContext", "$LATEST"}
            });
        ...
```

```
using AWS.Lambda.Powertools.Metrics;

public class Function
{
    [Metrics(Namespace = "ExampleApplication", Service = "Booking")]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        Metrics.SetDefaultDimensions(new Dictionary<string, string>
        {
            { "Default", "SingleMetric" }
        });
        Metrics.PushSingleMetric("SingleMetric", 1, MetricUnit.Count, dimensions: Metrics.DefaultDimensions);
        ...
```

```
using AWS.Lambda.Powertools.Metrics;

public MetricsnBuilderHandler(IMetrics metrics = null)
{
    _metrics = metrics ?? new MetricsBuilder()
        .WithCaptureColdStart(true)
        .WithService("testService")
        .WithNamespace("dotnet-powertools-test")
        .WithDefaultDimensions(new Dictionary<string, string>
        {
            { "Environment", "Prod1" },
            { "Another", "One" }
        }).Build();
}

public void HandlerSingleMetricDimensions()
{
    _metrics.PushSingleMetric("SuccessfulBooking", 1, MetricUnit.Count, dimensions: _metrics.Options.DefaultDimensions);
}
...
```

### Cold start Function Name dimension

In cases where you want to customize the `FunctionName` dimension in cold start metrics, you can override it. This is useful when you want to maintain the same name in the case of auto-generated handler names (CDK, top-level statement functions, etc.)
Example: ``` using AWS.Lambda.Powertools.Metrics; public class Function { [Metrics(FunctionName = "MyFunctionName", Namespace = "ExampleApplication", Service = "Booking")] public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { Metrics.AddMetric("SuccessfulBooking", 1, MetricUnit.Count); ... } ``` ``` using AWS.Lambda.Powertools.Metrics; public class Function { public Function() { Metrics.Configure(options => { options.Namespace = "dotnet-powertools-test"; options.Service = "testService"; options.CaptureColdStart = true; options.FunctionName = "MyFunctionName"; }); } [Metrics] public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { Metrics.AddMetric("SuccessfulBooking", 1, MetricUnit.Count); ... } ``` ## AspNetCore ### Installation To use the Metrics middleware in an ASP.NET Core application, you need to install the `AWS.Lambda.Powertools.Metrics.AspNetCore` NuGet package. ``` dotnet add package AWS.Lambda.Powertools.Metrics.AspNetCore ``` ### UseMetrics() Middleware The `UseMetrics` middleware is an extension method for the `IApplicationBuilder` interface. It adds a metrics middleware to the specified application builder, which captures cold start metrics (if enabled) and flushes metrics on function exit. 
#### Example

```
using AWS.Lambda.Powertools.Metrics.AspNetCore.Http;

var builder = WebApplication.CreateBuilder(args);

// Configure metrics
builder.Services.AddSingleton(_ => new MetricsBuilder()
    .WithNamespace("MyApi") // Namespace for the metrics
    .WithService("WeatherService") // Service name for the metrics
    .WithCaptureColdStart(true) // Capture cold start metrics
    .WithDefaultDimensions(new Dictionary<string, string> // Default dimensions for the metrics
    {
        {"Environment", "Prod"},
        {"Another", "One"}
    })
    .Build()); // Build the metrics

builder.Services.AddAWSLambdaHosting(LambdaEventSource.RestApi);

var app = builder.Build();

app.UseMetrics(); // Add the metrics middleware

app.MapGet("/powertools", (IMetrics metrics) =>
{
    // add custom metrics
    metrics.AddMetric("MyCustomMetric", 1, MetricUnit.Count);
    // flush metrics - this is required to ensure metrics are sent to CloudWatch
    metrics.Flush();
});

app.Run();
```

Here is the highlighted `UseMetrics` method:

```
/// <summary>
/// Adds a metrics middleware to the specified application builder.
/// This will capture cold start (if CaptureColdStart is enabled) metrics and flush metrics on function exit.
/// </summary>
/// <param name="app">The application builder to add the metrics middleware to.</param>
/// <returns>The application builder with the metrics middleware added.</returns>
public static IApplicationBuilder UseMetrics(this IApplicationBuilder app)
{
    app.UseMiddleware<MetricsMiddleware>();
    return app;
}
```

Explanation:

- The method is defined as an extension method for the `IApplicationBuilder` interface.
- It adds a `MetricsMiddleware` to the application builder using the `UseMiddleware` method.
- The `MetricsMiddleware` captures and records metrics for HTTP requests, including cold start metrics if the `CaptureColdStart` option is enabled.

### WithMetrics() filter

The `WithMetrics` method is an extension method for the `RouteHandlerBuilder` class. It adds a metrics filter to the specified route handler builder, which captures cold start metrics (if enabled) and flushes metrics on function exit.
#### Example

```
using AWS.Lambda.Powertools.Metrics;
using AWS.Lambda.Powertools.Metrics.AspNetCore.Http;

var builder = WebApplication.CreateBuilder(args);

// Configure metrics
builder.Services.AddSingleton(_ => new MetricsBuilder()
    .WithNamespace("MyApi") // Namespace for the metrics
    .WithService("WeatherService") // Service name for the metrics
    .WithCaptureColdStart(true) // Capture cold start metrics
    .WithDefaultDimensions(new Dictionary<string, string> // Default dimensions for the metrics
    {
        {"Environment", "Prod"},
        {"Another", "One"}
    })
    .Build()); // Build the metrics

// Add AWS Lambda support. When the application is run in Lambda, Kestrel is swapped out as the web server with Amazon.Lambda.AspNetCoreServer. This
// package will act as the web server translating requests and responses between the Lambda event source and ASP.NET Core.
builder.Services.AddAWSLambdaHosting(LambdaEventSource.RestApi);

var app = builder.Build();

app.MapGet("/powertools", (IMetrics metrics) =>
{
    // add custom metrics
    metrics.AddMetric("MyCustomMetric", 1, MetricUnit.Count);
    // flush metrics - this is required to ensure metrics are sent to CloudWatch
    metrics.Flush();
})
.WithMetrics();

app.Run();
```

Here is the highlighted `WithMetrics` method:

```
/// <summary>
/// Adds a metrics filter to the specified route handler builder.
/// This will capture cold start (if CaptureColdStart is enabled) metrics and flush metrics on function exit.
/// </summary>
/// <param name="builder">The route handler builder to add the metrics filter to.</param>
/// <returns>The route handler builder with the metrics filter added.</returns>
public static RouteHandlerBuilder WithMetrics(this RouteHandlerBuilder builder)
{
    builder.AddEndpointFilter<MetricsFilter>();
    return builder;
}
```

Explanation:

- The method is defined as an extension method for the `RouteHandlerBuilder` class.
- It adds a `MetricsFilter` to the route handler builder using the `AddEndpointFilter` method.
- The `MetricsFilter` captures and records metrics for HTTP endpoints, including cold start metrics if the `CaptureColdStart` option is enabled.
- The method returns the modified `RouteHandlerBuilder` instance with the metrics filter added.

## Testing your code

### Unit testing

To test your code that uses the Metrics utility, you can use the `TestLambdaContext` class from the `Amazon.Lambda.TestUtilities` package. You can also use the `IMetrics` interface to mock the Metrics utility in your tests.

Here is an example of how you can test a Lambda function that uses the Metrics utility:

#### Lambda Function

```
using System.Collections.Generic;
using Amazon.Lambda.Core;

public class MetricsnBuilderHandler
{
    private readonly IMetrics _metrics;

    // Allow injection of IMetrics for testing
    public MetricsnBuilderHandler(IMetrics metrics = null)
    {
        _metrics = metrics ?? new MetricsBuilder()
            .WithCaptureColdStart(true)
            .WithService("testService")
            .WithNamespace("dotnet-powertools-test")
            .WithDefaultDimensions(new Dictionary<string, string>
            {
                { "Environment", "Prod1" },
                { "Another", "One" }
            }).Build();
    }

    [Metrics]
    public void Handler(ILambdaContext context)
    {
        _metrics.AddMetric("SuccessfulBooking", 1, MetricUnit.Count);
    }
}
```

#### Unit Tests

```
[Fact]
public void Handler_With_Builder_Should_Configure_In_Constructor()
{
    // Arrange
    var handler = new MetricsnBuilderHandler();

    // Act
    handler.Handler(new TestLambdaContext
    {
        FunctionName = "My_Function_Name"
    });

    // Get the output and parse it
    var metricsOutput = _consoleOut.ToString();

    // Assert cold start
    Assert.Contains(
        "\"CloudWatchMetrics\":[{\"Namespace\":\"dotnet-powertools-test\",\"Metrics\":[{\"Name\":\"ColdStart\",\"Unit\":\"Count\"}],\"Dimensions\":[[\"Service\",\"Environment\",\"Another\",\"FunctionName\"]]}]},\"Service\":\"testService\",\"Environment\":\"Prod1\",\"Another\":\"One\",\"FunctionName\":\"My_Function_Name\",\"ColdStart\":1}",
        metricsOutput);

    // Assert SuccessfulBooking metric
    Assert.Contains(
        "\"CloudWatchMetrics\":[{\"Namespace\":\"dotnet-powertools-test\",\"Metrics\":[{\"Name\":\"SuccessfulBooking\",\"Unit\":\"Count\"}],\"Dimensions\":[[\"Service\",\"Environment\",\"Another\",\"FunctionName\"]]}]},\"Service\":\"testService\",\"Environment\":\"Prod1\",\"Another\":\"One\",\"FunctionName\":\"My_Function_Name\",\"SuccessfulBooking\":1}",
        metricsOutput);
}

[Fact]
public void Handler_With_Builder_Should_Configure_In_Constructor_Mock()
{
    var metricsMock = Substitute.For<IMetrics>();

    metricsMock.Options.Returns(new MetricsOptions
    {
        CaptureColdStart = true,
        Namespace = "dotnet-powertools-test",
        Service = "testService",
        DefaultDimensions = new Dictionary<string, string>
        {
            { "Environment", "Prod" },
            { "Another", "One" }
        }
    });

    Metrics.UseMetricsForTests(metricsMock);

    var sut = new MetricsnBuilderHandler(metricsMock);

    // Act
    sut.Handler(new TestLambdaContext
    {
        FunctionName = "My_Function_Name"
    });

    metricsMock.Received(1).PushSingleMetric("ColdStart", 1, MetricUnit.Count, "dotnet-powertools-test", service: "testService", Arg.Any<Dictionary<string, string>>());
    metricsMock.Received(1).AddMetric("SuccessfulBooking", 1, MetricUnit.Count);
}
```

### Environment variables

Tip
Ignore this section, if:

- You are explicitly setting the namespace/default dimension via the `Namespace` and `Service` parameters
- You're not instantiating `Metrics` in the global namespace

For example, `[Metrics(Namespace = "ExampleApplication", Service = "Booking")]`

Make sure to set `POWERTOOLS_METRICS_NAMESPACE` and `POWERTOOLS_SERVICE_NAME` before running your tests to prevent failing with a `SchemaValidationException`. You can set them before you run tests by adding the environment variable:

```
Environment.SetEnvironmentVariable("POWERTOOLS_METRICS_NAMESPACE","AWSLambdaPowertools");
```

Powertools for AWS Lambda (.NET) tracing is an opinionated thin wrapper for the [AWS X-Ray .NET SDK](https://github.com/aws/aws-xray-sdk-dotnet/) and provides functionality to reduce the overhead of performing common tracing tasks.
## Key Features

- Helper methods to improve the developer experience for creating [custom AWS X-Ray subsegments](https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-dotnet-subsegments.html).
- Capture cold start as annotation.
- Capture function responses and full exceptions as metadata.
- Better experience when developing with multiple threads.
- Auto-patch supported modules by AWS X-Ray
- Auto-disable when not running in an AWS Lambda environment
- Ahead-of-Time compilation to native code support [AOT](https://docs.aws.amazon.com/lambda/latest/dg/dotnet-native-aot.html) from version 1.5.0

## Installation

Powertools for AWS Lambda (.NET) is available as NuGet packages. You can install the packages from [NuGet Gallery](https://www.nuget.org/packages?q=AWS+Lambda+Powertools*) or from the Visual Studio editor by searching `AWS.Lambda.Powertools*` to see the various utilities available.

- [AWS.Lambda.Powertools.Tracing](https://www.nuget.org/packages?q=AWS.Lambda.Powertools.Tracing): `dotnet add package AWS.Lambda.Powertools.Tracing`

## Getting Started

Tracer relies on the AWS X-Ray SDK over [OpenTelemetry Distro (ADOT)](https://aws-otel.github.io/docs/getting-started/lambda) for optimal cold start (lower latency).

Before you use this utility, your AWS Lambda function [must have permissions](https://docs.aws.amazon.com/lambda/latest/dg/services-xray.html#services-xray-permissions) to send traces to AWS X-Ray.

To enable active tracing on an AWS Serverless Application Model (AWS SAM) AWS::Serverless::Function resource, use the `Tracing` property. You can use the Globals section of the AWS SAM template to set this for all your functions.

### Using AWS Serverless Application Model (AWS SAM)

```
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      ...
      Runtime: dotnet6
      Tracing: Active
      Environment:
        Variables:
          POWERTOOLS_SERVICE_NAME: example
```

The Powertools for AWS Lambda (.NET) service name is used as the X-Ray namespace.
This can be set using the environment variable `POWERTOOLS_SERVICE_NAME`.

## Full list of environment variables

| Environment variable | Description | Default |
| --- | --- | --- |
| **POWERTOOLS_SERVICE_NAME** | Sets service name used for tracing namespace, metrics dimension and structured logging | `"service_undefined"` |
| **POWERTOOLS_TRACE_DISABLED** | Disables tracing | `false` |
| **POWERTOOLS_TRACER_CAPTURE_RESPONSE** | Captures Lambda or method return as metadata. | `true` |
| **POWERTOOLS_TRACER_CAPTURE_ERROR** | Captures Lambda or method exception as metadata. | `true` |

### Lambda handler

To enable Powertools for AWS Lambda (.NET) tracing in your function, add the `[Tracing]` attribute to your `FunctionHandler` method. Adding the attribute to any other method captures that method as a separate subsegment automatically. You can optionally customize the segment name that appears in traces.

```
public class Function
{
    [Tracing]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        await BusinessLogic1()
            .ConfigureAwait(false);

        await BusinessLogic2()
            .ConfigureAwait(false);
    }

    [Tracing]
    private async Task BusinessLogic1()
    {
    }

    [Tracing]
    private async Task BusinessLogic2()
    {
    }
}
```

```
public class Function
{
    [Tracing(SegmentName = "YourCustomName")]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        ...
    }
}
```

By default, this attribute will automatically record method responses and exceptions. You can change the default behavior by setting the environment variables `POWERTOOLS_TRACER_CAPTURE_RESPONSE` and `POWERTOOLS_TRACER_CAPTURE_ERROR` as needed. Optionally, you can override the behavior with the supported `CaptureMode` values to record the response, the exception, or both.

Returning sensitive information from your Lambda handler or functions, where `Tracing` is used?
You can disable the attribute from capturing responses and exceptions as tracing metadata with **`CaptureMode = TracingCaptureMode.Disabled`**, or globally by setting the environment variables **`POWERTOOLS_TRACER_CAPTURE_RESPONSE`** and **`POWERTOOLS_TRACER_CAPTURE_ERROR`** to **`false`**.

```
public class Function
{
    [Tracing(CaptureMode = TracingCaptureMode.Disabled)]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        ...
    }
}
```

```
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      ...
      Runtime: dotnet6
      Tracing: Active
      Environment:
        Variables:
          POWERTOOLS_TRACER_CAPTURE_RESPONSE: false
          POWERTOOLS_TRACER_CAPTURE_ERROR: false
```

### Annotations & Metadata

**Annotations** are key-value pairs associated with traces and indexed by AWS X-Ray. You can use them to filter traces and to create [Trace Groups](https://aws.amazon.com/about-aws/whats-new/2018/11/aws-xray-adds-the-ability-to-group-traces/) to slice and dice your transactions.

**Metadata** are key-value pairs also associated with traces but not indexed by AWS X-Ray. You can use them to add additional context for an operation using any native object.

You can add annotations using the `AddAnnotation()` method from `Tracing`:

```
using AWS.Lambda.Powertools.Tracing;

public class Function
{
    [Tracing]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        Tracing.AddAnnotation("annotation", "value");
    }
}
```

You can add metadata using the `AddMetadata()` method from `Tracing`:

```
using AWS.Lambda.Powertools.Tracing;

public class Function
{
    [Tracing]
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        Tracing.AddMetadata("content", "value");
    }
}
```

## Utilities

The Tracing module comes with utility methods for when you don't want to use the attribute to capture a code block under a subsegment, or when you are doing multithreaded programming. Refer to the examples below.
```
using AWS.Lambda.Powertools.Tracing;

public class Function
{
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        Tracing.WithSubsegment("loggingResponse", (subsegment) =>
        {
            // Some business logic
        });

        Tracing.WithSubsegment("localNamespace", "loggingResponse", (subsegment) =>
        {
            // Some business logic
        });
    }
}
```

```
using AWS.Lambda.Powertools.Tracing;

public class Function
{
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        // Extract existing trace data
        var entity = Tracing.GetEntity();

        var task = Task.Run(() =>
        {
            Tracing.WithSubsegment("InlineLog", entity, (subsegment) =>
            {
                // Business logic in separate task
            });
        });
    }
}
```

## Instrumenting SDK clients

You should make sure to instrument the SDK clients explicitly based on the function dependency. You can instrument all of your AWS SDK for .NET clients by calling `RegisterForAllServices` before you create them.

```
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using AWS.Lambda.Powertools.Tracing;

public class Function
{
    private static IAmazonDynamoDB _dynamoDb;

    /// <summary>
    /// Function constructor
    /// </summary>
    public Function()
    {
        Tracing.RegisterForAllServices();
        _dynamoDb = new AmazonDynamoDBClient();
    }
}
```

To instrument clients for some services and not others, call `Register` instead of `RegisterForAllServices`, passing the service's client interface as the type parameter.

```
Tracing.Register<IAmazonDynamoDB>()
```

This functionality is a thin wrapper for the AWS X-Ray .NET SDK.
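For instance, a function that only depends on DynamoDB and S3 could register just those two client interfaces, a minimal sketch assuming the `AWSSDK.DynamoDBv2` and `AWSSDK.S3` packages are referenced:

```
using Amazon.DynamoDBv2;
using Amazon.S3;
using AWS.Lambda.Powertools.Tracing;

public class Function
{
    private static IAmazonDynamoDB _dynamoDb;
    private static IAmazonS3 _s3;

    public Function()
    {
        // Register only the clients this function actually uses,
        // instead of calling Tracing.RegisterForAllServices()
        Tracing.Register<IAmazonDynamoDB>();
        Tracing.Register<IAmazonS3>();

        // Clients created after registration are instrumented for X-Ray
        _dynamoDb = new AmazonDynamoDBClient();
        _s3 = new AmazonS3Client();
    }
}
```

Registering only the clients you use keeps the instrumentation overhead limited to the calls you actually want traced.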
Refer to the details on [how to instrument an SDK client with X-Ray](https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-dotnet-sdkclients.html).

## Instrumenting outgoing HTTP calls

```
using Amazon.XRay.Recorder.Handlers.System.Net;

public class Function
{
    private static HttpClient _httpClient;

    public Function()
    {
        _httpClient = new HttpClient(new HttpClientXRayTracingHandler(new HttpClientHandler()));
    }

    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        var myIp = await _httpClient.GetStringAsync("https://checkip.amazonaws.com/");
    }
}
```

More information about instrumenting [outgoing HTTP calls](https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-dotnet-httpclients.html).

## AOT Support

Native AOT trims your application code as part of the compilation to ensure that the binary is as small as possible. .NET 8 for Lambda provides improved trimming support compared to previous versions of .NET.

### WithTracing()

To use the Tracing utility with AOT support, you first need to add `WithTracing()` to the source generator you are using: either the default `SourceGeneratorLambdaJsonSerializer` or the Powertools Logging utility [source generator](../logging/#aot-support) `PowertoolsSourceGeneratorSerializer`.
Examples:

```
using Amazon.Lambda.RuntimeSupport;
using Amazon.Lambda.Serialization.SystemTextJson;
using AWS.Lambda.Powertools.Tracing;
using AWS.Lambda.Powertools.Tracing.Serializers;

private static async Task Main()
{
    // Delegate types shown for an API Gateway function;
    // MyJsonSerializerContext is your JsonSerializerContext
    Func<APIGatewayProxyRequest, ILambdaContext, Task<APIGatewayProxyResponse>> handler = FunctionHandler;
    await LambdaBootstrapBuilder.Create(handler,
            new SourceGeneratorLambdaJsonSerializer<MyJsonSerializerContext>()
                .WithTracing())
        .Build()
        .RunAsync();
}
```

```
using Amazon.Lambda.RuntimeSupport;
using AWS.Lambda.Powertools.Logging;
using AWS.Lambda.Powertools.Logging.Serializers;
using AWS.Lambda.Powertools.Tracing;
using AWS.Lambda.Powertools.Tracing.Serializers;

private static async Task Main()
{
    // Delegate types shown for an API Gateway function;
    // MyJsonSerializerContext is your JsonSerializerContext
    Func<APIGatewayProxyRequest, ILambdaContext, Task<APIGatewayProxyResponse>> handler = FunctionHandler;
    await LambdaBootstrapBuilder.Create(handler,
            new PowertoolsSourceGeneratorSerializer<MyJsonSerializerContext>()
                .WithTracing())
        .Build()
        .RunAsync();
}
```

### Publishing

Make sure you are publishing your code with `--self-contained true` and that you have `partial` in your `.csproj` file.

### Trimming

Trim warnings

```
<!-- Example: root the assembly that produces the trim warning -->
<ItemGroup>
    <TrimmerRootAssembly Include="AWSSDK.Core" />
</ItemGroup>
```

Note that when you receive a trim warning, adding the class that generates the warning to TrimmerRootAssembly might not resolve the issue. A trim warning indicates that the class is trying to access some other class that can't be determined until runtime. To avoid runtime errors, add this second class to TrimmerRootAssembly.

To learn more about managing trim warnings, see [Introduction to trim warnings](https://learn.microsoft.com/en-us/dotnet/core/deploying/trimming/fixing-warnings) in the Microsoft .NET documentation.

### Not supported

Currently, instrumenting SDK clients with `Tracing.RegisterForAllServices()` is not supported in AOT mode.

# Utilities

Event Handler for AWS AppSync real-time events.
```
stateDiagram-v2
    direction LR
    EventSource: AppSync Events
    EventHandlerResolvers: Publish & Subscribe events
    LambdaInit: Lambda invocation
    EventHandler: Event Handler
    EventHandlerResolver: Route event based on namespace/channel
    YourLogic: Run your registered handler function
    EventHandlerResolverBuilder: Adapts response to AppSync contract
    LambdaResponse: Lambda response

    state EventSource {
        EventHandlerResolvers
    }

    EventHandlerResolvers --> LambdaInit
    LambdaInit --> EventHandler
    EventHandler --> EventHandlerResolver

    state EventHandler {
        [*] --> EventHandlerResolver: app.resolve(event, context)
        EventHandlerResolver --> YourLogic
        YourLogic --> EventHandlerResolverBuilder
    }

    EventHandler --> LambdaResponse
```

## Key Features

- Easily handle publish and subscribe events with dedicated handler methods
- Automatic routing based on namespace and channel patterns
- Support for wildcard patterns to create catch-all handlers
- Process events in parallel or sequentially
- Control over event aggregation for batch processing
- Graceful error handling for individual events

## Terminology

**[AWS AppSync Events](https://docs.aws.amazon.com/appsync/latest/eventapi/event-api-welcome.html)**. A service that enables you to quickly build secure, scalable real-time WebSocket APIs without managing infrastructure or writing API code. It handles connection management, message broadcasting, authentication, and monitoring, reducing time to market and operational costs.

## Getting started

Tip: New to AppSync Real-time API? Visit [AWS AppSync Real-time documentation](https://docs.aws.amazon.com/appsync/latest/eventapi/event-api-getting-started.html) to understand how to set up subscriptions and pub/sub messaging.

### Required resources

You must have an existing AppSync Events API with real-time capabilities enabled and IAM permissions to invoke your Lambda function.
```
Resources:
  WebsocketAPI:
    Type: AWS::AppSync::Api
    Properties:
      EventConfig:
        AuthProviders:
          - AuthType: API_KEY
        ConnectionAuthModes:
          - AuthType: API_KEY
        DefaultPublishAuthModes:
          - AuthType: API_KEY
        DefaultSubscribeAuthModes:
          - AuthType: API_KEY
      Name: RealTimeEventAPI

  WebsocketApiKey:
    Type: AWS::AppSync::ApiKey
    Properties:
      ApiId: !GetAtt WebsocketAPI.ApiId
      Description: "API KEY"
      Expires: 365

  WebsocketAPINamespace:
    Type: AWS::AppSync::ChannelNamespace
    Properties:
      ApiId: !GetAtt WebsocketAPI.ApiId
      Name: powertools
```

### AppSync request and response format

AppSync Events uses a specific event format for Lambda requests and responses. In most scenarios, Powertools for AWS simplifies this interaction by automatically formatting resolver returns to match the expected AppSync response structure.

```
{
    "identity": "None",
    "result": "None",
    "request": {
        "headers": {
            "x-forwarded-for": "1.1.1.1, 2.2.2.2",
            "cloudfront-viewer-country": "US",
            "cloudfront-is-tablet-viewer": "false",
            "via": "2.0 xxxxxxxxxxxxxxxx.cloudfront.net (CloudFront)",
            "cloudfront-forwarded-proto": "https",
            "origin": "https://us-west-1.console.aws.amazon.com",
            "content-length": "217",
            "accept-language": "en-US,en;q=0.9",
            "host": "xxxxxxxxxxxxxxxx.appsync-api.us-west-1.amazonaws.com",
            "x-forwarded-proto": "https",
            "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.83 Safari/537.36",
            "accept": "*/*",
            "cloudfront-is-mobile-viewer": "false",
            "cloudfront-is-smarttv-viewer": "false",
            "accept-encoding": "gzip, deflate, br",
            "referer": "https://us-west-1.console.aws.amazon.com/appsync/home?region=us-west-1",
            "content-type": "application/json",
            "sec-fetch-mode": "cors",
            "x-amz-cf-id": "3aykhqlUwQeANU-HGY7E_guV5EkNeMMtwyOgiA==",
            "x-amzn-trace-id": "Root=1-5f512f51-fac632066c5e848ae714",
            "authorization": "eyJraWQiOiJScWFCSlJqYVJlM0hrSnBTUFpIcVRXazNOW...",
            "sec-fetch-dest": "empty",
            "x-amz-user-agent": "AWS-Console-AppSync/",
"cloudfront-is-desktop-viewer": "true", "sec-fetch-site": "cross-site", "x-forwarded-port": "443" }, "domainName":"None" }, "info":{ "channel":{ "path":"/default/channel", "segments":[ "default", "channel" ] }, "channelNamespace":{ "name":"default" }, "operation":"PUBLISH" }, "error":"None", "prev":"None", "stash":{ }, "outErrors":[ ], "events":[ { "payload":{ "data":"data_1" }, "id":"1" }, { "payload":{ "data":"data_2" }, "id":"2" } ] } ``` ``` { "events":[ { "payload":{ "data":"data_1" }, "id":"1" }, { "payload":{ "data":"data_2" }, "id":"2" } ] } ``` ``` { "events":[ { "error": "Error message", "id":"1" }, { "payload":{ "data":"data_2" }, "id":"2" } ] } ``` #### Events response with error When processing events with Lambda, you can return errors to AppSync in three ways: - **Item specific error:** Return an `error` key within each individual item's response. AppSync Events expects this format for item-specific errors. - **Fail entire request:** Return a JSON object with a top-level `error` key. This signals a general failure, and AppSync treats the entire request as unsuccessful. - **Unauthorized exception**: Raise the **UnauthorizedException** exception to reject a subscribe or publish request with HTTP 403. ### Resolver Important When you return `Resolve` or `ResolveAsync` from your handler it will automatically parse the incoming event data and invokes the appropriate handler based on the namespace/channel pattern you register. You can define your handlers for different event types using the `OnPublish()`, `OnPublishAggregate()`, and `OnSubscribe()` methods and their `Async` versions `OnPublishAsync()`, `OnPublishAggregateAsync()`, and `OnSubscribeAsync()`. 
```
using AWS.Lambda.Powertools.EventHandler.AppSyncEvents;

public class Function
{
    AppSyncEventsResolver _app;

    public Function()
    {
        _app = new AppSyncEventsResolver();

        _app.OnPublishAsync("/default/channel", async (payload) =>
        {
            // Handle events or
            // return unchanged payload
            return payload;
        });
    }

    public async Task<AppSyncEventsResponse> FunctionHandler(AppSyncEventsRequest input, ILambdaContext context)
    {
        return await _app.ResolveAsync(input, context);
    }
}
```

```
using Amazon.Lambda.RuntimeSupport;
using Amazon.Lambda.Serialization.SystemTextJson;
using AWS.Lambda.Powertools.EventHandler.AppSyncEvents;

var app = new AppSyncEventsResolver();

app.OnPublishAsync("/default/channel", async (payload) =>
{
    // Handle events or
    // return unchanged payload
    return payload;
});

async Task<AppSyncEventsResponse> Handler(AppSyncEventsRequest appSyncEvent, ILambdaContext context)
{
    return await app.ResolveAsync(appSyncEvent, context);
}

await LambdaBootstrapBuilder.Create((Func<AppSyncEventsRequest, ILambdaContext, Task<AppSyncEventsResponse>>)Handler,
        new DefaultLambdaJsonSerializer())
    .Build()
    .RunAsync();
```

```
app.OnSubscribe("/default/*", (payload) =>
{
    // Handle subscribe events
    // return true to allow subscription
    // return false or throw to reject subscription
    return true;
});
```

## Advanced

### Wildcard patterns and handler precedence

You can use wildcard patterns to create catch-all handlers for multiple channels or namespaces. This is particularly useful for centralizing logic that applies to multiple channels. When an event matches multiple handlers, the most specific pattern takes precedence.
```
app.OnPublish("/default/channel1", (payload) =>
{
    // This handler will be called for events on /default/channel1
    return payload;
});

app.OnPublish("/default/*", (payload) =>
{
    // This handler will be called for all channels in the default namespace
    // EXCEPT for /default/channel1 which has a more specific handler
    return payload;
});

app.OnPublish("/*", (payload) =>
{
    // This handler will be called for all channels in all namespaces
    // EXCEPT for those that have more specific handlers
    return payload;
});
```

Supported wildcard patterns

Only the following patterns are supported:

- `/namespace/*` - Matches all channels in the specified namespace
- `/*` - Matches all channels in all namespaces

Patterns like `/namespace/channel*` or `/namespace/*/subpath` are not supported.

More specific routes will always take precedence over less specific ones. For example, `/default/channel1` will take precedence over `/default/*`, which will take precedence over `/*`.

### Aggregated processing

Aggregate Processing: `OnPublishAggregate()` and `OnPublishAggregateAsync()` receive a list of all events, requiring you to manage the response format. Ensure your response includes results for each event in the expected [AppSync Request and Response Format](#appsync-request-and-response-format).

In some scenarios, you might want to process all events for a channel as a batch rather than individually.
This is useful when you need to:

- Optimize database operations by making a single batch query
- Ensure all events are processed together or not at all
- Apply custom error handling logic for the entire batch

```
app.OnPublishAggregate("/default/channel", (payload) =>
{
    var evt = new List<AppSyncEvent>();

    foreach (var item in payload.Events)
    {
        if (item.Payload["eventType"].ToString() == "data_2")
        {
            item.Payload["message"] = "Hello from /default/channel2 with data_2";
            item.Payload["data"] = new Dictionary<string, object> { { "key", "value" } };
        }

        evt.Add(item);
    }

    return new AppSyncEventsResponse
    {
        Events = evt
    };
});
```

### Handling errors

You can filter or reject events by raising exceptions in your resolvers or by formatting the payload according to the expected response structure. This instructs AppSync not to propagate that specific message, so subscribers will not receive it.

#### Handling errors with individual items

When processing items individually with `OnPublish()` and `OnPublishAsync()`, you can raise an exception to fail a specific item. When an exception is raised, the Event Handler will catch it and include the exception name and message in the response.

```
app.OnPublish("/default/channel", (payload) =>
{
    throw new Exception("My custom exception");
});
```

```
app.OnPublishAsync("/default/channel", async (payload) =>
{
    throw new Exception("My custom exception");
});
```

```
{
    "events": [
        {
            "error": "My custom exception",
            "id": "1"
        },
        {
            "payload": {
                "data": "data_2"
            },
            "id": "2"
        }
    ]
}
```

#### Handling errors with batch of items

When processing a batch of items with `OnPublishAggregate()` and `OnPublishAggregateAsync()`, you must format the payload according to the expected response format.
```
app.OnPublishAggregate("/default/channel", (payload) =>
{
    throw new Exception("My custom exception");
});
```

```
app.OnPublishAggregateAsync("/default/channel", async (payload) =>
{
    throw new Exception("My custom exception");
});
```

```
{
    "error": "My custom exception"
}
```

#### Authorization control

Raising an `UnauthorizedException` rejects the entire payload and causes the Lambda invocation to fail. This prevents Powertools for AWS from processing any messages and returns an error to AppSync.

- **When working with publish events**, Powertools for AWS will stop processing messages and subscribers will not receive any message.
- **When working with subscribe events**, the subscription won't be established.

```
app.OnPublish("/default/channel", (payload) =>
{
    throw new UnauthorizedException("My custom exception");
});
```

### Accessing Lambda context and event

You can access the original Lambda event or context for additional information.
These are accessible via the app instance:

```
app.OnPublish("/default/channel", (payload, ctx) =>
{
    payload["functionName"] = ctx.FunctionName;
    return payload;
});
```

## Event Handler workflow

#### Working with single items

```
sequenceDiagram
    participant Client
    participant AppSync
    participant Lambda
    participant EventHandler
    note over Client,EventHandler: Individual Event Processing (aggregate=False)
    Client->>+AppSync: Send multiple events to channel
    AppSync->>+Lambda: Invoke Lambda with batch of events
    Lambda->>+EventHandler: Process events with aggregate=False
    loop For each event in batch
        EventHandler->>EventHandler: Process individual event
    end
    EventHandler-->>-Lambda: Return array of processed events
    Lambda-->>-AppSync: Return event-by-event responses
    AppSync-->>-Client: Report individual event statuses
```

#### Working with aggregated items

```
sequenceDiagram
    participant Client
    participant AppSync
    participant Lambda
    participant EventHandler
    note over Client,EventHandler: Aggregate Processing Workflow
    Client->>+AppSync: Send multiple events to channel
    AppSync->>+Lambda: Invoke Lambda with batch of events
    Lambda->>+EventHandler: Process events with aggregate=True
    EventHandler->>EventHandler: Batch of events
    EventHandler->>EventHandler: Process entire batch at once
    EventHandler->>EventHandler: Format response for each event
    EventHandler-->>-Lambda: Return aggregated results
    Lambda-->>-AppSync: Return success responses
    AppSync-->>-Client: Confirm all events processed
```

#### Authorization fails for publish

```
sequenceDiagram
    participant Client
    participant AppSync
    participant Lambda
    participant EventHandler
    note over Client,EventHandler: Publish Event Authorization Flow
    Client->>AppSync: Publish message to channel
    AppSync->>Lambda: Invoke Lambda with publish event
    Lambda->>EventHandler: Process publish event
    alt Authorization Failed
        EventHandler->>EventHandler: Authorization check fails
        EventHandler->>Lambda: Raise UnauthorizedException
        Lambda->>AppSync: Return error response
        AppSync--xClient: Message not delivered
        AppSync--xAppSync: No distribution to subscribers
    else Authorization Passed
        EventHandler->>Lambda: Return successful response
        Lambda->>AppSync: Return processed event
        AppSync->>Client: Acknowledge message
        AppSync->>AppSync: Distribute to subscribers
    end
```

#### Authorization fails for subscribe

```
sequenceDiagram
    participant Client
    participant AppSync
    participant Lambda
    participant EventHandler
    note over Client,EventHandler: Subscribe Event Authorization Flow
    Client->>AppSync: Request subscription to channel
    AppSync->>Lambda: Invoke Lambda with subscribe event
    Lambda->>EventHandler: Process subscribe event
    alt Authorization Failed
        EventHandler->>EventHandler: Authorization check fails
        EventHandler->>Lambda: Raise UnauthorizedException
        Lambda->>AppSync: Return error response
        AppSync--xClient: Subscription denied (HTTP 403)
    else Authorization Passed
        EventHandler->>Lambda: Return successful response
        Lambda->>AppSync: Return authorization success
        AppSync->>Client: Subscription established
    end
```

## Testing your code

You can test your event handlers by passing a mocked or actual AppSync Events Lambda event.
### Testing publish events

```
[Fact]
public void Should_Return_Unchanged_Payload()
{
    // Arrange
    var lambdaContext = new TestLambdaContext();
    var app = new AppSyncEventsResolver();

    app.OnPublish("/default/channel", payload =>
    {
        // Handle channel events
        return payload;
    });

    // Act
    var result = app.Resolve(_appSyncEvent, lambdaContext);

    // Assert
    Assert.Equal("123", result.Events[0].Id);
    Assert.Equal("test data", result.Events[0].Payload?["data"].ToString());
}
```

```
{
    "identity": "None",
    "result": "None",
    "request": {
        "headers": {
            "x-forwarded-for": "1.1.1.1, 2.2.2.2",
            "cloudfront-viewer-country": "US",
            "cloudfront-is-tablet-viewer": "false",
            "via": "2.0 xxxxxxxxxxxxxxxx.cloudfront.net (CloudFront)",
            "cloudfront-forwarded-proto": "https",
            "origin": "https://us-west-1.console.aws.amazon.com",
            "content-length": "217",
            "accept-language": "en-US,en;q=0.9",
            "host": "xxxxxxxxxxxxxxxx.appsync-api.us-west-1.amazonaws.com",
            "x-forwarded-proto": "https",
            "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.83 Safari/537.36",
            "accept": "*/*",
            "cloudfront-is-mobile-viewer": "false",
            "cloudfront-is-smarttv-viewer": "false",
            "accept-encoding": "gzip, deflate, br",
            "referer": "https://us-west-1.console.aws.amazon.com/appsync/home?region=us-west-1",
            "content-type": "application/json",
            "sec-fetch-mode": "cors",
            "x-amz-cf-id": "3aykhqlUwQeANU-HGY7E_guV5EkNeMMtwyOgiA==",
            "x-amzn-trace-id": "Root=1-5f512f51-fac632066c5e848ae714",
            "authorization": "eyJraWQiOiJScWFCSlJqYVJlM0hrSnBTUFpIcVRXazNOW...",
            "sec-fetch-dest": "empty",
            "x-amz-user-agent": "AWS-Console-AppSync/",
            "cloudfront-is-desktop-viewer": "true",
            "sec-fetch-site": "cross-site",
            "x-forwarded-port": "443"
        },
        "domainName": "None"
    },
    "info": {
        "channel": {
            "path": "/default/channel",
            "segments": [
                "default",
                "channel"
            ]
        },
        "channelNamespace": {
            "name": "default"
        },
        "operation": "PUBLISH"
    },
    "error": "None",
    "prev": "None",
    "stash": {},
    "outErrors": [],
    "events": [
        {
            "payload": {
"data": "test data" }, "id":"123" } ] } ``` ### Testing subscribe events ``` [Fact] public async Task Should_Authorize_Subscription() { // Arrange var lambdaContext = new TestLambdaContext(); var app = new AppSyncEventsResolver(); app.OnSubscribeAsync("/default/*", async (info) => true); var subscribeEvent = new AppSyncEventsRequest { Info = new Information { Channel = new Channel { Path = "/default/channel", Segments = ["default", "channel"] }, Operation = AppSyncEventsOperation.Subscribe, ChannelNamespace = new ChannelNamespace { Name = "default" } } }; // Act var result = await app.ResolveAsync(subscribeEvent, lambdaContext); // Assert Assert.Null(result); } ``` The batch processing utility handles partial failures when processing batches from Amazon SQS, Amazon Kinesis Data Streams, and Amazon DynamoDB Streams. ``` stateDiagram-v2 direction LR BatchSource: Amazon SQS

Amazon Kinesis Data Streams

Amazon DynamoDB Streams

    LambdaInit: Lambda invocation
    BatchProcessor: Batch Processor
    RecordHandler: Record Handler function
    YourLogic: Your logic to process each batch item
    LambdaResponse: Lambda response

    BatchSource --> LambdaInit
    LambdaInit --> BatchProcessor
    BatchProcessor --> RecordHandler

    state BatchProcessor {
        [*] --> RecordHandler: Your function
        RecordHandler --> YourLogic
    }

    RecordHandler --> BatchProcessor: Collect results
    BatchProcessor --> LambdaResponse: Report items that failed processing
```

## Key features

- Reports batch item failures to reduce the number of retries for a record upon errors
- Simple interface to process each batch record
- Bring your own batch processor
- Parallel processing

## Background

When using SQS, Kinesis Data Streams, or DynamoDB Streams as a Lambda event source, your Lambda functions are triggered with a batch of messages. If your function fails to process any message from the batch, the entire batch returns to your queue or stream. This same batch is then retried until one of the following conditions is met first: **a)** your Lambda function returns a successful response, **b)** the record reaches its maximum retry attempts, or **c)** the records expire.

```
journey
    section Conditions
        Successful response: 5: Success
        Maximum retries: 3: Failure
        Records expired: 1: Failure
```

This behavior changes when you enable the Report Batch Item Failures feature in your Lambda function event source configuration:

- [**SQS queues**](#sqs-standard). Only messages reported as failures will return to the queue for a retry, while successful ones will be deleted.
- [**Kinesis data streams**](#kinesis-and-dynamodb-streams) and [**DynamoDB streams**](#kinesis-and-dynamodb-streams). A single reported failure will use its sequence number as the stream checkpoint. Multiple reported failures will use the lowest sequence number as the checkpoint.
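With Report Batch Item Failures enabled, the Lambda response only needs to identify the records that failed; this utility builds that response for you. A minimal sketch of the shape, following the Lambda partial batch response contract (the identifiers shown are placeholders):

```
{
    "batchItemFailures": [
        {
            "itemIdentifier": "id2"
        },
        {
            "itemIdentifier": "id4"
        }
    ]
}
```

For SQS, `itemIdentifier` is the message ID of the failed record; for Kinesis and DynamoDB streams, it is the record's sequence number.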
Warning: This utility lowers the chance of processing records more than once; it does not guarantee it.

We recommend implementing processing logic in an [idempotent manner](../idempotency/) wherever possible.

You can find more details on how Lambda works with either [SQS](https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html), [Kinesis](https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html), or [DynamoDB](https://docs.aws.amazon.com/lambda/latest/dg/with-ddb.html) in the AWS Documentation.

## Installation

You should install with NuGet:

```
Install-Package AWS.Lambda.Powertools.BatchProcessing
```

Or via the .NET Core command line interface:

```
dotnet add package AWS.Lambda.Powertools.BatchProcessing
```

## Getting started

For this feature to work, you need to **(1)** configure your Lambda function event source to use `ReportBatchItemFailures`, and **(2)** return [a specific response](https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html#services-sqs-batchfailurereporting) to report which records failed to be processed.

You use your preferred deployment framework to set the correct configuration, while this utility handles the correct response to be returned.

Batch processing can be configured with the settings below:

| Setting | Description | Environment variable | Default |
| --- | --- | --- | --- |
| **Error Handling Policy** | The error handling policy to apply during batch processing. | `POWERTOOLS_BATCH_ERROR_HANDLING_POLICY` | `DeriveFromEvent` |
| **Parallel Enabled** | Controls if parallel processing of batch items is enabled. | `POWERTOOLS_BATCH_PARALLEL_ENABLED` | `false` |
| **Max Degree of Parallelism** | The maximum degree of parallelism to apply if parallel processing is enabled. | `POWERTOOLS_BATCH_MAX_DEGREE_OF_PARALLELISM` | `1` |
| **Throw on Full Batch Failure** | Controls if a `BatchProcessingException` is thrown on full batch failure. | `POWERTOOLS_BATCH_THROW_ON_FULL_BATCH_FAILURE` | `true` |

### Required resources

The remaining sections of the documentation will rely on these samples. For completeness, this demonstrates IAM permissions and a Dead Letter Queue where batch records are sent after 2 retry attempts. You do not need any additional IAM permissions to use this utility, except for what each event source requires.

```
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: Example project demoing SQS Queue processing using the Batch Processing Utility in Powertools for AWS Lambda (.NET)

Globals:
  Function:
    Timeout: 20
    Runtime: dotnet8
    MemorySize: 1024
    Environment:
      Variables:
        POWERTOOLS_SERVICE_NAME: powertools-dotnet-sample-batch-sqs
        POWERTOOLS_LOG_LEVEL: Debug
        POWERTOOLS_LOGGER_CASE: PascalCase
        POWERTOOLS_BATCH_ERROR_HANDLING_POLICY: DeriveFromEvent
        POWERTOOLS_BATCH_MAX_DEGREE_OF_PARALLELISM: 1
        POWERTOOLS_BATCH_PARALLEL_ENABLED: false
        POWERTOOLS_BATCH_THROW_ON_FULL_BATCH_FAILURE: true

Resources:
  # --------------
  # KMS key for encrypted messages / records
  CustomerKey:
    Type: AWS::KMS::Key
    Properties:
      Description: KMS key for encrypted queues
      Enabled: true
      KeyPolicy:
        Version: "2012-10-17"
        Statement:
          - Sid: Enable IAM User Permissions
            Effect: Allow
            Principal:
              AWS: !Sub "arn:aws:iam::${AWS::AccountId}:root"
            Action: "kms:*"
            Resource: "*"
          - Sid: Allow AWS Lambda to use the key
            Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action:
              - kms:Decrypt
              - kms:GenerateDataKey
            Resource: "*"

  CustomerKeyAlias:
    Type: AWS::KMS::Alias
    Properties:
      AliasName: !Sub alias/${AWS::StackName}-kms-key
      TargetKeyId: !Ref CustomerKey

  # --------------
  # Batch Processing for SQS Queue
  SqsDeadLetterQueue:
    Type: AWS::SQS::Queue
    Properties:
      KmsMasterKeyId: !Ref CustomerKey

  SqsQueue:
    Type: AWS::SQS::Queue
    Properties:
      RedrivePolicy:
        deadLetterTargetArn: !GetAtt SqsDeadLetterQueue.Arn
        maxReceiveCount: 2
      KmsMasterKeyId: !Ref CustomerKey

  SqsBatchProcessorFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./src/HelloWorld/
      Handler: HelloWorld::HelloWorld.Function::SqsHandlerUsingAttribute
      Policies:
        - Statement:
            - Sid: DlqPermissions
              Effect: Allow
              Action:
                - sqs:SendMessage
                - sqs:SendMessageBatch
              Resource: !GetAtt SqsDeadLetterQueue.Arn
            - Sid: KmsKeyPermissions
              Effect: Allow
              Action:
                - kms:Decrypt
                - kms:GenerateDataKey
              Resource: !GetAtt CustomerKey.Arn
      Events:
        SqsBatch:
          Type: SQS
          Properties:
            BatchSize: 5
            Enabled: true
            FunctionResponseTypes:
              - ReportBatchItemFailures
            Queue: !GetAtt SqsQueue.Arn

  SqsBatchProcessorFunctionLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Sub "/aws/lambda/${SqsBatchProcessorFunction}"
      RetentionInDays: 7

Outputs:
  SqsQueueUrl:
    Description: "SQS Queue URL"
    Value: !Ref SqsQueue
```

```
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: Example project demoing Kinesis Data Streams processing using the Batch Processing Utility in Powertools for AWS Lambda (.NET)

Globals:
  Function:
    Timeout: 20
    Runtime: dotnet8
    MemorySize: 1024
    Environment:
      Variables:
        POWERTOOLS_SERVICE_NAME: powertools-dotnet-sample-batch-kinesis
        POWERTOOLS_LOG_LEVEL: Debug
        POWERTOOLS_LOGGER_CASE: PascalCase
        POWERTOOLS_BATCH_ERROR_HANDLING_POLICY: DeriveFromEvent
        POWERTOOLS_BATCH_MAX_DEGREE_OF_PARALLELISM: 1
        POWERTOOLS_BATCH_PARALLEL_ENABLED: false
        POWERTOOLS_BATCH_THROW_ON_FULL_BATCH_FAILURE: true

Resources:
  # --------------
  # KMS key for encrypted messages / records
  CustomerKey:
    Type: AWS::KMS::Key
    Properties:
      Description: KMS key for encrypted queues
      Enabled: true
      KeyPolicy:
        Version: "2012-10-17"
        Statement:
          - Sid: Enable IAM User Permissions
            Effect: Allow
            Principal:
              AWS: !Sub "arn:aws:iam::${AWS::AccountId}:root"
            Action: "kms:*"
            Resource: "*"
          - Sid: Allow AWS Lambda to use the key
            Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action:
              - kms:Decrypt
              - kms:GenerateDataKey
            Resource: "*"

  CustomerKeyAlias:
    Type: AWS::KMS::Alias
    Properties:
      AliasName: !Sub alias/${AWS::StackName}-kms-key
      TargetKeyId: !Ref CustomerKey

  # --------------
  # Batch Processing for Kinesis Data Stream
  KinesisStreamDeadLetterQueue:
    Type: AWS::SQS::Queue
    Properties:
      KmsMasterKeyId: !Ref CustomerKey

  KinesisStream:
    Type: AWS::Kinesis::Stream
    Properties:
      ShardCount: 1
      StreamEncryption:
        EncryptionType: KMS
        KeyId: !Ref CustomerKey

  KinesisStreamConsumer:
    Type: AWS::Kinesis::StreamConsumer
    Properties:
      ConsumerName: powertools-dotnet-sample-batch-kds-consumer
      StreamARN: !GetAtt KinesisStream.Arn

  KinesisBatchProcessorFunction:
    Type: AWS::Serverless::Function
    Properties:
      Policies:
        - Statement:
            - Sid: KinesisStreamConsumerPermissions
              Effect: Allow
              Action:
                - kinesis:DescribeStreamConsumer
              Resource:
                - !GetAtt KinesisStreamConsumer.ConsumerARN
            - Sid: DlqPermissions
              Effect: Allow
              Action:
                - sqs:SendMessage
                - sqs:SendMessageBatch
              Resource: !GetAtt KinesisStreamDeadLetterQueue.Arn
            - Sid: KmsKeyPermissions
              Effect: Allow
              Action:
                - kms:Decrypt
                - kms:GenerateDataKey
              Resource: !GetAtt CustomerKey.Arn
      CodeUri: ./src/HelloWorld/
      Handler: HelloWorld::HelloWorld.Function::KinesisEventHandlerUsingAttribute
      Events:
        Kinesis:
          Type: Kinesis
          Properties:
            BatchSize: 5
            BisectBatchOnFunctionError: true
            DestinationConfig:
              OnFailure:
                Destination: !GetAtt KinesisStreamDeadLetterQueue.Arn
            Enabled: true
            FunctionResponseTypes:
              - ReportBatchItemFailures
            MaximumRetryAttempts: 2
            ParallelizationFactor: 1
            StartingPosition: LATEST
            Stream: !GetAtt KinesisStreamConsumer.ConsumerARN

  KinesisBatchProcessorFunctionLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Sub "/aws/lambda/${KinesisBatchProcessorFunction}"
      RetentionInDays: 7

Outputs:
  KinesisStreamArn:
    Description: "Kinesis Stream ARN"
    Value: !GetAtt KinesisStream.Arn
```

```
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: Example project demoing DynamoDB Streams processing using the Batch Processing Utility in Powertools for AWS Lambda (.NET)

Globals:
  Function:
    Timeout: 20
    Runtime: dotnet8
    MemorySize: 1024
    Environment:
      Variables:
        POWERTOOLS_SERVICE_NAME: powertools-dotnet-sample-batch-ddb
        POWERTOOLS_LOG_LEVEL: Debug
        POWERTOOLS_LOGGER_CASE: PascalCase
        POWERTOOLS_BATCH_ERROR_HANDLING_POLICY: DeriveFromEvent
        POWERTOOLS_BATCH_MAX_DEGREE_OF_PARALLELISM: 1
        POWERTOOLS_BATCH_PARALLEL_ENABLED: false
        POWERTOOLS_BATCH_THROW_ON_FULL_BATCH_FAILURE: true

Resources:
  # --------------
  # KMS key for encrypted messages / records
  CustomerKey:
    Type: AWS::KMS::Key
    Properties:
      Description: KMS key for encrypted queues
      Enabled: true
      KeyPolicy:
        Version: "2012-10-17"
        Statement:
          - Sid: Enable IAM User Permissions
            Effect: Allow
            Principal:
              AWS: !Sub "arn:aws:iam::${AWS::AccountId}:root"
            Action: "kms:*"
            Resource: "*"
          - Sid: Allow AWS Lambda to use the key
            Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action:
              - kms:Decrypt
              - kms:GenerateDataKey
            Resource: "*"

  CustomerKeyAlias:
    Type: AWS::KMS::Alias
    Properties:
      AliasName: !Sub alias/${AWS::StackName}-kms-key
      TargetKeyId: !Ref CustomerKey

  # --------------
  # Batch Processing for DynamoDb (DDB) Stream
  DdbStreamDeadLetterQueue:
    Type: AWS::SQS::Queue
    Properties:
      KmsMasterKeyId: !Ref CustomerKey

  DdbTable:
    Type: AWS::DynamoDB::Table
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      StreamSpecification:
        StreamViewType: NEW_AND_OLD_IMAGES

  DdbStreamBatchProcessorFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./src/HelloWorld/
      Handler: HelloWorld::HelloWorld.Function::DynamoDbStreamHandlerUsingAttribute
      Policies:
        - AWSLambdaDynamoDBExecutionRole
        - Statement:
            - Sid: DlqPermissions
              Effect: Allow
              Action:
                - sqs:SendMessage
                - sqs:SendMessageBatch
              Resource: !GetAtt DdbStreamDeadLetterQueue.Arn
            - Sid: KmsKeyPermissions
              Effect: Allow
              Action:
                - kms:GenerateDataKey
              Resource: !GetAtt CustomerKey.Arn
      Events:
        Stream:
          Type: DynamoDB
          Properties:
            BatchSize: 5
            BisectBatchOnFunctionError: true
            DestinationConfig:
              OnFailure:
Destination: !GetAtt DdbStreamDeadLetterQueue.Arn Enabled: true FunctionResponseTypes: - ReportBatchItemFailures MaximumRetryAttempts: 2 ParallelizationFactor: 1 StartingPosition: LATEST Stream: !GetAtt DdbTable.StreamArn DdbStreamBatchProcessorFunctionLogGroup: Type: AWS::Logs::LogGroup Properties: LogGroupName: !Sub "/aws/lambda/${DdbStreamBatchProcessorFunction}" RetentionInDays: 7 Outputs: DdbTableName: Description: "DynamoDB Table Name" Value: !Ref DdbTable ``` ### Processing messages from SQS #### Using Handler decorator Processing batches from SQS using the Lambda handler decorator works in four steps: 1. Decorate your handler with the **`BatchProcessor`** attribute 1. Create a class that implements the **`ISqsRecordHandler`** interface and the HandleAsync method. 1. Pass the type of that class to the **`RecordHandler`** property of the **`BatchProcessor`** attribute 1. Return **`BatchItemFailuresResponse`** from the Lambda handler using **`SqsBatchProcessor.Result.BatchItemFailuresResponse`** ``` public class CustomSqsRecordHandler : ISqsRecordHandler // (1)! { public async Task<RecordHandlerResult> HandleAsync(SQSEvent.SQSMessage record, CancellationToken cancellationToken) { /* * Your business logic. * If an exception is thrown, the item will be marked as a partial batch item failure. */ var product = JsonSerializer.Deserialize<Product>(record.Body); if (product.Id == 4) // (2)! { throw new ArgumentException("Error on id 4"); } return await Task.FromResult(RecordHandlerResult.None); // (3)! } } [BatchProcessor(RecordHandler = typeof(CustomSqsRecordHandler))] public BatchItemFailuresResponse HandlerUsingAttribute(SQSEvent _) { return SqsBatchProcessor.Result.BatchItemFailuresResponse; // (4)! } ``` 1. **Step 1**. Create a class that implements the ISqsRecordHandler interface and the HandleAsync method. 1. **Step 2**. You can have custom logic inside the record handler and throw exceptions that will cause this message to fail. 1. **Step 3**. RecordHandlerResult can return empty (None) or some data. 1. 
**Step 4**. The Lambda function returns the partial batch response ``` { "Records": [ { "messageId": "059f36b4-87a3-44ab-83d2-661975830a7d", "receiptHandle": "AQEBwJnKyrHigUMZj6rYigCgxlaS3SLy0a", "body": "{\"Id\":1,\"Name\":\"product-4\",\"Price\":14}", "attributes": { "ApproximateReceiveCount": "1", "SentTimestamp": "1545082649183", "SenderId": "AIDAIENQZJOLO23YVJ4VO", "ApproximateFirstReceiveTimestamp": "1545082649185" }, "messageAttributes": {}, "md5OfBody": "e4e68fb7bd0e697a0ae8f1bb342846b3", "eventSource": "aws:sqs", "eventSourceARN": "arn:aws:sqs:us-east-2:123456789012:my-queue", "awsRegion": "us-east-2" }, { "messageId": "244fc6b4-87a3-44ab-83d2-361172410c3a", "receiptHandle": "AQEBwJnKyrHigUMZj6rYigCgxlaS3SLy0a", "body": "fail", "attributes": { "ApproximateReceiveCount": "1", "SentTimestamp": "1545082649183", "SenderId": "AIDAIENQZJOLO23YVJ4VO", "ApproximateFirstReceiveTimestamp": "1545082649185" }, "messageAttributes": {}, "md5OfBody": "e4e68fb7bd0e697a0ae8f1bb342846b3", "eventSource": "aws:sqs", "eventSourceARN": "arn:aws:sqs:us-east-2:123456789012:my-queue", "awsRegion": "us-east-2" }, { "messageId": "213f4fd3-84a4-4667-a1b9-c277964197d9", "receiptHandle": "AQEBwJnKyrHigUMZj6rYigCgxlaS3SLy0a", "body": "{\"Id\":4,\"Name\":\"product-4\",\"Price\":14}", "attributes": { "ApproximateReceiveCount": "1", "SentTimestamp": "1545082649183", "SenderId": "AIDAIENQZJOLO23YVJ4VO", "ApproximateFirstReceiveTimestamp": "1545082649185" }, "messageAttributes": {}, "md5OfBody": "e4e68fb7bd0e697a0ae8f1bb342846b3", "eventSource": "aws:sqs", "eventSourceARN": "arn:aws:sqs:us-east-2:123456789012:my-queue", "awsRegion": "us-east-2" } ] } ``` The second and third records failed to be processed, therefore the processor added their message IDs to the response. 
``` { "batchItemFailures": [ { "itemIdentifier": "244fc6b4-87a3-44ab-83d2-361172410c3a" }, { "itemIdentifier": "213f4fd3-84a4-4667-a1b9-c277964197d9" } ] } ``` #### FIFO queues When using [SQS FIFO queues](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html), we will stop processing messages after the first failure, and return all failed and unprocessed messages in `batchItemFailures`. This helps preserve the ordering of messages in your queue. Powertools automatically detects a FIFO queue. ### Processing messages from Kinesis Processing batches from Kinesis using the Lambda handler decorator works in four steps: 1. Decorate your handler with the **`BatchProcessor`** attribute 1. Create a class that implements the **`IKinesisEventRecordHandler`** interface and the HandleAsync method. 1. Pass the type of that class to the **`RecordHandler`** property of the **`BatchProcessor`** attribute 1. Return **`BatchItemFailuresResponse`** from the Lambda handler using **`KinesisEventBatchProcessor.Result.BatchItemFailuresResponse`** ``` internal class CustomKinesisEventRecordHandler : IKinesisEventRecordHandler // (1)! { public async Task<RecordHandlerResult> HandleAsync(KinesisEvent.KinesisEventRecord record, CancellationToken cancellationToken) { var product = JsonSerializer.Deserialize<Product>(record.Kinesis.Data); if (product.Id == 4) // (2)! { throw new ArgumentException("Error on id 4"); } return await Task.FromResult(RecordHandlerResult.None); // (3)! } } [BatchProcessor(RecordHandler = typeof(CustomKinesisEventRecordHandler))] public BatchItemFailuresResponse HandlerUsingAttribute(KinesisEvent _) { return KinesisEventBatchProcessor.Result.BatchItemFailuresResponse; // (4)! } ``` 1. **Step 1**. Create a class that implements the IKinesisEventRecordHandler interface and the HandleAsync method. 1. **Step 2**. You can have custom logic inside the record handler and throw exceptions that will cause this message to fail. 1. **Step 3**. 
RecordHandlerResult can return empty (None) or some data. 1. **Step 4**. The Lambda function returns the partial batch response ``` { "Records": [ { "messageId": "059f36b4-87a3-44ab-83d2-661975830a7d", "receiptHandle": "AQEBwJnKyrHigUMZj6rYigCgxlaS3SLy0a", "body": "{\"Id\":1,\"Name\":\"product-4\",\"Price\":14}", "attributes": { "ApproximateReceiveCount": "1", "SentTimestamp": "1545082649183", "SenderId": "AIDAIENQZJOLO23YVJ4VO", "ApproximateFirstReceiveTimestamp": "1545082649185" }, "messageAttributes": {}, "md5OfBody": "e4e68fb7bd0e697a0ae8f1bb342846b3", "eventSource": "aws:sqs", "eventSourceARN": "arn:aws:sqs:us-east-2:123456789012:my-queue", "awsRegion": "us-east-2" }, { "messageId": "244fc6b4-87a3-44ab-83d2-361172410c3a", "receiptHandle": "AQEBwJnKyrHigUMZj6rYigCgxlaS3SLy0a", "body": "fail", "attributes": { "ApproximateReceiveCount": "1", "SentTimestamp": "1545082649183", "SenderId": "AIDAIENQZJOLO23YVJ4VO", "ApproximateFirstReceiveTimestamp": "1545082649185" }, "messageAttributes": {}, "md5OfBody": "e4e68fb7bd0e697a0ae8f1bb342846b3", "eventSource": "aws:sqs", "eventSourceARN": "arn:aws:sqs:us-east-2:123456789012:my-queue", "awsRegion": "us-east-2" }, { "messageId": "213f4fd3-84a4-4667-a1b9-c277964197d9", "receiptHandle": "AQEBwJnKyrHigUMZj6rYigCgxlaS3SLy0a", "body": "{\"Id\":4,\"Name\":\"product-4\",\"Price\":14}", "attributes": { "ApproximateReceiveCount": "1", "SentTimestamp": "1545082649183", "SenderId": "AIDAIENQZJOLO23YVJ4VO", "ApproximateFirstReceiveTimestamp": "1545082649185" }, "messageAttributes": {}, "md5OfBody": "e4e68fb7bd0e697a0ae8f1bb342846b3", "eventSource": "aws:sqs", "eventSourceARN": "arn:aws:sqs:us-east-2:123456789012:my-queue", "awsRegion": "us-east-2" } ] } ``` The second and third records failed to be processed, therefore the processor added their message IDs to the response. 
``` { "batchItemFailures": [ { "itemIdentifier": "244fc6b4-87a3-44ab-83d2-361172410c3a" }, { "itemIdentifier": "213f4fd3-84a4-4667-a1b9-c277964197d9" } ] } ``` ### Processing messages from DynamoDB Processing batches from DynamoDB Streams using the Lambda handler decorator works in four steps: 1. Decorate your handler with the **`BatchProcessor`** attribute 1. Create a class that implements **`IDynamoDbStreamRecordHandler`** and the HandleAsync method. 1. Pass the type of that class to the **`RecordHandler`** property of the **`BatchProcessor`** attribute 1. Return **`BatchItemFailuresResponse`** from the Lambda handler using **`DynamoDbStreamBatchProcessor.Result.BatchItemFailuresResponse`** ``` internal class CustomDynamoDbStreamRecordHandler : IDynamoDbStreamRecordHandler // (1)! { public async Task<RecordHandlerResult> HandleAsync(DynamoDBEvent.DynamodbStreamRecord record, CancellationToken cancellationToken) { var product = JsonSerializer.Deserialize<Product>(record.Dynamodb.NewImage["Product"].S); if (product.Id == 4) // (2)! { throw new ArgumentException("Error on id 4"); } return await Task.FromResult(RecordHandlerResult.None); // (3)! } } [BatchProcessor(RecordHandler = typeof(CustomDynamoDbStreamRecordHandler))] public BatchItemFailuresResponse HandlerUsingAttribute(DynamoDBEvent _) { return DynamoDbStreamBatchProcessor.Result.BatchItemFailuresResponse; // (4)! } ``` 1. **Step 1**. Create a class that implements the IDynamoDbStreamRecordHandler and the HandleAsync method. 1. **Step 2**. You can have custom logic inside the record handler and throw exceptions that will cause this message to fail. 1. **Step 3**. RecordHandlerResult can return empty (None) or some data. 1. **Step 4**. 
The Lambda function returns the partial batch response ``` { "Records": [ { "eventID": "1", "eventVersion": "1.0", "dynamodb": { "Keys": { "Id": { "N": "101" } }, "NewImage": { "Product": { "S": "{\"Id\":1,\"Name\":\"product-name\",\"Price\":14}" } }, "StreamViewType": "NEW_AND_OLD_IMAGES", "SequenceNumber": "3275880929", "SizeBytes": 26 }, "awsRegion": "us-west-2", "eventName": "INSERT", "eventSourceARN": "eventsource_arn", "eventSource": "aws:dynamodb" }, { "eventID": "1", "eventVersion": "1.0", "dynamodb": { "Keys": { "Id": { "N": "101" } }, "NewImage": { "Product": { "S": "fail" } }, "StreamViewType": "NEW_AND_OLD_IMAGES", "SequenceNumber": "8640712661", "SizeBytes": 26 }, "awsRegion": "us-west-2", "eventName": "INSERT", "eventSourceARN": "eventsource_arn", "eventSource": "aws:dynamodb" } ] } ``` The second record failed to be processed, therefore the processor added its sequence number to the response. ``` { "batchItemFailures": [ { "itemIdentifier": "8640712661" } ] } ``` ### Error handling By default, we catch any exception raised by your custom record handler's `HandleAsync` method (`ISqsRecordHandler`, `IKinesisEventRecordHandler`, `IDynamoDbStreamRecordHandler`). This allows us to **(1)** continue processing the batch, **(2)** collect each batch item that failed processing, and **(3)** return the appropriate response correctly without failing your Lambda function execution. ``` public class CustomSqsRecordHandler : ISqsRecordHandler // (1)! { public async Task<RecordHandlerResult> HandleAsync(SQSEvent.SQSMessage record, CancellationToken cancellationToken) { /* * Your business logic. * If an exception is thrown, the item will be marked as a partial batch item failure. */ var product = JsonSerializer.Deserialize<Product>(record.Body); if (product.Id == 4) // (2)! { throw new ArgumentException("Error on id 4"); } return await Task.FromResult(RecordHandlerResult.None); // (3)! 
} } ``` ``` { "Records": [ { "messageId": "059f36b4-87a3-44ab-83d2-661975830a7d", "receiptHandle": "AQEBwJnKyrHigUMZj6rYigCgxlaS3SLy0a", "body": "{\"Id\":1,\"Name\":\"product-4\",\"Price\":14}", "attributes": { "ApproximateReceiveCount": "1", "SentTimestamp": "1545082649183", "SenderId": "AIDAIENQZJOLO23YVJ4VO", "ApproximateFirstReceiveTimestamp": "1545082649185" }, "messageAttributes": {}, "md5OfBody": "e4e68fb7bd0e697a0ae8f1bb342846b3", "eventSource": "aws:sqs", "eventSourceARN": "arn:aws:sqs:us-east-2:123456789012:my-queue", "awsRegion": "us-east-2" }, { "messageId": "244fc6b4-87a3-44ab-83d2-361172410c3a", "receiptHandle": "AQEBwJnKyrHigUMZj6rYigCgxlaS3SLy0a", "body": "fail", "attributes": { "ApproximateReceiveCount": "1", "SentTimestamp": "1545082649183", "SenderId": "AIDAIENQZJOLO23YVJ4VO", "ApproximateFirstReceiveTimestamp": "1545082649185" }, "messageAttributes": {}, "md5OfBody": "e4e68fb7bd0e697a0ae8f1bb342846b3", "eventSource": "aws:sqs", "eventSourceARN": "arn:aws:sqs:us-east-2:123456789012:my-queue", "awsRegion": "us-east-2" }, { "messageId": "213f4fd3-84a4-4667-a1b9-c277964197d9", "receiptHandle": "AQEBwJnKyrHigUMZj6rYigCgxlaS3SLy0a", "body": "{\"Id\":4,\"Name\":\"product-4\",\"Price\":14}", "attributes": { "ApproximateReceiveCount": "1", "SentTimestamp": "1545082649183", "SenderId": "AIDAIENQZJOLO23YVJ4VO", "ApproximateFirstReceiveTimestamp": "1545082649185" }, "messageAttributes": {}, "md5OfBody": "e4e68fb7bd0e697a0ae8f1bb342846b3", "eventSource": "aws:sqs", "eventSourceARN": "arn:aws:sqs:us-east-2:123456789012:my-queue", "awsRegion": "us-east-2" } ] } ``` The second and third records failed to be processed, therefore the processor added their message IDs to the response. ``` { "batchItemFailures": [ { "itemIdentifier": "244fc6b4-87a3-44ab-83d2-361172410c3a" }, { "itemIdentifier": "213f4fd3-84a4-4667-a1b9-c277964197d9" } ] } ``` #### Error Handling Policy You can specify the error handling policy applied during batch processing. 
`ErrorHandlingPolicy` controls how item failures are handled during batch processing. With a value of `DeriveFromEvent` (default), the specific BatchProcessor determines the policy based on the incoming event. For example, the `SqsBatchProcessor` looks at the EventSourceArn to determine if the ErrorHandlingPolicy should be `StopOnFirstBatchItemFailure` (for FIFO queues) or `ContinueOnBatchItemFailure` (for standard queues). For `StopOnFirstBatchItemFailure` the batch processor stops processing and marks any remaining records as batch item failures. For `ContinueOnBatchItemFailure` the batch processor continues processing batch items regardless of item failures. | Policy | Description | | --- | --- | | **DeriveFromEvent** | Auto-derive the policy based on the event. | | **ContinueOnBatchItemFailure** | Continue processing regardless of whether other batch items fail during processing. | | **StopOnFirstBatchItemFailure** | Stop processing other batch items after the first batch item has failed processing. This is useful to preserve ordered processing of events. | Note When using **StopOnFirstBatchItemFailure** and parallel processing is enabled, all batch items already scheduled to be processed will be allowed to complete before the batch processing stops. Therefore, if order is important, it is recommended to use sequential (non-parallel) processing together with this value. To change the default error handling policy, you can set the **`POWERTOOLS_BATCH_ERROR_HANDLING_POLICY`** Environment Variable. 
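For example, the policy could be set function-wide through the SAM template, mirroring the samples shown earlier; the `StopOnFirstBatchItemFailure` value here is purely illustrative:

```yaml
Globals:
  Function:
    Environment:
      Variables:
        # Stop processing remaining batch items after the first failure
        POWERTOOLS_BATCH_ERROR_HANDLING_POLICY: StopOnFirstBatchItemFailure
```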
Another approach is to decorate the handler and use one of the policies in the **`ErrorHandlingPolicy`** Enum property of the **`BatchProcessor`** attribute ``` [BatchProcessor(RecordHandler = typeof(CustomDynamoDbStreamRecordHandler), ErrorHandlingPolicy = BatchProcessorErrorHandlingPolicy.StopOnFirstBatchItemFailure)] public BatchItemFailuresResponse HandlerUsingAttribute(DynamoDBEvent _) { return DynamoDbStreamBatchProcessor.Result.BatchItemFailuresResponse; } ``` ### Partial failure mechanics All records in the batch will be passed to the record handler for processing, even if exceptions are thrown. Here's the behaviour after completing the batch: - **All records successfully processed**. We will return an empty list of item failures `{'batchItemFailures': []}`. - **Partial success with some exceptions**. We will return a list of all item IDs/sequence numbers that failed processing. - **All records failed to be processed**. By default, we will throw a `BatchProcessingException` with a list of all exceptions raised during processing to reflect the failure in your operational metrics. However, in some scenarios, this might not be desired. See [Working with full batch failures](#working-with-full-batch-failures) for more information. The following sequence diagrams explain how each Batch processor behaves under different scenarios. #### SQS Standard > Read more about the [Batch Failure Reporting feature in AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html#services-sqs-batchfailurereporting). Sequence diagram to explain how [`BatchProcessor` works](#processing-messages-from-sqs) with SQS Standard queues. 
``` sequenceDiagram autonumber participant SQS queue participant Lambda service participant Lambda function Lambda service->>SQS queue: Poll Lambda service->>Lambda function: Invoke (batch event) Lambda function->>Lambda service: Report some failed messages activate SQS queue Lambda service->>SQS queue: Delete successful messages SQS queue-->>SQS queue: Failed messages return Note over SQS queue,Lambda service: Process repeat deactivate SQS queue ``` *SQS mechanism with Batch Item Failures* #### SQS FIFO > Read more about the [Batch Failure Reporting feature in AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html#services-sqs-batchfailurereporting). Sequence diagram to explain how [`SqsFifoPartialProcessor` works](#fifo-queues) with SQS FIFO queues. ``` sequenceDiagram autonumber participant SQS queue participant Lambda service participant Lambda function Lambda service->>SQS queue: Poll Lambda service->>Lambda function: Invoke (batch event) activate Lambda function Lambda function-->Lambda function: Process 2 out of 10 batch items Lambda function--xLambda function: Fail on 3rd batch item Lambda function->>Lambda service: Report 3rd batch item and unprocessed messages as failure deactivate Lambda function activate SQS queue Lambda service->>SQS queue: Delete successful messages (1-2) SQS queue-->>SQS queue: Failed messages return (3-10) deactivate SQS queue ``` *SQS FIFO mechanism with Batch Item Failures* #### Kinesis and DynamoDB Streams > Read more about the [Batch Failure Reporting feature](https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html#services-kinesis-batchfailurereporting). Sequence diagram to explain how `BatchProcessor` works with both [Kinesis Data Streams](#processing-messages-from-kinesis) and [DynamoDB Streams](#processing-messages-from-dynamodb). For brevity, we will use `Streams` to refer to either service. 
For theory on stream checkpoints, see this [blog post](https://aws.amazon.com/blogs/compute/optimizing-batch-processing-with-custom-checkpoints-in-aws-lambda/). ``` sequenceDiagram autonumber participant Streams participant Lambda service participant Lambda function Lambda service->>Streams: Poll latest records Lambda service->>Lambda function: Invoke (batch event) activate Lambda function Lambda function-->Lambda function: Process 2 out of 10 batch items Lambda function--xLambda function: Fail on 3rd batch item Lambda function-->Lambda function: Continue processing batch items (4-10) Lambda function->>Lambda service: Report batch item as failure (3) deactivate Lambda function activate Streams Lambda service->>Streams: Checkpoints to sequence number from 3rd batch item Lambda service->>Streams: Poll records starting from updated checkpoint deactivate Streams ``` *Kinesis and DynamoDB streams mechanism with single batch item failure* The behavior changes slightly when there are multiple item failures: the stream checkpoint is updated to the lowest sequence number reported. Note that the batch item sequence number could be different from the batch item number in the illustration. 
``` sequenceDiagram autonumber participant Streams participant Lambda service participant Lambda function Lambda service->>Streams: Poll latest records Lambda service->>Lambda function: Invoke (batch event) activate Lambda function Lambda function-->Lambda function: Process 2 out of 10 batch items Lambda function--xLambda function: Fail on 3-5 batch items Lambda function-->Lambda function: Continue processing batch items (6-10) Lambda function->>Lambda service: Report batch items as failure (3-5) deactivate Lambda function activate Streams Lambda service->>Streams: Checkpoints to lowest sequence number Lambda service->>Streams: Poll records starting from updated checkpoint deactivate Streams ``` *Kinesis and DynamoDB streams mechanism with multiple batch item failures* ### Advanced #### Using utility outside handler and IoC You can use Batch processing without the decorator by calling the **`ProcessAsync`** method on the `Instance` of the static BatchProcessor (`SqsBatchProcessor`, `DynamoDbStreamBatchProcessor`, `KinesisEventBatchProcessor`) ``` public async Task<BatchItemFailuresResponse> HandlerUsingUtility(DynamoDBEvent dynamoDbEvent) { var result = await DynamoDbStreamBatchProcessor.Instance.ProcessAsync(dynamoDbEvent, RecordHandler<DynamoDBEvent.DynamodbStreamRecord>.From(record => { var product = JsonSerializer.Deserialize<JsonElement>(record.Dynamodb.NewImage["Product"].S); if (product.GetProperty("Id").GetInt16() == 4) { throw new ArgumentException("Error on 4"); } })); return result.BatchItemFailuresResponse; } ``` To make the handler testable you can use Dependency Injection to resolve the BatchProcessor (`SqsBatchProcessor`, `DynamoDbStreamBatchProcessor`, `KinesisEventBatchProcessor`) instance and then call the **`ProcessAsync`** method. 
``` public async Task<BatchItemFailuresResponse> HandlerUsingUtilityFromIoc(DynamoDBEvent dynamoDbEvent) { var batchProcessor = Services.Provider.GetRequiredService<IDynamoDbStreamBatchProcessor>(); var recordHandler = Services.Provider.GetRequiredService<IDynamoDbStreamRecordHandler>(); var result = await batchProcessor.ProcessAsync(dynamoDbEvent, recordHandler); return result.BatchItemFailuresResponse; } ``` ``` public async Task<BatchItemFailuresResponse> HandlerUsingUtilityFromIoc(DynamoDBEvent dynamoDbEvent, IDynamoDbStreamBatchProcessor batchProcessor, IDynamoDbStreamRecordHandler recordHandler) { var result = await batchProcessor.ProcessAsync(dynamoDbEvent, recordHandler); return result.BatchItemFailuresResponse; } ``` ``` internal class Services { private static readonly Lazy<IServiceProvider> LazyInstance = new(Build); private static ServiceCollection _services; public static IServiceProvider Provider => LazyInstance.Value; public static IServiceProvider Init() { return LazyInstance.Value; } private static IServiceProvider Build() { _services = new ServiceCollection(); _services.AddScoped<IDynamoDbStreamBatchProcessor, DynamoDbStreamBatchProcessor>(); _services.AddScoped<IDynamoDbStreamRecordHandler, CustomDynamoDbStreamRecordHandler>(); return _services.BuildServiceProvider(); } } ``` #### Processing messages in parallel You can set the `POWERTOOLS_BATCH_PARALLEL_ENABLED` Environment Variable to `true` or set the property `BatchParallelProcessingEnabled` on the Lambda decorator to process messages concurrently. You can also set the `POWERTOOLS_BATCH_MAX_DEGREE_OF_PARALLELISM` Environment Variable to the degree of parallelism you wish. Note MaxDegreeOfParallelism is used to control the parallelism of the batch item processing. With a value of 1, the processing is done sequentially (default). Sequential processing is recommended when preserving order is important, e.g. with SQS FIFO queues. With a value > 1, the processing is done in parallel. Parallel processing can enable the batch to complete faster, e.g. when processing involves downstream service calls. With a value of -1, the parallelism is automatically configured to be the vCPU count of the Lambda function. 
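As a sketch, the two environment variables can be set together in the SAM template's Globals section, mirroring the samples shown earlier; the value `4` is illustrative:

```yaml
Globals:
  Function:
    Environment:
      Variables:
        POWERTOOLS_BATCH_PARALLEL_ENABLED: true
        # 1 = sequential (default); -1 = use the function's vCPU count
        POWERTOOLS_BATCH_MAX_DEGREE_OF_PARALLELISM: 4
```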
Internally, the Batch Processing Utility uses the `Parallel.ForEachAsync` method and the `ParallelOptions.MaxDegreeOfParallelism` property to enable this functionality. When is this useful? Your use case might be able to process multiple records at the same time without conflicting with one another. For example, imagine you need to process multiple loyalty points and save them incrementally in a database. While you wait for the database to confirm your records are saved, you could start processing another request concurrently. The reason this is not the default behaviour is that not all use cases can handle concurrency safely (e.g., loyalty points must be updated in order). ``` [BatchProcessor(RecordHandler = typeof(CustomDynamoDbStreamRecordHandler), BatchParallelProcessingEnabled = true)] public BatchItemFailuresResponse HandlerUsingAttribute(DynamoDBEvent _) { return DynamoDbStreamBatchProcessor.Result.BatchItemFailuresResponse; } ``` #### Working with full batch failures By default, the `BatchProcessor` will throw a `BatchProcessingException` if all records in the batch fail to process. We do this to reflect the failure in your operational metrics. When working with functions that handle batches with a small number of records, or when you use errors as a flow control mechanism, this behavior might not be desirable as your function might generate an unnaturally high number of errors. When this happens, the [Lambda service will scale down the concurrency of your function](https://docs.aws.amazon.com/lambda/latest/dg/services-sqs-errorhandling.html#services-sqs-backoff-strategy), potentially impacting performance. For these scenarios, you can set `POWERTOOLS_BATCH_THROW_ON_FULL_BATCH_FAILURE = false`, or the equivalent on either the `BatchProcessor` decorator or on the `ProcessingOptions` object. See examples below. 
``` [BatchProcessor( RecordHandler = typeof(CustomSqsRecordHandler), ThrowOnFullBatchFailure = false)] public BatchItemFailuresResponse HandlerUsingAttribute(SQSEvent _) { return SqsBatchProcessor.Result.BatchItemFailuresResponse; } ``` ``` public async Task<BatchItemFailuresResponse> HandlerUsingUtility(SQSEvent sqsEvent) { var result = await SqsBatchProcessor.Instance.ProcessAsync(sqsEvent, RecordHandler<SQSEvent.SQSMessage>.From(x => { // Inline handling of SQS message... }), new ProcessingOptions { ThrowOnFullBatchFailure = false }); return result.BatchItemFailuresResponse; } ``` #### Extending BatchProcessor You might want to bring custom logic to the existing `BatchProcessor` to slightly override how we handle successes and failures. For these scenarios, you can create a class that inherits from `BatchProcessor` (`SqsBatchProcessor`, `DynamoDbStreamBatchProcessor`, `KinesisEventBatchProcessor`) and quickly override the `ProcessAsync` and `HandleRecordFailureAsync` methods: - **`ProcessAsync()`** – Keeps track of successful batch records - **`HandleRecordFailureAsync()`** – Keeps track of failed batch records Example: Let's suppose you'd like to add a metric named `BatchRecordFailures` for each batch record that failed processing, and also override the default error handling policy to stop on the first item failure. 
``` public class CustomDynamoDbStreamBatchProcessor : DynamoDbStreamBatchProcessor { public override async Task<ProcessingResult<DynamoDBEvent.DynamodbStreamRecord>> ProcessAsync(DynamoDBEvent @event, IRecordHandler<DynamoDBEvent.DynamodbStreamRecord> recordHandler, ProcessingOptions processingOptions) { ProcessingResult = new ProcessingResult<DynamoDBEvent.DynamodbStreamRecord>(); // Prepare batch records (order is preserved) var batchRecords = GetRecordsFromEvent(@event).Select(x => new KeyValuePair<string, DynamoDBEvent.DynamodbStreamRecord>(GetRecordId(x), x)) .ToArray(); // We assume all records fail by default to avoid loss of data var failureBatchRecords = batchRecords.Select(x => new KeyValuePair<string, RecordFailure<DynamoDBEvent.DynamodbStreamRecord>>(x.Key, new RecordFailure<DynamoDBEvent.DynamodbStreamRecord> { Exception = new UnprocessedRecordException($"Record: '{x.Key}' has not been processed."), Record = x.Value })); // Override to fail on first failure var errorHandlingPolicy = BatchProcessorErrorHandlingPolicy.StopOnFirstBatchItemFailure; var successRecords = new Dictionary<string, RecordSuccess<DynamoDBEvent.DynamodbStreamRecord>>(); var failureRecords = new Dictionary<string, RecordFailure<DynamoDBEvent.DynamodbStreamRecord>>(failureBatchRecords); try { foreach (var pair in batchRecords) { var (recordId, record) = pair; try { var result = await HandleRecordAsync(record, recordHandler, CancellationToken.None); failureRecords.Remove(recordId, out _); successRecords.TryAdd(recordId, new RecordSuccess<DynamoDBEvent.DynamodbStreamRecord> { Record = record, RecordId = recordId, HandlerResult = result }); } catch (Exception ex) { // Capture exception failureRecords[recordId] = new RecordFailure<DynamoDBEvent.DynamodbStreamRecord> { Exception = new RecordProcessingException( $"Failed processing record: '{recordId}'. 
See inner exception for details.", ex), Record = record, RecordId = recordId }; Metrics.AddMetric("BatchRecordFailures", 1, MetricUnit.Count); try { // Invoke hook await HandleRecordFailureAsync(record, ex); } catch { // NOOP } // Check if we should stop record processing on first error // ReSharper disable once ConditionIsAlwaysTrueOrFalse if (errorHandlingPolicy == BatchProcessorErrorHandlingPolicy.StopOnFirstBatchItemFailure) { // This causes the loop's (inner) cancellation token to be cancelled for all operations already scheduled internally throw new CircuitBreakerException( "Error handling policy is configured to stop processing on first batch item failure. See inner exception for details.", ex); } } } } catch (Exception ex) when (ex is CircuitBreakerException or OperationCanceledException) { // NOOP } ProcessingResult.BatchRecords.AddRange(batchRecords.Select(x => x.Value)); ProcessingResult.BatchItemFailuresResponse.BatchItemFailures.AddRange(failureRecords.Select(x => new BatchItemFailuresResponse.BatchItemFailure { ItemIdentifier = x.Key })); ProcessingResult.FailureRecords.AddRange(failureRecords.Values); ProcessingResult.SuccessRecords.AddRange(successRecords.Values); return ProcessingResult; } // ReSharper disable once RedundantOverriddenMember protected override async Task HandleRecordFailureAsync(DynamoDBEvent.DynamodbStreamRecord record, Exception exception) { await base.HandleRecordFailureAsync(record, exception); } } ``` ## Testing your code As there are no external calls, you can unit test your code with `BatchProcessor` quite easily. 
```
[Fact]
public Task Sqs_Handler_Using_Attribute()
{
    var request = new SQSEvent
    {
        Records = TestHelper.SqsMessages
    };

    var function = new HandlerFunction();

    var response = function.HandlerUsingAttribute(request);

    Assert.Equal(2, response.BatchItemFailures.Count);
    Assert.Equal("2", response.BatchItemFailures[0].ItemIdentifier);
    Assert.Equal("4", response.BatchItemFailures[1].ItemIdentifier);

    return Task.CompletedTask;
}
```

```
[BatchProcessor(RecordHandler = typeof(CustomSqsRecordHandler))]
public BatchItemFailuresResponse HandlerUsingAttribute(SQSEvent _)
{
    return SqsBatchProcessor.Result.BatchItemFailuresResponse;
}
```

```
public class CustomSqsRecordHandler : ISqsRecordHandler
{
    public async Task<RecordHandlerResult> HandleAsync(SQSEvent.SQSMessage record, CancellationToken cancellationToken)
    {
        var product = JsonSerializer.Deserialize<JsonElement>(record.Body);

        if (product.GetProperty("Id").GetInt16() == 4)
        {
            throw new ArgumentException("Error on 4");
        }

        return await Task.FromResult(RecordHandlerResult.None);
    }
}
```

```
internal static List<SQSEvent.SQSMessage> SqsMessages => new()
{
    new SQSEvent.SQSMessage
    {
        MessageId = "1",
        Body = "{\"Id\":1,\"Name\":\"product-4\",\"Price\":14}",
        EventSourceArn = "arn:aws:sqs:us-east-2:123456789012:my-queue"
    },
    new SQSEvent.SQSMessage
    {
        MessageId = "2",
        Body = "fail",
        EventSourceArn = "arn:aws:sqs:us-east-2:123456789012:my-queue"
    },
    new SQSEvent.SQSMessage
    {
        MessageId = "3",
        Body = "{\"Id\":3,\"Name\":\"product-4\",\"Price\":14}",
        EventSourceArn = "arn:aws:sqs:us-east-2:123456789012:my-queue"
    },
    new SQSEvent.SQSMessage
    {
        MessageId = "4",
        Body = "{\"Id\":4,\"Name\":\"product-4\",\"Price\":14}",
        EventSourceArn = "arn:aws:sqs:us-east-2:123456789012:my-queue"
    },
    new SQSEvent.SQSMessage
    {
        MessageId = "5",
        Body = "{\"Id\":5,\"Name\":\"product-4\",\"Price\":14}",
        EventSourceArn = "arn:aws:sqs:us-east-2:123456789012:my-queue"
    },
};
```

# Idempotency

The idempotency utility provides a simple solution to convert your Lambda functions into idempotent operations which are safe to retry.
## Key features

- Prevent Lambda handler function from executing more than once on the same event payload during a time window
- Ensure Lambda handler returns the same result when called with the same payload
- Select a subset of the event as the idempotency key using [JMESPath](https://jmespath.org/) expressions
- Set a time window in which records with the same payload should be considered duplicates
- Expire in-progress executions if the Lambda function times out halfway through
- Ahead-of-Time ([AOT](https://docs.aws.amazon.com/lambda/latest/dg/dotnet-native-aot.html)) compilation to native code, supported from version 1.3.0

## Terminology

The property of idempotency means that an operation does not cause additional side effects if it is called more than once with the same input parameters. **Idempotent operations will return the same result when they are called multiple times with the same parameters**. This makes idempotent operations safe to retry. [Read more](https://aws.amazon.com/builders-library/making-retries-safe-with-idempotent-APIs/) about idempotency.

**Idempotency key** is a hash representation of either the entire event or a specific configured subset of the event; invocation results are **JSON serialized** and stored in your persistence storage layer.

**Idempotency record** is the data representation of an idempotent request saved in your preferred storage layer. We use it to coordinate whether a request is idempotent, whether it's still valid or expired based on timestamps, and so on.
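To make the idempotency key concept concrete, here is a minimal sketch of deriving a key by hashing a JSON-serialized payload subset with the default MD5 algorithm. This is illustrative only, not the library's internal implementation; the `Function.FunctionHandler` prefix and hex formatting are assumptions for the example.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;
using System.Text.Json;

// Illustrative only: hash the (subset of the) event that identifies the request.
static string IdempotencyKey(string prefix, string payloadSubset)
{
    // Parse and re-serialize so logically identical JSON hashes identically
    using var doc = JsonDocument.Parse(payloadSubset);
    var canonical = JsonSerializer.Serialize(doc.RootElement);
    var hash = MD5.HashData(Encoding.UTF8.GetBytes(canonical));
    return $"{prefix}#{Convert.ToHexString(hash).ToLowerInvariant()}";
}

var key = IdempotencyKey("Function.FunctionHandler",
    "{\"user_id\":\"xyz\",\"product_id\":\"123456789\"}");
Console.WriteLine(key);
```

Because the payload is re-serialized before hashing, two requests that differ only in whitespace produce the same key — the same idea the `powertools_json()` tip below relies on.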
```
classDiagram
    direction LR
    class DataRecord {
        string IdempotencyKey
        DataRecordStatus Status
        long ExpiryTimestamp
        long InProgressExpiryTimestamp
        string ResponseData
        string PayloadHash
    }
    class Status {
        <<enumeration>>
        INPROGRESS
        COMPLETED
        EXPIRED
    }
    DataRecord -- Status
```

*Idempotency record representation*

## Getting started

### Installation

You should install with NuGet:

```
Install-Package AWS.Lambda.Powertools.Idempotency
```

Or via the .NET Core command line interface:

```
dotnet add package AWS.Lambda.Powertools.Idempotency
```

### IAM Permissions

Your Lambda function IAM Role must have `dynamodb:GetItem`, `dynamodb:PutItem`, `dynamodb:UpdateItem` and `dynamodb:DeleteItem` IAM permissions before using this feature.

Note: If you're using our example [AWS Serverless Application Model (SAM)](#required-resources), [AWS Cloud Development Kit (CDK)](#required-resources), or [Terraform](#required-resources) templates, the required permissions are already added.

### Required resources

Before getting started, you need to create a persistent storage layer where the idempotency utility can store its state - your Lambda functions will need read and write access to it. As of now, Amazon DynamoDB is the only supported persistent storage layer, so you'll need to create a table first.

**Default table configuration**

If you're not [changing the default configuration for the DynamoDB persistence layer](#dynamodbpersistencestore), this is the expected default configuration:

| Configuration | Value | Notes |
| --- | --- | --- |
| Partition key | `id` | |
| TTL attribute name | `expiration` | This can only be configured after your table is created if you're using the AWS Console |

Tip: You can share a single state table for all functions. You can reuse the same DynamoDB table to store idempotency state; we add your function name in addition to the idempotency key as a hash key.
```
Resources:
  IdempotencyTable:
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      TimeToLiveSpecification:
        AttributeName: expiration
        Enabled: true
      BillingMode: PAY_PER_REQUEST
  IdempotencyFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: Function
      Handler: HelloWorld::HelloWorld.Function::FunctionHandler
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref IdempotencyTable
      Environment:
        Variables:
          IDEMPOTENCY_TABLE: !Ref IdempotencyTable
```

Warning: Large responses with DynamoDB persistence layer. When using this utility with DynamoDB, your function's responses must be [smaller than 400KB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html#limits-items). Larger items cannot be written to DynamoDB and will cause exceptions.

Info: DynamoDB. Each function invocation will generally make 2 requests to DynamoDB. If the result returned by your Lambda is less than 1KB, you can expect 2 WCUs per invocation. For retried invocations, you will see 1 WCU and 1 RCU. Review the [DynamoDB pricing documentation](https://aws.amazon.com/dynamodb/pricing/) to estimate the cost.

### Idempotent attribute

You can quickly start by configuring `Idempotency` and using it with the `Idempotent` attribute on your Lambda function.

Important: Initialization and configuration of `Idempotency` must be performed outside the handler, preferably in the constructor.

```
public class Function
{
    public Function()
    {
        Idempotency.Configure(builder => builder.UseDynamoDb("idempotency_table"));
    }

    [Idempotent]
    public Task<string> FunctionHandler(string input, ILambdaContext context)
    {
        return Task.FromResult(input.ToUpper());
    }
}
```

#### Idempotent attribute on another method

You can use the `Idempotent` attribute for any .NET function, not only the Lambda handlers.
When using the `Idempotent` attribute on another method, you must specify which parameter in the method signature carries the data we should use:

- If the method only has one parameter, it will be used by default.
- If there are 2 or more parameters, you must set the `IdempotencyKey` attribute on the parameter to use.

The parameter must be serializable to JSON. We use `System.Text.Json` internally to (de)serialize objects.

```
public class Function
{
    public Function()
    {
        Idempotency.Configure(builder => builder.UseDynamoDb("idempotency_table"));
    }

    public Task<string> FunctionHandler(string input, ILambdaContext context)
    {
        MyInternalMethod("hello", "world");
        return Task.FromResult(input.ToUpper());
    }

    [Idempotent]
    private string MyInternalMethod(string argOne, [IdempotencyKey] string argTwo)
    {
        return "something";
    }
}
```

### Choosing a payload subset for idempotency

Tip: Dealing with always changing payloads. When dealing with an elaborate payload (an API Gateway request, for example), where parts of the payload always change, you should configure the **`EventKeyJmesPath`**.

Use [`IdempotencyConfig`](#customizing-the-default-behavior) to instruct the `Idempotent` attribute to use only a portion of your payload to verify whether a request is idempotent, and therefore should not be retried.

> **Payment scenario** In this example, we have a Lambda handler that creates a payment for a user subscribing to a product. We want to ensure that we don't accidentally charge our customer by subscribing them more than once. Imagine the function executes successfully, but the client never receives the response due to a connection issue. It is safe to retry in this instance, as the idempotent decorator will return a previously saved response. **What we want here** is to instruct Idempotency to use the `user_id` and `product_id` fields from our incoming payload as our idempotency key.
If we were to treat the entire request as our idempotency key, a simple HTTP header change would cause our customer to be charged twice.

Tip: Deserializing JSON strings in payloads for increased accuracy. The payload extracted by the `EventKeyJmesPath` is treated as a string by default, and so will be sensitive to differences in whitespace even when the JSON payload itself is identical. To alter this behaviour, you can use the JMESPath built-in function `powertools_json()` to treat the payload as a JSON object rather than a string.

```
Idempotency.Configure(builder =>
    builder
        .WithOptions(optionsBuilder =>
            optionsBuilder.WithEventKeyJmesPath("powertools_json(Body).[\"user_id\", \"product_id\"]"))
        .UseDynamoDb("idempotency_table"));
```

```
{
  "version": "2.0",
  "routeKey": "ANY /createpayment",
  "rawPath": "/createpayment",
  "rawQueryString": "",
  "headers": {
    "Header1": "value1",
    "Header2": "value2"
  },
  "requestContext": {
    "accountId": "123456789012",
    "apiId": "api-id",
    "domainName": "id.execute-api.us-east-1.amazonaws.com",
    "domainPrefix": "id",
    "http": {
      "method": "POST",
      "path": "/createpayment",
      "protocol": "HTTP/1.1",
      "sourceIp": "ip",
      "userAgent": "agent"
    },
    "requestId": "id",
    "routeKey": "ANY /createpayment",
    "stage": "$default",
    "time": "10/Feb/2021:13:40:43 +0000",
    "timeEpoch": 1612964443723
  },
  "body": "{\"user_id\":\"xyz\",\"product_id\":\"123456789\"}",
  "isBase64Encoded": false
}
```

### Custom key prefix

By default, the idempotency key is prefixed with `[ClassName].[DecoratedMethodName]#[PayloadHash]`.
You can customize this prefix by setting the `KeyPrefix` property in the Idempotency decorator:

```
public class Function
{
    public Function()
    {
        var tableName = Environment.GetEnvironmentVariable("IDEMPOTENCY_TABLE_NAME");
        Idempotency.Configure(builder => builder.UseDynamoDb(tableName));
    }

    [Idempotent(KeyPrefix = "MyCustomKeyPrefix")]
    public APIGatewayProxyResponse FunctionHandler(APIGatewayProxyRequest apigwProxyEvent, ILambdaContext context)
    {
        return TestHelper.TestMethod(apigwProxyEvent);
    }
}
```

### Lambda timeouts

Note: This is automatically done when you decorate your Lambda handler with the [Idempotent attribute](#idempotent-attribute).

To protect against extended failed retries when a [Lambda function times out](https://aws.amazon.com/premiumsupport/knowledge-center/lambda-verify-invocation-timeouts/), Powertools for AWS Lambda (.NET) calculates and includes the remaining invocation time as part of the idempotency record.

Example: If a second invocation happens **after** this timestamp, and the record is marked as `INPROGRESS`, we will execute the invocation again as if it were in the `EXPIRED` state (i.e., the expiration time has elapsed). This means that if an invocation expired during execution, it will be quickly executed again on the next retry.

Important: If you are only using the [Idempotent attribute](#Idempotent-attribute-on-another-method) to guard isolated parts of your code, you must use `RegisterLambdaContext`, available in the `Idempotency` static class, to benefit from this protection.
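The timeout protection described above boils down to a timestamp comparison. As a rough sketch (the helper names are our own; the real record stores this in the `in_progress_expiration` attribute listed in the persistence-store table later):

```csharp
using System;

// Sketch: when the handler starts, store "now + remaining invocation time".
static long InProgressExpiry(DateTimeOffset now, TimeSpan remainingTime)
    => now.Add(remainingTime).ToUnixTimeMilliseconds();

// Sketch: a retry treats an INPROGRESS record as expired once that instant has passed.
static bool CanRetryInProgressRecord(long inProgressExpiryMs, DateTimeOffset now)
    => now.ToUnixTimeMilliseconds() > inProgressExpiryMs;

var start = DateTimeOffset.UtcNow;
var expiry = InProgressExpiry(start, TimeSpan.FromSeconds(30));
Console.WriteLine(CanRetryInProgressRecord(expiry, start)); // still in flight at start
```

Since the remaining time comes from `ILambdaContext`, this is also why `RegisterLambdaContext` is required when you only decorate inner methods.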
Here is an example of how to register the Lambda context in your handler:

```
public class Function
{
    public Function()
    {
        Idempotency.Configure(builder => builder.UseDynamoDb("idempotency_table"));
    }

    public Task<string> FunctionHandler(string input, ILambdaContext context)
    {
        Idempotency.RegisterLambdaContext(context);
        MyInternalMethod("hello", "world");
        return Task.FromResult(input.ToUpper());
    }

    [Idempotent]
    private string MyInternalMethod(string argOne, [IdempotencyKey] string argTwo)
    {
        return "something";
    }
}
```

### Handling exceptions

If you are using the `Idempotent` attribute on your Lambda handler or any other method, any unhandled exceptions that are thrown during the code execution will cause **the record in the persistence layer to be deleted**. This means that new invocations will execute your code again despite having the same payload. If you don't want the record to be deleted, you need to catch exceptions within the idempotent function and return a successful response.

Warning: **We will throw an `IdempotencyPersistenceLayerException`** if any of the calls to the persistence layer fail unexpectedly. As this happens outside the scope of your decorated function, you are not able to catch it. ``` sequenceDiagram participant Client participant Lambda participant Persistence Layer Client->>Lambda: Invoke (event) Lambda->>Persistence Layer: Get or set (id=event.search(payload)) activate Persistence Layer Note right of Persistence Layer: Locked during this time. Prevents multiple
Lambda invocations with the same
payload running concurrently. Lambda--xLambda: Call handler (event).
Raises exception Lambda->>Persistence Layer: Delete record (id=event.search(payload)) deactivate Persistence Layer Lambda-->>Client: Return error response ``` *Idempotent sequence exception*

If you are using the `Idempotent` attribute on another method, any unhandled exceptions that are raised *inside* the decorated function will cause the record in the persistence layer to be deleted, and allow the function to be executed again if retried.

If an exception is raised *outside* the scope of the decorated method and after your method has been called, the persistent record will not be affected. In this case, idempotency will be maintained for your decorated function. Example:

```
public class Function
{
    public Function()
    {
        Idempotency.Configure(builder => builder.UseDynamoDb("idempotency_table"));
    }

    public Task FunctionHandler(string input, ILambdaContext context)
    {
        Idempotency.RegisterLambdaContext(context);

        // If an exception is thrown here, no idempotent record will ever get created as the
        // idempotent method does not get called
        MyInternalMethod("hello", "world");

        // This exception will not cause the idempotent record to be deleted, since it
        // happens after the decorated method has been successfully called
        throw new Exception();
    }

    [Idempotent]
    private string MyInternalMethod(string argOne, [IdempotencyKey] string argTwo)
    {
        return "something";
    }
}
```

### Idempotency request flow

The following sequence diagrams explain how the Idempotency feature behaves under different scenarios.

#### Successful request

``` sequenceDiagram participant Client participant Lambda participant Persistence Layer alt initial request Client->>Lambda: Invoke (event) Lambda->>Persistence Layer: Get or set idempotency_key=hash(payload) activate Persistence Layer Note over Lambda,Persistence Layer: Set record status to INPROGRESS.
Prevents concurrent invocations
with the same payload Lambda-->>Lambda: Call your function Lambda->>Persistence Layer: Update record with result deactivate Persistence Layer Persistence Layer-->>Persistence Layer: Update record Note over Lambda,Persistence Layer: Set record status to COMPLETE.
New invocations with the same payload
now return the same result Lambda-->>Client: Response sent to client else retried request Client->>Lambda: Invoke (event) Lambda->>Persistence Layer: Get or set idempotency_key=hash(payload) activate Persistence Layer Persistence Layer-->>Lambda: Already exists in persistence layer. deactivate Persistence Layer Note over Lambda,Persistence Layer: Record status is COMPLETE and not expired Lambda-->>Client: Same response sent to client end ``` *Idempotent successful request* #### Successful request with cache enabled [In-memory cache is disabled by default](#using-in-memory-cache). ``` sequenceDiagram participant Client participant Lambda participant Persistence Layer alt initial request Client->>Lambda: Invoke (event) Lambda->>Persistence Layer: Get or set idempotency_key=hash(payload) activate Persistence Layer Note over Lambda,Persistence Layer: Set record status to INPROGRESS.
Prevents concurrent invocations
with the same payload Lambda-->>Lambda: Call your function Lambda->>Persistence Layer: Update record with result deactivate Persistence Layer Persistence Layer-->>Persistence Layer: Update record Note over Lambda,Persistence Layer: Set record status to COMPLETE.
New invocations with the same payload
now return the same result Lambda-->>Lambda: Save record and result in memory Lambda-->>Client: Response sent to client else retried request Client->>Lambda: Invoke (event) Lambda-->>Lambda: Get idempotency_key=hash(payload) Note over Lambda,Persistence Layer: Record status is COMPLETE and not expired Lambda-->>Client: Same response sent to client end ``` *Idempotent successful request cached* #### Expired idempotency records ``` sequenceDiagram participant Client participant Lambda participant Persistence Layer alt initial request Client->>Lambda: Invoke (event) Lambda->>Persistence Layer: Get or set idempotency_key=hash(payload) activate Persistence Layer Note over Lambda,Persistence Layer: Set record status to INPROGRESS.
Prevents concurrent invocations
with the same payload Lambda-->>Lambda: Call your function Lambda->>Persistence Layer: Update record with result deactivate Persistence Layer Persistence Layer-->>Persistence Layer: Update record Note over Lambda,Persistence Layer: Set record status to COMPLETE.
New invocations with the same payload
now return the same result Lambda-->>Client: Response sent to client else retried request Client->>Lambda: Invoke (event) Lambda->>Persistence Layer: Get or set idempotency_key=hash(payload) activate Persistence Layer Persistence Layer-->>Lambda: Already exists in persistence layer. deactivate Persistence Layer Note over Lambda,Persistence Layer: Record status is COMPLETE but expired hours ago loop Repeat initial request process Note over Lambda,Persistence Layer: 1. Set record to INPROGRESS,
2. Call your function,
3. Set record to COMPLETE end Lambda-->>Client: Same response sent to client end ``` *Previous Idempotent request expired* #### Concurrent identical in-flight requests ``` sequenceDiagram participant Client participant Lambda participant Persistence Layer Client->>Lambda: Invoke (event) Lambda->>Persistence Layer: Get or set idempotency_key=hash(payload) activate Persistence Layer Note over Lambda,Persistence Layer: Set record status to INPROGRESS.
Prevents concurrent invocations
with the same payload par Second request Client->>Lambda: Invoke (event) Lambda->>Persistence Layer: Get or set idempotency_key=hash(payload) Lambda--xLambda: IdempotencyAlreadyInProgressError Lambda->>Client: Error sent to client if unhandled end Lambda-->>Lambda: Call your function Lambda->>Persistence Layer: Update record with result deactivate Persistence Layer Persistence Layer-->>Persistence Layer: Update record Note over Lambda,Persistence Layer: Set record status to COMPLETE.
New invocations with the same payload
now return the same result Lambda-->>Client: Response sent to client ``` *Concurrent identical in-flight requests* #### Lambda request timeout ``` sequenceDiagram participant Client participant Lambda participant Persistence Layer alt initial request Client->>Lambda: Invoke (event) Lambda->>Persistence Layer: Get or set idempotency_key=hash(payload) activate Persistence Layer Note over Lambda,Persistence Layer: Set record status to INPROGRESS.
Prevents concurrent invocations
with the same payload Lambda-->>Lambda: Call your function Note right of Lambda: Time out Lambda--xLambda: Time out error Lambda-->>Client: Return error response deactivate Persistence Layer else retry after Lambda timeout elapses Client->>Lambda: Invoke (event) Lambda->>Persistence Layer: Get or set idempotency_key=hash(payload) activate Persistence Layer Note over Lambda,Persistence Layer: Set record status to INPROGRESS.
Reset in_progress_expiry attribute Lambda-->>Lambda: Call your function Lambda->>Persistence Layer: Update record with result deactivate Persistence Layer Persistence Layer-->>Persistence Layer: Update record Lambda-->>Client: Response sent to client end ``` *Idempotent request during and after Lambda timeouts* #### Optional idempotency key ``` sequenceDiagram participant Client participant Lambda participant Persistence Layer alt request with idempotency key Client->>Lambda: Invoke (event) Lambda->>Persistence Layer: Get or set idempotency_key=hash(payload) activate Persistence Layer Note over Lambda,Persistence Layer: Set record status to INPROGRESS.
Prevents concurrent invocations
with the same payload Lambda-->>Lambda: Call your function Lambda->>Persistence Layer: Update record with result deactivate Persistence Layer Persistence Layer-->>Persistence Layer: Update record Note over Lambda,Persistence Layer: Set record status to COMPLETE.
New invocations with the same payload
now return the same result Lambda-->>Client: Response sent to client else request(s) without idempotency key Client->>Lambda: Invoke (event) Note over Lambda: Idempotency key is missing Note over Persistence Layer: Skips any operation to fetch, update, and delete Lambda-->>Lambda: Call your function Lambda-->>Client: Response sent to client end ```

*Optional idempotency key*

## Advanced

### Persistence stores

#### DynamoDBPersistenceStore

This persistence store is built-in, and you can either use an existing DynamoDB table or create a new one dedicated to idempotency state (recommended). Use the builder to customize the table structure:

```
new DynamoDBPersistenceStoreBuilder()
    .WithTableName("TABLE_NAME")
    .WithKeyAttr("idempotency_key")
    .WithExpiryAttr("expires_at")
    .WithStatusAttr("current_status")
    .WithDataAttr("result_data")
    .WithValidationAttr("validation_key")
    .WithInProgressExpiryAttr("in_progress_expires_at")
    .Build()
```

When using DynamoDB as a persistence layer, you can alter the attribute names by passing these parameters when initializing the persistence layer:

| Parameter | Required | Default | Description |
| --- | --- | --- | --- |
| **TableName** | Y | | Table name to store state |
| **KeyAttr** | | `id` | Partition key of the table. Hashed representation of the payload (unless **SortKeyAttr** is specified) |
| **ExpiryAttr** | | `expiration` | Unix timestamp of when the record expires |
| **InProgressExpiryAttr** | | `in_progress_expiration` | Unix timestamp of when the record expires while in progress (in case the invocation times out) |
| **StatusAttr** | | `status` | Stores the status of the Lambda execution during and after invocation |
| **DataAttr** | | `data` | Stores the results of successful idempotent methods |
| **ValidationAttr** | | `validation` | Hashed representation of the parts of the event used for validation |
| **SortKeyAttr** | | | Sort key of the table (if the table is configured with a sort key) |
| **StaticPkValue** | | `idempotency#{LAMBDA_FUNCTION_NAME}` | Static value to use as the partition key. Only used when **SortKeyAttr** is set. |

### Customizing the default behavior

Idempotency behavior can be further configured with **`IdempotencyOptions`** using a builder:

```
new IdempotencyOptionsBuilder()
    .WithEventKeyJmesPath("id")
    .WithPayloadValidationJmesPath("paymentId")
    .WithThrowOnNoIdempotencyKey(true)
    .WithExpiration(TimeSpan.FromMinutes(1))
    .WithUseLocalCache(true)
    .WithHashFunction("MD5")
    .Build();
```

These are the available options for further configuration:

| Parameter | Default | Description |
| --- | --- | --- |
| **EventKeyJMESPath** | `""` | JMESPath expression to extract the idempotency key from the event record |
| **PayloadValidationJMESPath** | `""` | JMESPath expression to validate whether certain parameters have changed in the event |
| **ThrowOnNoIdempotencyKey** | `False` | Throw an exception if no idempotency key was found in the request |
| **ExpirationInSeconds** | 3600 | The number of seconds to wait before a record is expired |
| **UseLocalCache** | `false` | Whether to locally cache idempotency results (LRU cache) |
| **HashFunction** | `MD5` | Algorithm to use for calculating hashes, as supported by `System.Security.Cryptography.HashAlgorithm` (e.g. SHA1, SHA256, ...) |

These features are detailed below.

### Handling concurrent executions with the same payload

This utility will throw an **`IdempotencyAlreadyInProgressException`** if we receive **multiple invocations with the same payload while the first invocation hasn't completed yet**.

Info: If you receive `IdempotencyAlreadyInProgressException`, you can safely retry the operation.

This is a locking mechanism for correctness. Since we don't know the result from the first invocation yet, we can't safely allow another concurrent execution.

### Using in-memory cache

**By default, in-memory local caching is disabled**, to avoid using memory in an unpredictable way.
Warning: Be sure to configure the Lambda memory according to the number of records and the potential size of each record.

You can enable it as seen before with:

```
new IdempotencyOptionsBuilder()
    .WithUseLocalCache(true)
    .Build()
```

When enabled, we cache a maximum of 255 records in each Lambda execution environment.

Note: This in-memory cache is local to each Lambda execution environment. This means it will be effective in cases where your function's concurrency is low in comparison to the number of "retry" invocations with the same payload, because the cache might otherwise be empty.

### Expiring idempotency records

Note: By default, we expire idempotency records after **an hour** (3600 seconds).

In most cases, it is not desirable to store the idempotency records forever. Rather, you want to guarantee that the same payload won't be executed within a period of time. You can change this window with the **`ExpirationInSeconds`** parameter:

```
new IdempotencyOptionsBuilder()
    .WithExpiration(TimeSpan.FromMinutes(5))
    .Build()
```

Records older than 5 minutes will be marked as expired, and the Lambda handler will be executed normally even if it is invoked with a matching payload.

Note: DynamoDB time-to-live field. This utility uses **`expiration`** as the TTL field in DynamoDB, as [demonstrated in the SAM example earlier](#required-resources).

### Payload validation

Question: What if your function is invoked with the same payload, except some outer parameters have changed?

Example: A payment transaction for a given productID was requested twice for the same customer, **however the amount to be paid has changed in the second transaction**. By default, we will return the same result as before; in this instance that may be misleading, so we provide fail-fast payload validation to address this edge case.
With **`PayloadValidationJMESPath`**, you can provide an additional JMESPath expression to specify which part of the event body should be validated against previous idempotent invocations:

```
Idempotency.Configure(builder =>
    builder
        .WithOptions(optionsBuilder => optionsBuilder
            .WithEventKeyJmesPath("[userDetail, productId]")
            .WithPayloadValidationJmesPath("amount"))
        .UseDynamoDb("TABLE_NAME"));
```

```
{
  "userDetail": {
    "username": "User1",
    "user_email": "user@example.com"
  },
  "productId": 1500,
  "charge_type": "subscription",
  "amount": 500
}
```

```
{
  "userDetail": {
    "username": "User1",
    "user_email": "user@example.com"
  },
  "productId": 1500,
  "charge_type": "subscription",
  "amount": 1
}
```

In this example, the **`userDetail`** and **`productId`** keys are used as the payload to generate the idempotency key, as per the **`EventKeyJMESPath`** parameter.

Note: If we try to send the same request but with a different amount, we will raise **`IdempotencyValidationException`**.

Without payload validation, we would have returned the same result as we did for the initial request. Since we're also returning an amount in the response, this could be quite confusing for the client. By using **`WithPayloadValidationJmesPath("amount")`**, we prevent this potentially confusing behavior and instead throw an exception.

### Making idempotency key required

If you want to enforce that an idempotency key is required, you can set **`ThrowOnNoIdempotencyKey`** to `True`. This means that we will throw **`IdempotencyKeyException`** if the evaluation of **`EventKeyJMESPath`** is `null`.

```
public App()
{
    Idempotency.Configure(builder =>
        builder
            .WithOptions(optionsBuilder => optionsBuilder
                // Requires "user"."uid" and "orderId" to be present
                .WithEventKeyJmesPath("[user.uid, orderId]")
                .WithThrowOnNoIdempotencyKey(true))
            .UseDynamoDb("TABLE_NAME"));
}

[Idempotent]
public Task FunctionHandler(Order input, ILambdaContext context)
{
    // ...
}
```

```
{
  "user": {
    "uid": "BB0D045C-8878-40C8-889E-38B3CB0A61B1",
    "name": "Foo"
  },
  "orderId": 10000
}
```

Notice that `orderId` is now accidentally nested within the `user` key:

```
{
  "user": {
    "uid": "DE0D000E-1234-10D1-991E-EAC1DD1D52C8",
    "name": "Joe Bloggs",
    "orderId": 10000
  }
}
```

### Customizing DynamoDB configuration

When creating the `DynamoDBPersistenceStore`, you can set a custom [`AmazonDynamoDBClient`](https://docs.aws.amazon.com/sdkfornet1/latest/apidocs/html/T_Amazon_DynamoDB_AmazonDynamoDBClient.htm) if you need to customize the configuration:

```
public Function()
{
    AmazonDynamoDBClient customClient = new AmazonDynamoDBClient(RegionEndpoint.APSouth1);

    Idempotency.Configure(builder =>
        builder.UseDynamoDb(storeBuilder => storeBuilder
            .WithTableName("TABLE_NAME")
            .WithDynamoDBClient(customClient)));
}
```

### Using a DynamoDB table with a composite primary key

When using a composite primary key table (hash+range key), use the `SortKeyAttr` parameter when initializing your persistence store. With this setting, we will save the idempotency key in the sort key instead of the primary key. By default, the primary key will now be set to `idempotency#{LAMBDA_FUNCTION_NAME}`. You can optionally set a static value for the partition key using the `StaticPkValue` parameter.

``` Idempotency.Configure(builder => builder.UseDynamoDb(storeBuilder => storeBuilder.
WithTableName("TABLE_NAME")
    .WithSortKeyAttr("sort_key")
));
```

Data would then be stored in DynamoDB like this:

| id | sort_key | expiration | status | data |
| --- | --- | --- | --- | --- |
| idempotency#MyLambdaFunction | 1e956ef7da78d0cb890be999aecc0c9e | 1636549553 | COMPLETED | {"id": 12391, "message": "success"} |
| idempotency#MyLambdaFunction | 2b2cdb5f86361e97b4383087c1ffdf27 | 1636549571 | COMPLETED | {"id": 527212, "message": "success"} |
| idempotency#MyLambdaFunction | f091d2527ad1c78f05d54cc3f363be80 | 1636549585 | INPROGRESS | |

## AOT Support

Native AOT trims your application code as part of the compilation to ensure that the binary is as small as possible. .NET 8 for Lambda provides improved trimming support compared to previous versions of .NET.

### WithJsonSerializationContext()

To use the Idempotency utility with AOT support, you first need to add `WithJsonSerializationContext()` to your `Idempotency` configuration. This ensures that when serializing your payload, the utility uses the correct serialization context.
In the example below, we use the default `LambdaFunctionJsonSerializerContext`:

```
Idempotency.Configure(builder =>
    builder.WithJsonSerializationContext(LambdaFunctionJsonSerializerContext.Default));
```

Full example:

```
public static class Function
{
    private static async Task Main()
    {
        var tableName = Environment.GetEnvironmentVariable("IDEMPOTENCY_TABLE_NAME");
        Idempotency.Configure(builder =>
            builder
                .WithJsonSerializationContext(LambdaFunctionJsonSerializerContext.Default)
                .WithOptions(optionsBuilder => optionsBuilder
                    .WithExpiration(TimeSpan.FromHours(1)))
                .UseDynamoDb(storeBuilder => storeBuilder
                    .WithTableName(tableName)));

        Func<APIGatewayProxyRequest, ILambdaContext, APIGatewayProxyResponse> handler = FunctionHandler;
        await LambdaBootstrapBuilder.Create(handler,
                new SourceGeneratorLambdaJsonSerializer<LambdaFunctionJsonSerializerContext>())
            .Build()
            .RunAsync();
    }

    [Idempotent]
    public static APIGatewayProxyResponse FunctionHandler(APIGatewayProxyRequest apigwProxyEvent,
        ILambdaContext context)
    {
        var response = new Response(); // result of your business logic

        return new APIGatewayProxyResponse
        {
            Body = JsonSerializer.Serialize(response, typeof(Response), LambdaFunctionJsonSerializerContext.Default),
            StatusCode = 200,
            Headers = new Dictionary<string, string> { { "Content-Type", "application/json" } }
        };
    }
}

[JsonSerializable(typeof(APIGatewayProxyRequest))]
[JsonSerializable(typeof(APIGatewayProxyResponse))]
[JsonSerializable(typeof(Response))]
public partial class LambdaFunctionJsonSerializerContext : JsonSerializerContext
{
}
```

## Testing your code

The idempotency utility provides several routes to test your code. You can check our integration tests, which use [TestContainers](https://testcontainers.com/modules/dynamodb/) with a local DynamoDB instance, or our end-to-end tests, which use the AWS SDK to interact with a real DynamoDB table.

### Disabling the idempotency utility

When testing your code, you may wish to disable the idempotency logic altogether and focus on testing your business logic. To do this, you can set the environment variable `POWERTOOLS_IDEMPOTENCY_DISABLED` to true.
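In a test, a simple pattern is to set the variable before constructing the function under test and restore it afterwards so other tests are unaffected. Only the `POWERTOOLS_IDEMPOTENCY_DISABLED` variable name comes from the utility; the rest of this sketch is a generic setup/teardown shape:

```csharp
using System;

// Illustrative test setup: disable idempotency for the duration of a test
Environment.SetEnvironmentVariable("POWERTOOLS_IDEMPOTENCY_DISABLED", "true");
try
{
    // ... construct and invoke your Function here; the idempotency logic is skipped
}
finally
{
    // Restore the environment so other tests keep idempotency enabled
    Environment.SetEnvironmentVariable("POWERTOOLS_IDEMPOTENCY_DISABLED", null);
}
```

In xUnit, the same pattern fits naturally into a fixture's constructor and `Dispose` method.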
## Extra resources

If you're interested in a deep dive on how Amazon uses idempotency when building our APIs, check out [this article](https://aws.amazon.com/builders-library/making-retries-safe-with-idempotent-APIs/).

Tip

JMESPath is a query language for JSON used by AWS CLI, AWS Python SDK, and Powertools for AWS Lambda.

This utility provides built-in JMESPath functions to easily deserialize common encoded JSON payloads in Lambda functions.

## Key features

- Deserialize JSON from JSON strings, base64, and compressed data
- Use JMESPath to extract and combine data recursively
- Provides commonly used JMESPath expressions for popular event sources

## Getting started

Tip

All examples shared in this documentation are available within the [project repository](https://github.com/aws-powertools/powertools-lambda-dotnet/tree/develop/libraries/tests/AWS.Lambda.Powertools.JMESPath.Tests/JmesPathExamples.cs).

You might have events that contain encoded JSON payloads as strings, base64, or even in compressed format. It is a common use case to decode and extract them, partially or fully, as part of your Lambda function invocation.

Terminology

**Envelope** is the term we use for the **JMESPath expression** that extracts your JSON object from your data input. We might use those two terms interchangeably.

### Extracting data

You can use the `JsonTransformer.Transform` function with any [JMESPath expression](https://jmespath.org/tutorial.html).

Tip

Another common use case is to fetch deeply nested data, filter, flatten, and more.
```
var transformer = JsonTransformer.Parse("powertools_json(body).customerId");
using var result = transformer.Transform(doc.RootElement);

Logger.LogInformation(result.RootElement.GetRawText()); // "dd4649e6-2484-4993-acb8-0f9123103394"
```

```
{
  "body": "{\"customerId\":\"dd4649e6-2484-4993-acb8-0f9123103394\"}",
  "deeply_nested": [
    {
      "some_data": [1, 2, 3]
    }
  ]
}
```

### Built-in envelopes

We provide built-in envelopes for popular AWS Lambda event sources to easily decode and/or deserialize JSON objects.

| Envelope | JMESPath expression |
| --- | --- |
| API_GATEWAY_HTTP | `powertools_json(body)` |
| API_GATEWAY_REST | `powertools_json(body)` |
| CLOUDWATCH_LOGS | `awslogs.powertools_base64_gzip(data) \| powertools_json(@).logEvents[*]` |
| KINESIS_DATA_STREAM | `Records[*].kinesis.powertools_json(powertools_base64(data))` |
| SNS | `Records[*].Sns.Message \| powertools_json(@)` |
| SQS | `Records[*].powertools_json(body)` |

Using SNS?

If you don't require SNS metadata, enable [raw message delivery](https://docs.aws.amazon.com/sns/latest/dg/sns-large-payload-raw-message-delivery.html). It will reduce multiple payload layers and size when using SNS in combination with other services (*e.g., SQS, S3, etc.*).

## Advanced

### Built-in JMESPath functions

You can use our built-in JMESPath functions within your envelope expression. They handle deserialization for common data formats found in AWS Lambda event sources, such as JSON strings, base64, and gzip-compressed data.

#### powertools_json function

Use the `powertools_json` function to decode any JSON string anywhere a JMESPath expression is allowed.

> **Idempotency scenario**

This sample will deserialize the JSON string within the `body` key before [Idempotency](../idempotency/) processes it.
```
Idempotency.Configure(builder =>
    builder
        .WithOptions(optionsBuilder =>
            optionsBuilder.WithEventKeyJmesPath("powertools_json(Body).[\"user_id\", \"product_id\"]"))
        .UseDynamoDb("idempotency_table"));
```

```
{
  "version": "2.0",
  "routeKey": "ANY /createpayment",
  "rawPath": "/createpayment",
  "rawQueryString": "",
  "headers": {
    "Header1": "value1",
    "Header2": "value2"
  },
  "requestContext": {
    "accountId": "123456789012",
    "apiId": "api-id",
    "domainName": "id.execute-api.us-east-1.amazonaws.com",
    "domainPrefix": "id",
    "http": {
      "method": "POST",
      "path": "/createpayment",
      "protocol": "HTTP/1.1",
      "sourceIp": "ip",
      "userAgent": "agent"
    },
    "requestId": "id",
    "routeKey": "ANY /createpayment",
    "stage": "$default",
    "time": "10/Feb/2021:13:40:43 +0000",
    "timeEpoch": 1612964443723
  },
  "body": "{\"user_id\":\"xyz\",\"product_id\":\"123456789\"}",
  "isBase64Encoded": false
}
```

#### powertools_base64 function

Use the `powertools_base64` function to decode any base64 data.

This sample will decode the base64 value within the `body` key, and deserialize the JSON string.

```
var transformer = JsonTransformer.Parse("powertools_base64(body).customerId");
using var result = transformer.Transform(doc.RootElement);

Logger.LogInformation(result.RootElement.GetRawText()); // "dd4649e6-2484-4993-acb8-0f9123103394"
```

```
{
  "body": "eyJjdXN0b21lcklkIjoiZGQ0NjQ5ZTYtMjQ4NC00OTkzLWFjYjgtMGY5MTIzMTAzMzk0In0=",
  "deeply_nested": [
    {
      "some_data": [1, 2, 3]
    }
  ]
}
```

#### powertools_base64_gzip function

Use the `powertools_base64_gzip` function to decompress and decode base64 data.

This sample will decompress and decode base64 data from CloudWatch Logs, then deserialize the resulting JSON string.
```
var transformer = JsonTransformer.Parse("powertools_base64_gzip(body).customerId");
using var result = transformer.Transform(doc.RootElement);

Logger.LogInformation(result.RootElement.GetRawText()); // "dd4649e6-2484-4993-acb8-0f9123103394"
```

```
{
  "body": "H4sIAAAAAAAAA6tWSi4tLsnPTS3yTFGyUkpJMTEzsUw10zUysTDRNbG0NNZNTE6y0DVIszQ0MjY0MDa2NFGqBQCMzDWgNQAAAA==",
  "deeply_nested": [
    {
      "some_data": [1, 2, 3]
    }
  ]
}
```

The Parameters utility provides high-level functionality to retrieve one or multiple parameter values from [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html), [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/), [Amazon DynamoDB](https://aws.amazon.com/dynamodb/), or [AWS AppConfig](https://docs.aws.amazon.com/appconfig/latest/userguide/what-is-appconfig.html). We also provide extensibility to bring your own providers.

## Key features

- Retrieve one or multiple parameters from the underlying provider
- Cache parameter values for a given amount of time (defaults to 5 seconds)
- Transform parameter values from JSON or base64-encoded strings
- Bring your own parameter store provider

## Installation

Powertools for AWS Lambda (.NET) is available as NuGet packages. You can install the packages from the [NuGet Gallery](https://www.nuget.org/packages?q=AWS+Lambda+Powertools*) or from the Visual Studio editor by searching `AWS.Lambda.Powertools*` to see the various utilities available.

- [AWS.Lambda.Powertools.Parameters](https://www.nuget.org/packages?q=AWS.Lambda.Powertools.Parameters): `dotnet add package AWS.Lambda.Powertools.Parameters`

**IAM Permissions**

This utility requires additional permissions to work as expected.
See the table below:

| Provider | Function/Method | IAM Permission |
| --- | --- | --- |
| SSM Parameter Store | `SsmProvider.Get(string)` | `ssm:GetParameter` |
| SSM Parameter Store | `SsmProvider.GetMultiple(string)` | `ssm:GetParametersByPath` |
| SSM Parameter Store | If using the **`WithDecryption()`** option | You must add the additional permission `kms:Decrypt` |
| Secrets Manager | `SecretsProvider.Get(string)` | `secretsmanager:GetSecretValue` |
| DynamoDB | `DynamoDBProvider.Get(string)` | `dynamodb:GetItem` |
| DynamoDB | `DynamoDBProvider.GetMultiple(string)` | `dynamodb:Query` |
| App Config | `AppConfigProvider.Get()` | `appconfig:StartConfigurationSession` `appconfig:GetLatestConfiguration` |

## SSM Parameter Store

You can retrieve a single parameter using `SsmProvider.Get()` and pass the key of the parameter. For multiple parameters, use `SsmProvider.GetMultiple()` and pass the path to retrieve them all. Alternatively, you can retrieve the provider instance and configure its underlying SDK client in order to get data from other regions or use specific credentials.

```
using AWS.Lambda.Powertools.Parameters;
using AWS.Lambda.Powertools.Parameters.SimpleSystemsManagement;

public class Function
{
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        // Get SSM Provider instance
        ISsmProvider ssmProvider = ParametersManager.SsmProvider;

        // Retrieve a single parameter
        string? value = await ssmProvider
            .GetAsync("/my/parameter")
            .ConfigureAwait(false);

        // Retrieve multiple parameters from a path prefix
        // This returns a Dictionary with the parameter name as key
        IDictionary<string, string?> values = await ssmProvider
            .GetMultipleAsync("/my/path/prefix")
            .ConfigureAwait(false);
    }
}
```

```
using Amazon;
using AWS.Lambda.Powertools.Parameters;
using AWS.Lambda.Powertools.Parameters.SimpleSystemsManagement;

public class Function
{
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        // Get SSM Provider instance, targeting a specific region
        ISsmProvider ssmProvider = ParametersManager.SsmProvider
            .ConfigureClient(RegionEndpoint.EUCentral1);

        // Retrieve a single parameter
        string? value = await ssmProvider
            .GetAsync("/my/parameter")
            .ConfigureAwait(false);

        // Retrieve multiple parameters from a path prefix
        // This returns a Dictionary with the parameter name as key
        IDictionary<string, string?> values = await ssmProvider
            .GetMultipleAsync("/my/path/prefix")
            .ConfigureAwait(false);
    }
}
```

```
using Amazon.SimpleSystemsManagement;
using AWS.Lambda.Powertools.Parameters;
using AWS.Lambda.Powertools.Parameters.SimpleSystemsManagement;

public class Function
{
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        // Create a new instance of the SDK client
        IAmazonSimpleSystemsManagement client = new AmazonSimpleSystemsManagementClient();

        // Get SSM Provider instance using your own client
        ISsmProvider ssmProvider = ParametersManager.SsmProvider
            .UseClient(client);

        // Retrieve a single parameter
        string? value = await ssmProvider
            .GetAsync("/my/parameter")
            .ConfigureAwait(false);

        // Retrieve multiple parameters from a path prefix
        // This returns a Dictionary with the parameter name as key
        IDictionary<string, string?> values = await ssmProvider
            .GetMultipleAsync("/my/path/prefix")
            .ConfigureAwait(false);
    }
}
```

### Additional arguments

The AWS Systems Manager Parameter Store provider supports two additional arguments for the `Get()` and `GetMultiple()` methods:

| Option | Default | Description |
| --- | --- | --- |
| **WithDecryption()** | `False` | Will automatically decrypt the parameter. |
| **Recursive()** | `False` | For `GetMultiple()` only; will fetch all parameter values recursively based on a path prefix. |

You can create `SecureString` parameters, which are parameters that have a plaintext parameter name and an encrypted parameter value. If you don't use the `WithDecryption()` option, you will get an encrypted value. Read [here](https://docs.aws.amazon.com/kms/latest/developerguide/services-parameter-store.html) about best practices for using KMS to secure your parameters.

**Example:**

```
using AWS.Lambda.Powertools.Parameters;
using AWS.Lambda.Powertools.Parameters.SimpleSystemsManagement;

public class Function
{
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        // Get SSM Provider instance
        ISsmProvider ssmProvider = ParametersManager.SsmProvider;

        // Retrieve a single parameter, decrypting its value
        string? value = await ssmProvider
            .WithDecryption()
            .GetAsync("/my/parameter")
            .ConfigureAwait(false);

        // Retrieve multiple parameters recursively from a path prefix
        // This returns a Dictionary with the parameter name as key
        IDictionary<string, string?> values = await ssmProvider
            .Recursive()
            .GetMultipleAsync("/my/path/prefix")
            .ConfigureAwait(false);
    }
}
```

## Secrets Manager

For secrets stored in Secrets Manager, use `SecretsProvider`.
Alternatively, you can retrieve the instance of provider and configure its underlying SDK client, in order to get data from other regions or use specific credentials. ``` using AWS.Lambda.Powertools.Parameters; using AWS.Lambda.Powertools.Parameters.SecretsManager; public class Function { public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { // Get Secrets Provider instance ISecretsProvider secretsProvider = ParametersManager.SecretsProvider; // Retrieve a single secret string? value = await secretsProvider .GetAsync("/my/secret") .ConfigureAwait(false); } } ``` ``` using AWS.Lambda.Powertools.Parameters; using AWS.Lambda.Powertools.Parameters.SecretsManager; public class Function { public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { // Get Secrets Provider instance ISecretsProvider secretsProvider = ParametersManager.SecretsProvider .ConfigureClient(RegionEndpoint.EUCentral1); // Retrieve a single secret string? value = await secretsProvider .GetAsync("/my/secret") .ConfigureAwait(false); } } ``` ``` using Amazon.SecretsManager; using AWS.Lambda.Powertools.Parameters; using AWS.Lambda.Powertools.Parameters.SecretsManager; public class Function { public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { // Create a new instance of client IAmazonSecretsManager client = new AmazonSecretsManagerClient(); // Get Secrets Provider instance ISecretsProvider secretsProvider = ParametersManager.SecretsProvider .UseClient(client); // Retrieve a single secret string? value = await secretsProvider .GetAsync("/my/secret") .ConfigureAwait(false); } } ``` ## DynamoDB Provider For parameters stored in a DynamoDB table, use `DynamoDBProvider`. 
**DynamoDB table structure for single parameters**

For single parameters, you must use `id` as the [partition key](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html#HowItWorks.CoreComponents.PrimaryKey) for that table.

Example DynamoDB table with `id` partition key and `value` as attribute:

| id | value |
| --- | --- |
| my-param | my-value |

With this table, `DynamoDBProvider.Get("my-param")` will return `my-value`.

```
using AWS.Lambda.Powertools.Parameters;
using AWS.Lambda.Powertools.Parameters.DynamoDB;

public class Function
{
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        // Get DynamoDB Provider instance
        IDynamoDBProvider dynamoDbProvider = ParametersManager.DynamoDBProvider
            .UseTable("my-table");

        // Retrieve a single parameter
        string? value = await dynamoDbProvider
            .GetAsync("my-param")
            .ConfigureAwait(false);
    }
}
```

**DynamoDB table structure for multiple values parameters**

You can retrieve multiple parameters sharing the same `id` by having a sort key named `sk`.

Example DynamoDB table with `id` as partition key, `sk` as sort key, and `value` as attribute:

| id | sk | value |
| --- | --- | --- |
| my-hash-key | param-a | my-value-a |
| my-hash-key | param-b | my-value-b |
| my-hash-key | param-c | my-value-c |

With this table, `DynamoDBProvider.GetMultiple("my-hash-key")` will return a dictionary response in the shape of `sk:value`.
```
using AWS.Lambda.Powertools.Parameters;
using AWS.Lambda.Powertools.Parameters.DynamoDB;

public class Function
{
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        // Get DynamoDB Provider instance
        IDynamoDBProvider dynamoDbProvider = ParametersManager.DynamoDBProvider
            .UseTable("my-table");

        // Retrieve multiple parameters sharing the same hash key
        IDictionary<string, string?> values = await dynamoDbProvider
            .GetMultipleAsync("my-hash-key")
            .ConfigureAwait(false);
    }
}
```

```
{
  "param-a": "my-value-a",
  "param-b": "my-value-b",
  "param-c": "my-value-c"
}
```

**Customizing DynamoDBProvider**

The DynamoDB provider can be customized at initialization to match your table structure:

| Parameter | Mandatory | Default | Description |
| --- | --- | --- | --- |
| **tableName** | **Yes** | *(N/A)* | Name of the DynamoDB table containing the parameter values. |
| **primaryKeyAttribute** | No | `id` | Partition key for the DynamoDB table. |
| **sortKeyAttribute** | No | `sk` | Sort key for the DynamoDB table. You don't need to set this if you don't use the `GetMultiple()` method. |
| **valueAttribute** | No | `value` | Name of the attribute containing the parameter value. |

```
using AWS.Lambda.Powertools.Parameters;
using AWS.Lambda.Powertools.Parameters.DynamoDB;

public class Function
{
    public async Task FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
    {
        // Get DynamoDB Provider instance
        IDynamoDBProvider dynamoDbProvider = ParametersManager.DynamoDBProvider
            .UseTable
            (
                tableName: "TableName",     // DynamoDB table name, required
                primaryKeyAttribute: "id",  // Partition key attribute name, optional, default is 'id'
                sortKeyAttribute: "sk",     // Sort key attribute name, optional, default is 'sk'
                valueAttribute: "value"     // Value attribute name, optional, default is 'value'
            );
    }
}
```

## App Configurations

For application configurations in AWS AppConfig, use `AppConfigProvider`.
Alternatively, you can retrieve the instance of provider and configure its underlying SDK client, in order to get data from other regions or use specific credentials. ``` using AWS.Lambda.Powertools.Parameters; using AWS.Lambda.Powertools.Parameters.AppConfig; public class Function { public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { // Get AppConfig Provider instance IAppConfigProvider appConfigProvider = ParametersManager.AppConfigProvider .DefaultApplication("MyApplicationId") .DefaultEnvironment("MyEnvironmentId") .DefaultConfigProfile("MyConfigProfileId"); // Retrieve a single configuration, latest version IDictionary value = await appConfigProvider .GetAsync() .ConfigureAwait(false); } } ``` ``` using AWS.Lambda.Powertools.Parameters; using AWS.Lambda.Powertools.Parameters.AppConfig; public class Function { public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { // Get AppConfig Provider instance IAppConfigProvider appConfigProvider = ParametersManager.AppConfigProvider .ConfigureClient(RegionEndpoint.EUCentral1) .DefaultApplication("MyApplicationId") .DefaultEnvironment("MyEnvironmentId") .DefaultConfigProfile("MyConfigProfileId"); // Retrieve a single configuration, latest version IDictionary value = await appConfigProvider .GetAsync() .ConfigureAwait(false); } } ``` **Using AWS AppConfig Feature Flags** Feature flagging is a powerful tool that allows safely pushing out new features in a measured and usually gradual way. AppConfig provider offers helper methods to make it easier to work with feature flags. 
``` using AWS.Lambda.Powertools.Parameters; using AWS.Lambda.Powertools.Parameters.AppConfig; public class Function { public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { // Get AppConfig Provider instance IAppConfigProvider appConfigProvider = ParametersManager.AppConfigProvider .DefaultApplication("MyApplicationId") .DefaultEnvironment("MyEnvironmentId") .DefaultConfigProfile("MyConfigProfileId"); // Check if feature flag is enabled var isFeatureFlagEnabled = await appConfigProvider .IsFeatureFlagEnabledAsync("MyFeatureFlag") .ConfigureAwait(false); if (isFeatureFlagEnabled) { // Retrieve an attribute value of the feature flag var strAttValue = await appConfigProvider .GetFeatureFlagAttributeValueAsync("MyFeatureFlag", "StringAttribute") .ConfigureAwait(false); // Retrieve another attribute value of the feature flag var numberAttValue = await appConfigProvider .GetFeatureFlagAttributeValueAsync("MyFeatureFlag", "NumberAttribute") .ConfigureAwait(false); } } } ``` ## Advanced configuration ### Caching By default, all parameters and their corresponding values are cached for 5 seconds. You can customize this default value using `DefaultMaxAge`. You can also customize this value for each parameter using `WithMaxAge`. If you'd like to always ensure you fetch the latest parameter from the store regardless if already available in cache, use `ForceFetch`. ``` using AWS.Lambda.Powertools.Parameters; using AWS.Lambda.Powertools.Parameters.SimpleSystemsManagement; public class Function { public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { // Get SSM Provider instance ISsmProvider ssmProvider = ParametersManager.SsmProvider .DefaultMaxAge(TimeSpan.FromSeconds(10)); // Retrieve a single parameter string? 
value = await ssmProvider .GetAsync("/my/parameter") .ConfigureAwait(false); } } ``` ``` using AWS.Lambda.Powertools.Parameters; using AWS.Lambda.Powertools.Parameters.SimpleSystemsManagement; public class Function { public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { // Get SSM Provider instance ISsmProvider ssmProvider = ParametersManager.SsmProvider; // Retrieve a single parameter string? value = await ssmProvider .WithMaxAge(TimeSpan.FromSeconds(10)) .GetAsync("/my/parameter") .ConfigureAwait(false); } } ``` ``` using AWS.Lambda.Powertools.Parameters; using AWS.Lambda.Powertools.Parameters.SimpleSystemsManagement; public class Function { public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { // Get SSM Provider instance ISsmProvider ssmProvider = ParametersManager.SsmProvider; // Retrieve a single parameter string? value = await ssmProvider .ForceFetch() .GetAsync("/my/parameter") .ConfigureAwait(false); } } ``` ### Transform values Parameter values can be transformed using `WithTransformation()`. Base64 and JSON transformations are provided. For more complex transformations, you need to specify how to deserialize by writing your own transformer.
``` using AWS.Lambda.Powertools.Parameters; using AWS.Lambda.Powertools.Parameters.SimpleSystemsManagement; public class Function { public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { // Get SSM Provider instance ISsmProvider ssmProvider = ParametersManager.SsmProvider; // Retrieve a single parameter var value = await ssmProvider .WithTransformation(Transformation.Json) .GetAsync("/my/parameter/json") .ConfigureAwait(false); } } ``` ``` using AWS.Lambda.Powertools.Parameters; using AWS.Lambda.Powertools.Parameters.SimpleSystemsManagement; public class Function { public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { // Get SSM Provider instance ISsmProvider ssmProvider = ParametersManager.SsmProvider; // Retrieve a single parameter var value = await ssmProvider .WithTransformation(Transformation.Base64) .GetAsync("/my/parameter/b64") .ConfigureAwait(false); } } ``` #### Partial transform failures with `GetMultiple()` If you use `Transformation` with `GetMultiple()`, a single malformed parameter value could fail the entire request. To prevent that, the method returns `null` for any parameter that fails to transform. You can override this behavior with `RaiseTransformationError()`; if you do so, a single transform error will raise a **`TransformationException`**.
``` using AWS.Lambda.Powertools.Parameters; using AWS.Lambda.Powertools.Parameters.SimpleSystemsManagement; public class Function { public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { // Get SSM Provider instance ISsmProvider ssmProvider = ParametersManager.SsmProvider .RaiseTransformationError(); // Retrieve a single parameter var value = await ssmProvider .WithTransformation(Transformation.Json) .GetAsync("/my/parameter/json") .ConfigureAwait(false); } } ``` #### Auto-transform values on suffix If you use `Transformation` with `GetMultiple()`, you might want to retrieve and transform parameters encoded in different formats. You can do this with a single request by using `Transformation.Auto`. This instructs the provider to infer each parameter's type based on its name suffix and transform it accordingly. ``` using AWS.Lambda.Powertools.Parameters; using AWS.Lambda.Powertools.Parameters.SimpleSystemsManagement; public class Function { public async Task FunctionHandler (APIGatewayProxyRequest apigProxyEvent, ILambdaContext context) { // Get SSM Provider instance ISsmProvider ssmProvider = ParametersManager.SsmProvider; // Retrieve multiple parameters from a path prefix // This returns a Dictionary with the parameter name as key IDictionary<string, string?> values = await ssmProvider .WithTransformation(Transformation.Auto) .GetMultipleAsync("/param") .ConfigureAwait(false); } } ``` For example, if you have two parameters with the following suffixes `.json` and `.binary`: | Parameter name | Parameter value | | --- | --- | | /param/a.json | [some encoded value] | | /param/b.binary | [some encoded value] | The return of `GetMultiple()` with `Transformation.Auto` will be a dictionary like: ``` { "a.json": [some value], "b.binary": [some value] } ``` ## Write your own Transformer You can write your own transformer by implementing the `ITransformer` interface and its `Transform<T>(string)` method.
For example, if you wish to deserialize XML into an object:

```
public class XmlTransformer : ITransformer
{
    public T? Transform<T>(string value)
    {
        if (string.IsNullOrEmpty(value))
            return default;

        var serializer = new XmlSerializer(typeof(T));
        using var reader = new StringReader(value);
        return (T?)serializer.Deserialize(reader);
    }
}
```

```
var value = await ssmProvider
    .WithTransformation(new XmlTransformer())
    .GetAsync("/my/parameter/xml")
    .ConfigureAwait(false);
```

```
// Get SSM Provider instance, registering the transformer under a name
ISsmProvider ssmProvider = ParametersManager.SsmProvider
    .AddTransformer("XML", new XmlTransformer());

// Retrieve a single parameter
var value = await ssmProvider
    .WithTransformation("XML")
    .GetAsync("/my/parameter/xml")
    .ConfigureAwait(false);
```

### Fluent API

To simplify the use of the library, you can chain all method calls before a get:

```
ssmProvider
    .DefaultMaxAge(TimeSpan.FromSeconds(10))  // will set 10 seconds as the default cache TTL
    .WithMaxAge(TimeSpan.FromMinutes(1))      // will set the cache TTL for this value at 1 minute
    .WithTransformation(Transformation.Json)  // will use the JSON transformer to deserialize JSON to an object
    .WithDecryption()                         // enable decryption of the parameter value
    .Get("/my/param");                        // finally, get the value
```

## Create your own provider

You can create your own custom parameter provider by inheriting the `ParameterProvider` base class and implementing `GetAsync(string, ParameterProviderConfiguration?)` and `GetMultipleAsync(string, ParameterProviderConfiguration?)` to retrieve data from your underlying store. All transformation and caching logic is handled by the base class.

```
public class S3Provider : ParameterProvider
{
    private string _bucket;
    private readonly IAmazonS3 _client;

    public S3Provider()
    {
        _client = new AmazonS3Client();
    }

    public S3Provider(IAmazonS3 client)
    {
        _client = client;
    }

    public S3Provider WithBucket(string bucket)
    {
        _bucket = bucket;
        return this;
    }

    protected override async Task<string> GetAsync(string key, ParameterProviderConfiguration? config)
    {
        if (string.IsNullOrEmpty(key))
            throw new ArgumentNullException(nameof(key));
        if (string.IsNullOrEmpty(_bucket))
            throw new ArgumentException("A bucket must be specified, using the WithBucket() method");

        var request = new GetObjectRequest
        {
            Key = key,
            BucketName = _bucket
        };

        using var response = await _client.GetObjectAsync(request);
        await using var responseStream = response.ResponseStream;
        using var reader = new StreamReader(responseStream);
        return await reader.ReadToEndAsync();
    }

    protected override async Task<IDictionary<string, string>> GetMultipleAsync(string path, ParameterProviderConfiguration? config)
    {
        if (string.IsNullOrEmpty(path))
            throw new ArgumentNullException(nameof(path));
        if (string.IsNullOrEmpty(_bucket))
            throw new ArgumentException("A bucket must be specified, using the WithBucket() method");

        var request = new ListObjectsV2Request
        {
            Prefix = path,
            BucketName = _bucket
        };

        var response = await _client.ListObjectsV2Async(request);

        var result = new Dictionary<string, string>();
        foreach (var s3Object in response.S3Objects)
        {
            var value = await GetAsync(s3Object.Key);
            result.Add(s3Object.Key, value);
        }

        return result;
    }
}
```

```
var provider = new S3Provider();

var value = await provider
    .WithBucket("myBucket")
    .GetAsync("myKey")
    .ConfigureAwait(false);
```

# Getting Started

# Getting Started with AWS Lambda Powertools for .NET Logger in Native AOT

This tutorial shows you how to set up an AWS Lambda project using Native AOT compilation with Powertools for .NET Logger, addressing performance, trimming, and deployment considerations.

## Prerequisites

- An AWS account with appropriate permissions
- A code editor (we'll use Visual Studio Code in this tutorial)
- .NET 8 SDK or later
- Docker (required for cross-platform AOT compilation)

## 1. Understanding Native AOT

Native AOT (Ahead-of-Time) compilation converts your .NET application directly to native code at build time, rather than compiling to IL (Intermediate Language) code that gets JIT-compiled at runtime.
Benefits for AWS Lambda include:

- Faster cold start times (typically 50-70% reduction)
- Lower memory footprint
- No runtime JIT compilation overhead
- No need to package the full .NET runtime with your Lambda

## 2. Installing Required Tools

First, ensure you have the .NET 8 SDK installed:

```
dotnet --version
```

Install the AWS Lambda .NET CLI tools:

```
dotnet tool install -g Amazon.Lambda.Tools
dotnet new install Amazon.Lambda.Templates
```

Verify installation:

```
dotnet lambda --help
```

## 3. Creating a Native AOT Lambda Project

Create a directory for your project:

```
mkdir powertools-aot-logger-demo
cd powertools-aot-logger-demo
```

Create a new Lambda project using the Native AOT template:

```
dotnet new lambda.NativeAOT -n PowertoolsAotLoggerDemo
cd PowertoolsAotLoggerDemo
```

## 4. Adding the Powertools Logger Package

Add the AWS.Lambda.Powertools.Logging package:

```
cd src/PowertoolsAotLoggerDemo
dotnet add package AWS.Lambda.Powertools.Logging
```

## 5. Implementing the Lambda Function with AOT-compatible Logger

Let's modify the Function.cs file to implement our function with Powertools Logger in an AOT-compatible way:

```
using Amazon.Lambda.Core;
using Amazon.Lambda.RuntimeSupport;
using Amazon.Lambda.Serialization.SystemTextJson;
using System.Text.Json.Serialization;
using System.Text.Json;
using AWS.Lambda.Powertools.Logging;
using Microsoft.Extensions.Logging;

namespace PowertoolsAotLoggerDemo;

public class Function
{
    private static ILogger _logger;

    private static async Task Main()
    {
        _logger = LoggerFactory.Create(builder =>
        {
            builder.AddPowertoolsLogger(config =>
            {
                config.Service = "TestService";
                config.LoggerOutputCase = LoggerOutputCase.PascalCase;
                config.JsonOptions = new JsonSerializerOptions
                {
                    TypeInfoResolver = LambdaFunctionJsonSerializerContext.Default
                };
            });
        }).CreatePowertoolsLogger();

        Func<string, ILambdaContext, string> handler = FunctionHandler;
        await LambdaBootstrapBuilder.Create(handler,
                new SourceGeneratorLambdaJsonSerializer<LambdaFunctionJsonSerializerContext>())
            .Build()
            .RunAsync();
    }

    public static string FunctionHandler(string input, ILambdaContext context)
    {
        _logger.LogInformation("Processing input: {Input}", input);
        _logger.LogInformation("Processing context: {@Context}", context);

        return input.ToUpper();
    }
}

[JsonSerializable(typeof(string))]
[JsonSerializable(typeof(ILambdaContext))] // make sure to include ILambdaContext for serialization
public partial class LambdaFunctionJsonSerializerContext : JsonSerializerContext
{
}
```

## 6. Updating the Project File for AOT Compatibility

A project file generated from the Native AOT Lambda template typically sets properties along these lines:

```
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
    <AWSProjectType>Lambda</AWSProjectType>
    <AssemblyName>bootstrap</AssemblyName>
    <PublishAot>true</PublishAot>
    <StripSymbols>true</StripSymbols>
    <TrimMode>full</TrimMode>
    <OptimizationPreference>Size</OptimizationPreference>
  </PropertyGroup>
</Project>
```

## 8. Cross-Platform Deployment Considerations

Native AOT compilation must target the same OS and architecture as the deployment environment. AWS Lambda runs on Amazon Linux 2023 (AL2023) with x64 architecture.

### Building for AL2023 on Different Platforms

#### Option A: Using the AWS Lambda .NET Tool with Docker

The simplest approach is to use the AWS Lambda .NET tool, which handles the cross-platform compilation:

```
dotnet lambda deploy-function --function-name powertools-aot-logger-demo --function-role your-lambda-role-arn
```

This will:

1. Detect your project is using Native AOT
1. Use Docker behind the scenes to compile for Amazon Linux
1.
Deploy the resulting function #### Option B: Using Docker Directly Alternatively, you can use Docker directly for more control: ##### On macOS/Linux: ``` # Create a build container using Amazon's provided image docker run --rm -v $(pwd):/workspace -w /workspace public.ecr.aws/sam/build-dotnet8:latest-x86_64 \ bash -c "cd src/PowertoolsAotLoggerDemo && dotnet publish -c Release -r linux-x64 -o publish" # Deploy using the AWS CLI cd src/PowertoolsAotLoggerDemo/publish zip -r function.zip * aws lambda create-function \ --function-name powertools-aot-logger-demo \ --runtime provided.al2023 \ --handler bootstrap \ --role arn:aws:iam::123456789012:role/your-lambda-role \ --zip-file fileb://function.zip ``` ##### On Windows: ``` # Create a build container using Amazon's provided image docker run --rm -v ${PWD}:/workspace -w /workspace public.ecr.aws/sam/build-dotnet8:latest-x86_64 ` bash -c "cd src/PowertoolsAotLoggerDemo && dotnet publish -c Release -r linux-x64 -o publish" # Deploy using the AWS CLI cd src\PowertoolsAotLoggerDemo\publish Compress-Archive -Path * -DestinationPath function.zip -Force aws lambda create-function ` --function-name powertools-aot-logger-demo ` --runtime provided.al2023 ` --handler bootstrap ` --role arn:aws:iam::123456789012:role/your-lambda-role ` --zip-file fileb://function.zip ``` ## 9. 
Testing the Function Test your Lambda function using the AWS CLI: ``` aws lambda invoke --function-name powertools-aot-logger-demo --payload '{"name":"PowertoolsAOT"}' response.json cat response.json ``` You should see a response like: ``` { "Level": "Information", "Message": "test", "Timestamp": "2025-05-06T09:52:19.8222787Z", "Service": "TestService", "ColdStart": true, "XrayTraceId": "1-6819dbd3-0de6dc4b6cc712b020ee8ae7", "Name": "AWS.Lambda.Powertools.Logging.Logger" } { "Level": "Information", "Message": "Processing context: Amazon.Lambda.RuntimeSupport.LambdaContext", "Timestamp": "2025-05-06T09:52:19.8232664Z", "Service": "TestService", "ColdStart": true, "XrayTraceId": "1-6819dbd3-0de6dc4b6cc712b020ee8ae7", "Name": "AWS.Lambda.Powertools.Logging.Logger", "Context": { "AwsRequestId": "20f8da57-002b-426d-84c2-c295e4797e23", "ClientContext": { "Environment": null, "Client": null, "Custom": null }, "FunctionName": "powertools-aot-logger-demo", "FunctionVersion": "$LATEST", "Identity": { "IdentityId": null, "IdentityPoolId": null }, "InvokedFunctionArn": "your arn", "Logger": {}, "LogGroupName": "/aws/lambda/powertools-aot-logger-demo", "LogStreamName": "2025/05/06/[$LATEST]71249d02013b42b9b044b42dd4c7c37a", "MemoryLimitInMB": 512, "RemainingTime": "00:00:29.9972216" } } ``` Check the logs in CloudWatch Logs to see the structured logs created by Powertools Logger. ## 10. Performance Considerations and Best Practices ### Trimming Considerations Native AOT uses aggressive trimming, which can cause issues with reflection-based code. Here are tips to avoid common problems: 1. **Using DynamicJsonSerializer**: If you're encountering trimming issues with JSON serialization, add a trimming hint: ``` [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicConstructors | DynamicallyAccessedMemberTypes.PublicFields | DynamicallyAccessedMemberTypes.PublicProperties)] public class MyRequestType { // Properties that will be preserved during trimming } ``` 1. 
**Logging Objects**: When logging objects with structural logging, consider creating simple DTOs instead of complex types: ``` // Instead of logging complex domain objects: Logger.LogInformation("User: {@user}", complexUserWithCircularReferences); // Create a simple loggable DTO: var userInfo = new { Id = user.Id, Name = user.Name, Status = user.Status }; Logger.LogInformation("User: {@userInfo}", userInfo); ``` 1. **Handling Reflection**: If you need reflection, explicitly preserve types: ``` ``` And in TrimmerRoots.xml: ``` ``` ### Lambda Configuration Best Practices 1. **Memory Settings**: Native AOT functions typically need less memory: ``` aws lambda update-function-configuration \ --function-name powertools-aot-logger-demo \ --memory-size 512 ``` 1. **Environment Variables**: Set the AWS_LAMBDA_DOTNET_PREJIT environment variable to 0 (it's not needed for AOT): ``` aws lambda update-function-configuration \ --function-name powertools-aot-logger-demo \ --environment Variables={AWS_LAMBDA_DOTNET_PREJIT=0} ``` 1. **ARM64 Support**: For even better performance, consider using ARM64 architecture: When creating your project: ``` dotnet new lambda.NativeAOT -n PowertoolsAotLoggerDemo --architecture arm64 ``` Or modify your deployment: ``` aws lambda update-function-configuration \ --function-name powertools-aot-logger-demo \ --architectures arm64 ``` ### Monitoring Cold Start Performance The Powertools Logger automatically logs cold start information. Use CloudWatch Logs Insights to analyze performance: ``` fields @timestamp, coldStart, billedDurationMs, maxMemoryUsedMB | filter functionName = "powertools-aot-logger-demo" | sort @timestamp desc | limit 100 ``` ## 11. 
Troubleshooting Common AOT Issues ### Missing Type Metadata If you see errors about missing metadata, you may need to add more types to your trimmer roots: ``` ``` ### Build Failures on macOS/Windows If you're building directly on macOS/Windows without Docker and encountering errors, remember that Native AOT is platform-specific. Always use the cross-platform build options mentioned earlier. ## Summary In this tutorial, you've learned: 1. How to set up a .NET Native AOT Lambda project with Powertools Logger 1. How to handle trimming concerns and ensure compatibility 1. Cross-platform build and deployment strategies for Amazon Linux 2023 1. Performance optimization techniques specific to AOT lambdas Native AOT combined with Powertools Logger gives you the best of both worlds: high-performance, low-latency Lambda functions with rich, structured logging capabilities. Next Steps Explore using the Embedded Metrics Format (EMF) with your Native AOT Lambda functions for enhanced observability, or try implementing Powertools Tracing in your Native AOT functions. # Getting Started with AWS Lambda Powertools for .NET Logger in ASP.NET Core Minimal APIs This tutorial shows you how to set up an ASP.NET Core Minimal API project with AWS Lambda Powertools for .NET Logger - covering installation of required tools through deployment and advanced logging features. ## Prerequisites - An AWS account with appropriate permissions - A code editor (we'll use Visual Studio Code in this tutorial) - .NET 8 SDK or later ## 1. Installing Required Tools First, ensure you have the .NET SDK installed. If not, you can download it from the [.NET download page](https://dotnet.microsoft.com/download/dotnet). ``` dotnet --version ``` You should see output like `8.0.100` or similar. Next, install the AWS Lambda .NET CLI tools: ``` dotnet tool install -g Amazon.Lambda.Tools dotnet new install Amazon.Lambda.Templates ``` Verify installation: ``` dotnet lambda --help ``` ## 2. 
Setting up AWS CLI credentials Ensure your AWS credentials are configured: ``` aws configure ``` Enter your AWS Access Key ID, Secret Access Key, default region, and output format. ## 3. Creating a New ASP.NET Core Minimal API Lambda Project Create a directory for your project: ``` mkdir powertools-aspnet-logger-demo cd powertools-aspnet-logger-demo ``` Create a new ASP.NET Minimal API project using the AWS Lambda template: ``` dotnet new serverless.AspNetCoreMinimalAPI --name PowertoolsAspNetLoggerDemo cd PowertoolsAspNetLoggerDemo/src/PowertoolsAspNetLoggerDemo ``` ## 4. Adding the Powertools Logger Package Add the AWS.Lambda.Powertools.Logging package: ``` dotnet add package AWS.Lambda.Powertools.Logging ``` ## 5. Implementing the Minimal API with Powertools Logger Let's modify the Program.cs file to implement our Minimal API with Powertools Logger: ``` using Microsoft.Extensions.Logging; using AWS.Lambda.Powertools.Logging; var builder = WebApplication.CreateBuilder(args); // Configure AWS Lambda // This is what connects the Events from API Gateway to the ASP.NET Core pipeline // In this case we are using HttpApi builder.Services.AddAWSLambdaHosting(LambdaEventSource.HttpApi); // Add Powertools Logger var logger = LoggerFactory.Create(builder => { builder.AddPowertoolsLogger(config => { config.Service = "powertools-aspnet-demo"; config.MinimumLogLevel = LogLevel.Debug; config.LoggerOutputCase = LoggerOutputCase.CamelCase; config.TimestampFormat = "yyyy-MM-dd HH:mm:ss.fff"; }); }).CreatePowertoolsLogger(); var app = builder.Build(); app.MapGet("/", () => { logger.LogInformation("Processing root request"); return "Hello from Powertools ASP.NET Core Minimal API!"; }); app.MapGet("/users/{id}", (string id) => { logger.LogInformation("Getting user with ID: {userId}", id); // Log a structured object var user = new User { Id = id, Name = "John Doe", Email = "john.doe@example.com" }; logger.LogDebug("User details: {@user}", user); return Results.Ok(user); }); 
app.Run();

// Simple user class for demonstration
public class User
{
    public string? Id { get; set; }
    public string? Name { get; set; }
    public string? Email { get; set; }

    public override string ToString()
    {
        return $"{Name} ({Id})";
    }
}
```

## 6. Understanding the LoggerFactory Setup

Let's examine the key parts of how we've set up the logger:

```
var logger = LoggerFactory.Create(builder =>
{
    builder.AddPowertoolsLogger(config =>
    {
        config.Service = "powertools-aspnet-demo";
        config.MinimumLogLevel = LogLevel.Debug;
        config.LoggerOutputCase = LoggerOutputCase.CamelCase;
        config.TimestampFormat = "yyyy-MM-dd HH:mm:ss.fff";
    });
}).CreatePowertoolsLogger();
```

This setup:

1. Creates a new `LoggerFactory` instance
1. Adds the Powertools Logger provider to the factory
1. Configures the logger with:
    1. Service name that appears in all logs
    1. Minimum logging level set to Debug
    1. CamelCase output format for JSON properties
    1. A custom timestamp format
1. Creates a Powertools logger instance from the factory

## 7. Building and Deploying the Lambda Function

Build your function:

```
dotnet build
```

Deploy the function using the AWS Lambda CLI tools. We started from a serverless template, but we are going to deploy just the Lambda function, without an API Gateway in front of it.

First update the `aws-lambda-tools-defaults.json` file with your details:

```
{
  "Information": [],
  "profile": "",
  "region": "",
  "configuration": "Release",
  "function-runtime": "dotnet8",
  "function-memory-size": 512,
  "function-timeout": 30,
  "function-handler": "PowertoolsAspNetLoggerDemo",
  "function-role": "arn:aws:iam::123456789012:role/my-role",
  "function-name": "PowertoolsAspNetLoggerDemo"
}
```

IAM Role

Make sure to replace the `function-role` with the ARN of an IAM role that has permissions to write logs to CloudWatch.

Info

As you can see the function-handler is set to `PowertoolsAspNetLoggerDemo`, which is the name of the project.
This example template uses [Executable assembly handlers](https://docs.aws.amazon.com/lambda/latest/dg/csharp-handler.html#csharp-executable-assembly-handlers), which use the assembly name as the handler.

Then deploy the function:

```
dotnet lambda deploy-function
```

Follow the prompts to complete the deployment.

## 8. Testing the Function

Test your Lambda function using the AWS CLI. The following command simulates an API Gateway payload; more information can be found in the [AWS Lambda documentation](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop-integrations-lambda.html).

```
dotnet lambda invoke-function PowertoolsAspNetLoggerDemo --payload '{ "requestContext": { "http": { "method": "GET", "path": "/" } } }'
```

You should see a response and the logs in JSON format.

```
Payload:
{
  "statusCode": 200,
  "headers": {
    "Content-Type": "text/plain; charset=utf-8"
  },
  "body": "Hello from Powertools ASP.NET Core Minimal API!",
  "isBase64Encoded": false
}

Log Tail:
START RequestId: cf670319-d9c4-4005-aebc-3afd08ae01e0 Version: $LATEST
warn: Amazon.Lambda.AspNetCoreServer.AbstractAspNetCoreFunction[0]
      Request does not contain domain name information but is derived from APIGatewayProxyFunction.
{
  "level": "Information",
  "message": "Processing root request",
  "timestamp": "2025-04-23T18:02:54.9014083Z",
  "service": "powertools-aspnet-demo",
  "coldStart": true,
  "xrayTraceId": "1-68092b4e-352be5201ea5b15b23854c44",
  "name": "AWS.Lambda.Powertools.Logging.Logger"
}
END RequestId: cf670319-d9c4-4005-aebc-3afd08ae01e0
```

## 9. Advanced Logging Features

Now that we have basic logging set up, let's explore some advanced features of Powertools Logger.
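The sections below exercise different routes (`/`, `/users/{id}`, `/products`). To avoid hand-writing the API Gateway envelope for every test, a small helper can generate the events — a sketch only; `make_event` is a hypothetical convenience function, not part of Powertools or the Lambda CLI:

```shell
# Build an API Gateway HTTP API-style test event for a given method and path,
# matching the payload shape used with `dotnet lambda invoke-function` above.
make_event() {
  printf '{ "requestContext": { "http": { "method": "%s", "path": "%s" } } }' "$1" "$2"
}

make_event GET /users/42
# prints: { "requestContext": { "http": { "method": "GET", "path": "/users/42" } } }
```

It can then be combined with the Lambda CLI, e.g. `dotnet lambda invoke-function PowertoolsAspNetLoggerDemo --payload "$(make_event GET /users/42)"`.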
### Adding Context with AppendKey

You can add custom keys to all subsequent log messages:

```
app.MapGet("/users/{id}", (string id) =>
{
    // Add context to all subsequent logs
    Logger.AppendKey("userId", id);
    Logger.AppendKey("source", "users-api");

    logger.LogInformation("Getting user with ID: {id}", id);

    // Log a structured object
    var user = new User
    {
        Id = id,
        Name = "John Doe",
        Email = "john.doe@example.com"
    };

    logger.LogInformation("User details: {@user}", user);

    return Results.Ok(user);
});
```

This will add `userId` and `source` to all logs generated in this request context, producing output like:

```
Payload:
{
  "statusCode": 200,
  "headers": {
    "Content-Type": "application/json; charset=utf-8"
  },
  "body": "{\"id\":\"1\",\"name\":\"John Doe\",\"email\":\"john.doe@example.com\"}",
  "isBase64Encoded": false
}

Log Tail:
{
  "level": "Information",
  "message": "Getting user with ID: 1",
  "timestamp": "2025-04-23T18:21:28.5314300Z",
  "service": "powertools-aspnet-demo",
  "coldStart": true,
  "xrayTraceId": "1-68092fa7-64f070f7329650563b7501fe",
  "name": "AWS.Lambda.Powertools.Logging.Logger",
  "userId": "1",
  "source": "users-api"
}
{
  "level": "Information",
  "message": "User details: John Doe (1)",
  "timestamp": "2025-04-23T18:21:28.6491316Z",
  "service": "powertools-aspnet-demo",
  "coldStart": true,
  "xrayTraceId": "1-68092fa7-64f070f7329650563b7501fe",
  "name": "AWS.Lambda.Powertools.Logging.Logger",
  "userId": "1",
  "source": "users-api",
  "user": { // User object logged
    "id": "1",
    "name": "John Doe",
    "email": "john.doe@example.com"
  }
}
```

### Customizing Log Output

You can customize the log output format:

```
builder.AddPowertoolsLogger(config =>
{
    config.Service = "powertools-aspnet-demo";
    config.LoggerOutputCase = LoggerOutputCase.SnakeCase; // Change to snake_case
    config.TimestampFormat = "yyyy-MM-dd HH:mm:ss"; // Custom timestamp format
});
```

### Log Sampling for Debugging

When you need more detailed logs for a percentage of requests:

```
// In your logger factory setup
builder.AddPowertoolsLogger(config =>
{
    config.Service = "powertools-aspnet-demo";
    config.MinimumLogLevel = LogLevel.Information; // Normal level
    config.SamplingRate = 0.1; // 10% of requests will log at Debug level
});
```

### Structured Logging

Powertools Logger provides excellent support for structured logging:

```
app.MapPost("/products", (Product product) =>
{
    logger.LogInformation("Creating new product: {productName}", product.Name);

    // Log the entire object with all properties
    logger.LogDebug("Product details: {@product}", product);

    // Log the ToString() of the object
    logger.LogDebug("Product details: {product}", product);

    return Results.Created($"/products/{product.Id}", product);
});

public class Product
{
    public string Id { get; set; } = Guid.NewGuid().ToString();
    public string Name { get; set; } = string.Empty;
    public decimal Price { get; set; }
    public string Category { get; set; } = string.Empty;

    public override string ToString()
    {
        return $"{Name} ({Id}) - {Category}: {Price:C}";
    }
}
```

### Using Log Buffering

For high-throughput applications, you can buffer lower-level logs and only flush them when needed:

```
var logger = LoggerFactory.Create(builder =>
{
    builder.AddPowertoolsLogger(config =>
    {
        config.Service = "powertools-aspnet-demo";
        config.LogBuffering = new LogBufferingOptions
        {
            BufferAtLogLevel = LogLevel.Debug,
            FlushOnErrorLog = true
        };
    });
}).CreatePowertoolsLogger();

// Usage example
app.MapGet("/process", () =>
{
    logger.LogDebug("Debug log 1"); // Buffered
    logger.LogDebug("Debug log 2"); // Buffered

    try
    {
        // Business logic that might fail
        throw new Exception("Something went wrong");
    }
    catch (Exception ex)
    {
        // This will also flush all buffered logs
        logger.LogError(ex, "An error occurred");
        return Results.Problem("Processing failed");
    }

    // Manual flushing option
    // Logger.FlushBuffer();
    return Results.Ok("Processed successfully");
});
```

### Correlation IDs

For tracking requests across multiple services:

```
app.Use(async (context, next) =>
{
    // Extract correlation ID from headers
    if (context.Request.Headers.TryGetValue("X-Correlation-ID", out var correlationId))
    {
        Logger.AppendKey("correlationId", correlationId.ToString());
    }

    await next();
});
```

## 10. Best Practices for ASP.NET Minimal API Logging

### Register Logger as a Singleton

For better performance, you can register the Powertools Logger as a singleton:

```
// In Program.cs
builder.Services.AddSingleton(sp =>
{
    return LoggerFactory.Create(builder =>
    {
        builder.AddPowertoolsLogger(config =>
        {
            config.Service = "powertools-aspnet-demo";
        });
    }).CreatePowertoolsLogger();
});

// Then inject it in your handlers
app.MapGet("/example", (ILogger logger) =>
{
    logger.LogInformation("Using injected logger");
    return "Example with injected logger";
});
```

## 11. Viewing and Analyzing Logs

After deploying your Lambda function, you can view the logs in AWS CloudWatch Logs. The structured JSON format makes it easy to search and analyze logs. Here's an example of what your logs will look like:

```
{
  "level": "Information",
  "message": "Getting user with ID: 123",
  "timestamp": "2023-04-15 14:23:45.123",
  "service": "powertools-aspnet-demo",
  "coldStart": true,
  "functionName": "PowertoolsAspNetLoggerDemo",
  "functionMemorySize": 256,
  "functionArn": "arn:aws:lambda:us-east-1:123456789012:function:PowertoolsAspNetLoggerDemo",
  "functionRequestId": "a1b2c3d4-e5f6-g7h8-i9j0-k1l2m3n4o5p6",
  "userId": "123"
}
```

## Summary

In this tutorial, you've learned:

1. How to set up ASP.NET Core Minimal API with AWS Lambda
1. How to integrate Powertools Logger using the LoggerFactory approach
1. How to configure and customize the logger
1. Advanced logging features like structured logging, correlation IDs, and log buffering
1. Best practices for using the logger in an ASP.NET Core application

Powertools for AWS Lambda Logger provides structured logging that makes it easier to search, analyze, and monitor your Lambda functions, and integrates seamlessly with ASP.NET Core Minimal APIs.

Next Steps

Explore integrating Powertools Tracing and Metrics with your ASP.NET Core Minimal API to gain even more observability insights.

# Getting Started with AWS Lambda Powertools for .NET Logger

This tutorial shows you how to set up a new AWS Lambda project with Powertools for .NET Logger from scratch - covering the installation of required tools through to deployment.

## Prerequisites

- An AWS account with appropriate permissions
- A code editor (we'll use Visual Studio Code in this tutorial)

## 1. Installing .NET SDK

First, let's download and install the .NET SDK. You can find the latest version on the [.NET download page](https://dotnet.microsoft.com/download/dotnet). Make sure to install the latest version of the .NET SDK (8.0 or later).

Verify installation:

```
dotnet --version
```

You should see output like `8.0.100` or similar (the version number may vary).

## 2. Installing AWS Lambda Tools for .NET CLI

Install the AWS Lambda .NET CLI tools:

```
dotnet tool install -g Amazon.Lambda.Tools
dotnet new install Amazon.Lambda.Templates
```

Verify installation:

```
dotnet lambda --help
```

You should see AWS Lambda CLI command help displayed.

## 3. Setting up AWS CLI credentials

Ensure your AWS credentials are configured:

```
aws configure
```

Enter your AWS Access Key ID, Secret Access Key, default region, and output format.

## 4. Creating a New Lambda Project

Create a directory for your project:

```
mkdir powertools-logger-demo
cd powertools-logger-demo
```

Create a new Lambda project using the AWS Lambda template:

```
dotnet new lambda.EmptyFunction --name PowertoolsLoggerDemo
cd PowertoolsLoggerDemo/src/PowertoolsLoggerDemo
```

## 5. Adding the Powertools Logger Package

Add the AWS.Lambda.Powertools.Logging and Amazon.Lambda.APIGatewayEvents packages:

```
dotnet add package AWS.Lambda.Powertools.Logging
dotnet add package Amazon.Lambda.APIGatewayEvents
```

## 6. Implementing the Lambda Function with Logger

Let's modify the Function.cs file to implement our function with Powertools Logger:

```
using System.Net;
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;
using AWS.Lambda.Powertools.Logging;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace PowertoolsLoggerDemo
{
    public class Function
    {
        /// <summary>
        /// A simple function that returns a greeting
        /// </summary>
        /// <param name="request">API Gateway request object</param>
        /// <param name="context">Lambda context</param>
        /// <returns>API Gateway response object</returns>
        [Logging(Service = "greeting-service", LogLevel = Microsoft.Extensions.Logging.LogLevel.Information)]
        public async Task<APIGatewayProxyResponse> FunctionHandler(APIGatewayProxyRequest request, ILambdaContext context)
        {
            // you can {@} serialize objects to log them
            Logger.LogInformation("Processing request {@request}", request);

            // You can append additional keys to your logs
            Logger.AppendKey("QueryString", request.QueryStringParameters);

            // Simulate processing
            string name = "World";
            if (request.QueryStringParameters != null && request.QueryStringParameters.ContainsKey("name"))
            {
                name = request.QueryStringParameters["name"];
                Logger.LogInformation("Custom name provided: {name}", name);
            }
            else
            {
                Logger.LogInformation("Using default name");
            }

            // Create response
            var response = new APIGatewayProxyResponse
            {
                StatusCode = (int)HttpStatusCode.OK,
                Body = $"Hello, {name}!",
                Headers = new Dictionary<string, string> { { "Content-Type", "text/plain" } }
            };

            Logger.LogInformation("Response successfully created");
            return response;
        }
    }
}
```

## 7. Configuring the Lambda Project

Let's update the aws-lambda-tools-defaults.json file with specific settings:

```
{
  "profile": "",
  "region": "",
  "configuration": "Release",
  "function-runtime": "dotnet8",
  "function-memory-size": 512,
  "function-timeout": 30,
  "function-handler": "PowertoolsLoggerDemo::PowertoolsLoggerDemo.Function::FunctionHandler",
  "function-name": "powertools-logger-demo",
  "function-role": "arn:aws:iam::123456789012:role/your_role_here"
}
```

## 8. Understanding Powertools Logger Features

Let's examine some of the key features we've implemented:

### Service Attribute

The `[Logging]` attribute configures the logger for our Lambda function:

```
[Logging(Service = "greeting-service", LogLevel = Microsoft.Extensions.Logging.LogLevel.Information)]
```

This sets:

- The service name that will appear in all logs
- The minimum logging level

### Structured Logging

Powertools Logger supports structured logging with named placeholders:

```
Logger.LogInformation("Processing request {@request}", request);
```

This creates structured logs where `request` becomes a separate field in the JSON log output.

### Additional Context

You can add custom fields to all subsequent logs:

```
Logger.AppendKey("QueryString", request.QueryStringParameters);
```

This adds a `QueryString` field populated from the request's QueryStringParameters property. The value can be an object, as in this example, or a simple value type.

## 9. Building and Deploying the Lambda Function

Build your function:

```
dotnet build
```

Deploy the function using the AWS Lambda CLI tools:

```
dotnet lambda deploy-function
```

The tool will use the settings from aws-lambda-tools-defaults.json. If prompted, confirm the deployment settings.

## 10. Testing the Function

Test your Lambda function using the AWS Lambda CLI tools, passing `Powertools` as the `name` query string parameter. You should see `Hello, Powertools!` in the response body, followed by the logs in JSON format.
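The invoke step can be sketched as follows — a hypothetical example assuming the `powertools-logger-demo` function name from aws-lambda-tools-defaults.json; the payload mimics an API Gateway request carrying a `name` query string parameter:

```shell
# API Gateway-style test payload with a "name" query string parameter.
payload='{"queryStringParameters": {"name": "Powertools"}}'

# Validate the JSON locally before invoking (exits non-zero on malformed JSON).
echo "$payload" | python3 -m json.tool

# Invoke the deployed function (requires AWS credentials and the function
# deployed in the previous step):
# dotnet lambda invoke-function powertools-logger-demo --payload "$payload"
```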
```
Payload:
{"statusCode":200,"headers":{"Content-Type":"text/plain"},"body":"Hello, Powertools!","isBase64Encoded":false}

Log Tail:
{"level":"Information","message":"Processing request Amazon.Lambda.APIGatewayEvents.APIGatewayProxyRequest","timestamp":"2025-04-23T15:16:42.7473327Z","service":"greeting-service","cold_start":true,"function_name":"powertools-logger-demo","function_memory_size":512,"function_arn":"","function_request_id":"93f07a79-6146-4ed2-80d3-c0a06a5739e0","function_version":"$LATEST","xray_trace_id":"1-68090459-2c2aa3377cdaa9476348236a","name":"AWS.Lambda.Powertools.Logging.Logger","request":{"resource":null,"path":null,"http_method":null,"headers":null,"multi_value_headers":null,"query_string_parameters":{"name":"Powertools"},"multi_value_query_string_parameters":null,"path_parameters":null,"stage_variables":null,"request_context":null,"body":null,"is_base64_encoded":false}}
{"level":"Information","message":"Custom name provided: Powertools","timestamp":"2025-04-23T15:16:42.9064561Z","service":"greeting-service","cold_start":true,"function_name":"powertools-logger-demo","function_memory_size":512,"function_arn":"","function_request_id":"93f07a79-6146-4ed2-80d3-c0a06a5739e0","function_version":"$LATEST","xray_trace_id":"1-68090459-2c2aa3377cdaa9476348236a","name":"AWS.Lambda.Powertools.Logging.Logger","query_string":{"name":"Powertools"}}
{"level":"Information","message":"Response successfully created","timestamp":"2025-04-23T15:16:42.9082709Z","service":"greeting-service","cold_start":true,"function_name":"powertools-logger-demo","function_memory_size":512,"function_arn":"","function_request_id":"93f07a79-6146-4ed2-80d3-c0a06a5739e0","function_version":"$LATEST","xray_trace_id":"1-68090459-2c2aa3377cdaa9476348236a","name":"AWS.Lambda.Powertools.Logging.Logger","query_string":{"name":"Powertools"}}
END RequestId: 98e69b78-f544-4928-914f-6c0902ac8678
REPORT RequestId: 98e69b78-f544-4928-914f-6c0902ac8678 Duration: 547.66 ms Billed Duration: 548 ms Memory Size: 512 MB Max Memory Used: 81 MB Init Duration: 278.70 ms
```

## 11. Checking the Logs

Visit the AWS CloudWatch console to see your structured logs. You'll notice:

- JSON-formatted logs with consistent structure
- Service name "greeting-service" in all logs
- Additional fields like "query_string"
- Cold start information automatically included
- Lambda context information (function name, memory, etc.)

Here's an example of what your logs will look like:

```
{
  "level": "Information",
  "message": "Processing request Amazon.Lambda.APIGatewayEvents.APIGatewayProxyRequest",
  "timestamp": "2025-04-23T15:16:42.7473327Z",
  "service": "greeting-service",
  "cold_start": true,
  "function_name": "powertools-logger-demo",
  "function_memory_size": 512,
  "function_arn": "",
  "function_request_id": "93f07a79-6146-4ed2-80d3-c0a06a5739e0",
  "function_version": "$LATEST",
  "xray_trace_id": "1-68090459-2c2aa3377cdaa9476348236a",
  "name": "AWS.Lambda.Powertools.Logging.Logger",
  "request": {
    "resource": null,
    "path": null,
    "http_method": null,
    "headers": null,
    "multi_value_headers": null,
    "query_string_parameters": {
      "name": "Powertools"
    },
    "multi_value_query_string_parameters": null,
    "path_parameters": null,
    "stage_variables": null,
    "request_context": null,
    "body": null,
    "is_base64_encoded": false
  }
}
{
  "level": "Information",
  "message": "Response successfully created",
  "timestamp": "2025-04-23T15:16:42.9082709Z",
  "service": "greeting-service",
  "cold_start": true,
  "function_name": "powertools-logger-demo",
  "function_memory_size": 512,
  "function_arn": "",
  "function_request_id": "93f07a79-6146-4ed2-80d3-c0a06a5739e0",
  "function_version": "$LATEST",
  "xray_trace_id": "1-68090459-2c2aa3377cdaa9476348236a",
  "name": "AWS.Lambda.Powertools.Logging.Logger",
  "query_string": {
    "name": "Powertools"
  }
}
```

## Advanced Logger Features

### Correlation IDs

Track requests across services by extracting correlation IDs:

```
[Logging(CorrelationIdPath = "/headers/x-correlation-id")]
```

### Customizing Log Output Format

You can change the casing style of the logs:

```
[Logging(LoggerOutputCase = LoggerOutputCase.CamelCase)]
```

Options include `CamelCase`, `PascalCase`, and `SnakeCase` (default).

## Summary

In this tutorial, you've:

1. Installed the .NET SDK and AWS Lambda tools
1. Created a new Lambda project
1. Added and configured Powertools Logger
1. Deployed and tested your function

Powertools for AWS Lambda Logger provides structured logging that makes it easier to search, analyze, and monitor your Lambda functions. The key benefits are:

- JSON-formatted logs for better machine readability
- Consistent structure across all logs
- Automatic inclusion of Lambda context information
- Ability to add custom fields for better context
- Integration with AWS CloudWatch for centralized log management

Next Steps

Explore more advanced features like custom log formatters, log buffering, and integration with other Powertools utilities like Tracing and Metrics.