REST API
Feature status
This feature is under active development and may undergo significant changes. We recommend using it in non-critical workloads and providing feedback to help us improve it.
Event handler for Amazon API Gateway REST and HTTP APIs, Application Load Balancer (ALB), Lambda Function URLs, and VPC Lattice.
Key Features¶
- Lightweight routing to reduce boilerplate for API Gateway REST/HTTP API, ALB, and Lambda Function URLs.
- Built-in middleware engine for request/response transformation and validation.
- Works with micro functions (one or a few routes) and monolithic functions (all routes).
Getting started¶
Install¶
This is not necessary if you're installing Powertools for AWS Lambda (TypeScript) via Lambda layer.
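Assuming the package follows the standard Powertools for AWS Lambda (TypeScript) naming scheme, installation via npm would look like the following; verify the package name against the official release, since this feature is under active development:

```shell
npm install @aws-lambda-powertools/event-handler
```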
Required resources¶
If you're using any API Gateway integration, you must have an existing API Gateway Proxy integration or ALB configured to invoke your Lambda function.
If you're using VPC Lattice, you must have a service network configured to invoke your Lambda function.
This is the sample infrastructure for API Gateway and Lambda Function URLs we use for the examples in this documentation. There are no additional permissions or dependencies required to use this utility.
See Infrastructure as Code (IaC) examples
AWS Serverless Application Model (SAM) example
Route events¶
Before you start defining your routes, it's important to understand how the event handler works with different types of events. The event handler can process events from API Gateway REST APIs, and will soon support HTTP APIs, ALB, Lambda Function URLs, and VPC Lattice as well.
When a request is received, the event handler automatically converts the event into a `Request` object and gives you access to the current request context, including headers, query parameters, and request body, as well as path parameters via typed arguments.
Response auto-serialization¶
Want full control over the response, headers, and status code? Read about it in the Fine grained responses section.
For your convenience, when you return a JavaScript object from your route handler, we automatically perform these actions:
- Auto-serialize the response to JSON and trim whitespace
- Include the serialized response under the appropriate equivalent of a `body` key
- Set the `Content-Type` header to `application/json`
- Set the HTTP status code to 200 (OK)
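Conceptually, the auto-serialization behaves like the following sketch; the function and type names are hypothetical illustrations, not the library's internals:

```typescript
// Illustrative sketch of the auto-serialization behavior; the function and
// type names are hypothetical, not the library's internals.
type SerializedResponse = {
  statusCode: number;
  headers: Record<string, string>;
  body: string;
};

function autoSerialize(result: unknown): SerializedResponse {
  return {
    statusCode: 200, // OK
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(result), // compact JSON, no extra whitespace
  };
}

const res = autoSerialize({ id: 1, completed: false });
// res.body is '{"id":1,"completed":false}'
```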
This object will be serialized, trimmed, and included under the `body` key.
Dynamic routes¶
You can use `/todos/:todoId` to configure dynamic URL paths, where `:todoId` will be resolved at runtime.
All dynamic route parameters will be available as typed object properties in the first argument of your route handler.
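To illustrate how dynamic segments resolve, here is a simplified matcher (not the library's implementation) that extracts `:todoId`-style parameters from a path:

```typescript
// Simplified illustration of dynamic path resolution; the real router's
// matching logic may differ.
function matchRoute(
  pattern: string,
  path: string
): Record<string, string> | null {
  const patternParts = pattern.split('/');
  const pathParts = path.split('/');
  if (patternParts.length !== pathParts.length) return null;

  const params: Record<string, string> = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(':')) {
      // Dynamic segment: capture the runtime value under the parameter name
      params[patternParts[i].slice(1)] = pathParts[i];
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // Static segment mismatch
    }
  }
  return params;
}

// matchRoute('/todos/:todoId', '/todos/42') resolves to { todoId: '42' }
```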
You can also nest dynamic paths, for example `/todos/:todoId/comments/:commentId`, where both `:todoId` and `:commentId` will be resolved at runtime.
HTTP Methods¶
You can use dedicated methods to specify the HTTP method that should be handled in each resolver. That is, `app.<httpMethod>`, where the HTTP method could be `delete`, `get`, `head`, `patch`, `post`, `put`, or `options`.
If you need to accept multiple HTTP methods in a single function, or support an HTTP method for which no dedicated method exists (e.g. `TRACE`), you can use the `route` method and pass a list of HTTP methods.
Tip
We generally recommend having separate functions for each HTTP method, as the functionality tends to differ depending on which method is used.
Data validation¶
Coming soon
Please open an issue if you would like us to prioritize this feature.
Accessing request details¶
You can access request details such as headers, query parameters, and body using the `Request` object provided to your route handlers.
Handling not found routes¶
Coming soon
Please open an issue if you would like us to prioritize this feature.
Error handling¶
Coming soon
Please open an issue if you would like us to prioritize this feature.
Throwing HTTP errors¶
Coming soon
Please open an issue if you would like us to prioritize this feature.
Enabling SwaggerUI¶
Coming soon
Please open an issue if you would like us to prioritize this feature.
Custom domains¶
Coming soon
Please open an issue if you would like us to prioritize this feature.
Advanced¶
CORS¶
You can configure CORS at the router level via the `cors` middleware.
Coming soon
Middleware¶
Middleware are functions that execute during the request-response cycle, sitting between the incoming request and your route handler. They provide a way to implement cross-cutting concerns like authentication, logging, validation, and response transformation without cluttering your route handlers.
Each middleware function receives the following arguments:
- `params`: Route parameters extracted from the URL path
- `reqCtx`: Request context containing the event, Lambda context, request, and response objects
- `next`: A function to pass control to the next middleware in the chain
Middleware can be applied on specific routes, globally on all routes, or a combination of both.
Middleware execution follows an onion pattern where global middleware runs first in pre-processing, then route-specific middleware. After the handler executes, the order reverses for post-processing. When middleware modify the same response properties, the middleware that executes last in post-processing wins.
```mermaid
sequenceDiagram
    participant Request
    participant Router
    participant GM as Global Middleware
    participant RM as Route Middleware
    participant Handler as Route Handler
    Request->>Router: Incoming Request
    Router->>GM: Execute (params, reqCtx, next)
    Note over GM: Pre-processing
    GM->>RM: Call next()
    Note over RM: Pre-processing
    RM->>Handler: Call next()
    Note over Handler: Execute handler
    Handler-->>RM: Return
    Note over RM: Post-processing
    RM-->>GM: Return
    Note over GM: Post-processing
    GM-->>Router: Return
    Router-->>Request: Response
```
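The onion pattern above can be sketched with a minimal chain runner. This is an illustration of the execution semantics only, not the library's implementation:

```typescript
// Minimal sketch of the onion execution model; not the library's code.
type Middleware = (next: () => void) => void;

function runChain(middleware: Middleware[], handler: () => void): void {
  let i = -1;
  const next = (): void => {
    i++;
    if (i < middleware.length) {
      middleware[i](next); // code before next() is pre-processing, after is post
    } else {
      handler();
    }
  };
  next();
}

const order: string[] = [];
const logged =
  (name: string): Middleware =>
  (next) => {
    order.push(`${name}:pre`);
    next();
    order.push(`${name}:post`);
  };

// Global middleware runs first in pre-processing, route middleware second;
// the order reverses after the handler executes.
runChain([logged('global'), logged('route')], () => {
  order.push('handler');
});
// order: ['global:pre', 'route:pre', 'handler', 'route:post', 'global:post']
```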
Registering middleware¶
You can use `app.use` to register middleware that should always run regardless of the route, and you can apply middleware to specific routes by passing them as arguments before the route handler.
Returning early¶
There are cases where you may want to terminate the execution of the middleware chain early. To do so, middleware can short-circuit processing by returning a `Response` or JSON object instead of calling `next()`. Neither the handler nor any subsequent middleware will run, but the post-processing of already executed middleware will.
```mermaid
sequenceDiagram
    participant Request
    participant Router
    participant M1 as Middleware 1
    participant M2 as Middleware 2
    participant M3 as Middleware 3
    participant Handler as Route Handler
    Request->>Router: Incoming Request
    Router->>M1: Execute (params, reqCtx, next)
    Note over M1: Pre-processing
    M1->>M2: Call next()
    Note over M2: Pre-processing
    M2->>M2: Return Response (early return)
    Note over M2: Post-processing
    M2-->>M1: Return Response
    Note over M1: Post-processing
    M1-->>Router: Return Response
    Router-->>Request: Response
    Note over M3,Handler: Never executed
```
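In code, the early-return semantics look roughly like this sketch, where a middleware skips `next()` and returns a value directly. The types and chain runner are illustrative stand-ins, not the library's API:

```typescript
// Illustration of early-return semantics: a middleware that returns a value
// instead of calling next() short-circuits the rest of the chain, while
// already-running middleware still complete their post-processing.
type Mw = (next: () => unknown) => unknown;

function run(chain: Mw[], handler: () => unknown): unknown {
  let i = -1;
  const next = (): unknown => {
    i++;
    return i < chain.length ? chain[i](next) : handler();
  };
  return next();
}

const trace: string[] = [];

const m1: Mw = (next) => {
  trace.push('m1:pre');
  const res = next();
  trace.push('m1:post'); // still runs after the early return below
  return res;
};

const m2: Mw = () => {
  trace.push('m2:early-return');
  return { message: 'unauthorized' }; // early return: handler never runs
};

const result = run([m1, m2], () => {
  trace.push('handler'); // never reached
  return { ok: true };
});
// trace: ['m1:pre', 'm2:early-return', 'm1:post']
```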
Error Handling¶
By default, any unhandled error in the middleware chain is propagated as an HTTP 500 back to the client. Unlike an early return, this stops the middleware chain entirely and no post-processing steps for any previously executed middleware will occur.
```mermaid
sequenceDiagram
    participant Request
    participant Router
    participant EH as Error Handler
    participant M1 as Middleware 1
    participant M2 as Middleware 2
    participant Handler as Route Handler
    Request->>Router: Incoming Request
    Router->>M1: Execute (params, reqCtx, next)
    Note over M1: Pre-processing
    M1->>M2: Call next()
    Note over M2: Throws Error
    M2-->>M1: Error propagated
    M1-->>Router: Error propagated
    Router->>EH: Handle error
    EH-->>Router: HTTP 500 Response
    Router-->>Request: HTTP 500 Error
    Note over Handler: Never executed
```
You can handle errors in middleware as you would anywhere else: simply wrap your code in a `try`/`catch` block and processing will continue as usual.
```mermaid
sequenceDiagram
    participant Request
    participant Router
    participant M1 as Middleware 1
    participant M2 as Middleware 2
    participant Handler as Route Handler
    Request->>Router: Incoming Request
    Router->>M1: Execute (params, reqCtx, next)
    Note over M1: Pre-processing
    M1->>M2: Call next()
    Note over M2: Error thrown & caught
    Note over M2: Handle error gracefully
    M2->>Handler: Call next()
    Note over Handler: Execute handler
    Handler-->>M2: Return
    Note over M2: Post-processing
    M2-->>M1: Return
    Note over M1: Post-processing
    M1-->>Router: Return
    Router-->>Request: Response
```
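A sketch of that pattern: the middleware catches its own error and the chain continues normally. The `Next` type is a simplified stand-in for the real signature:

```typescript
// A middleware that catches a known error internally and continues the chain.
type Next = () => unknown;
const events: string[] = [];

const resilient = (next: Next): unknown => {
  try {
    throw new Error('transient failure'); // stand-in for a fallible operation
  } catch {
    events.push('error handled'); // recover from the known error
  }
  return next(); // processing continues as usual
};

const out = resilient(() => {
  events.push('handler');
  return { ok: true };
});
// events: ['error handled', 'handler']
```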
Similarly, you can choose to stop processing entirely by throwing an error in your middleware. Event handler provides many built-in HTTP errors that you can use or you can throw a custom error of your own. As noted above, this means that no post-processing of your request will occur.
```mermaid
sequenceDiagram
    participant Request
    participant Router
    participant EH as Error Handler
    participant M1 as Middleware 1
    participant M2 as Middleware 2
    participant Handler as Route Handler
    Request->>Router: Incoming Request
    Router->>M1: Execute (params, reqCtx, next)
    Note over M1: Pre-processing
    M1->>M2: Call next()
    Note over M2: Intentionally throws error
    M2-->>M1: Error propagated
    M1-->>Router: Error propagated
    Router->>EH: Handle error
    EH-->>Router: HTTP Error Response
    Router-->>Request: HTTP Error Response
    Note over Handler: Never executed
```
Custom middleware¶
A common pattern for creating reusable middleware is to implement a factory function that accepts configuration options and returns a middleware function.
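A sketch of the factory pattern using the middleware signature described above (`params`, `reqCtx`, `next`); the types here are simplified stand-ins, not the library's:

```typescript
// Stand-in types to illustrate the middleware signature; the library's
// actual types may differ.
type ReqCtx = { res: { headers: Record<string, string> } };
type Next = () => void;
type Middleware = (
  params: Record<string, string>,
  reqCtx: ReqCtx,
  next: Next
) => void;

// Factory: accepts configuration options and returns a middleware function
function cacheControl(maxAgeSeconds: number): Middleware {
  return (_params, reqCtx, next) => {
    next(); // let the rest of the chain and the handler run first
    // Post-processing only: annotate the response after the handler has run
    reqCtx.res.headers['Cache-Control'] = `max-age=${maxAgeSeconds}`;
  };
}
```

A hypothetical registration might look like `app.get('/todos', cacheControl(60), handler)`.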
A middleware built this way can act only in the post-processing stage by performing all of its logic after calling the `next` function, ensuring the handler has already run and the response body is available.
Avoiding destructuring pitfalls¶
Critical: Never destructure the response object
When writing middleware, always access the response through `reqCtx.res` rather than destructuring `{ res }` from the request context. Destructuring captures a reference to the original response object, which becomes stale when middleware replaces the response.
During the middleware execution chain, the response object (`reqCtx.res`) can be replaced by other middleware or the route handler. When you destructure the request context, you capture a reference to the response object as it existed at that moment, not the current response.
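The stale-reference problem can be reproduced with plain objects; the request context below is a simplified stand-in:

```typescript
// Simplified stand-in for the request context, showing why destructuring
// captures a stale reference.
type Res = { status: number };
const reqCtx: { res: Res } = { res: { status: 200 } };

// Wrong: destructuring captures the response object as it exists right now
const { res } = reqCtx;

// Later, another middleware or the handler replaces the response entirely
reqCtx.res = { status: 404 };

// The destructured reference still points at the old, stale object:
// res.status is 200, while reqCtx.res.status is 404
```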
Composing middleware¶
You can create reusable middleware stacks by using the `composeMiddleware` function to combine multiple middleware into a single middleware function. This is useful for creating standardized middleware combinations that can be shared across different routes or applications.
The `composeMiddleware` function maintains the same execution order as if you had applied the middleware individually, following the onion pattern where middleware execute in order during pre-processing and in reverse order during post-processing.
Composition order
Unlike traditional function composition, which typically works right-to-left, `composeMiddleware` follows the convention used by most web frameworks and executes middleware left-to-right (first to last in the array). This means `composeMiddleware([a, b, c])` executes middleware `a` first, then `b`, then `c`.
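The left-to-right ordering can be illustrated with a minimal compose implementation; this demonstrates the semantics described above, not the library's code:

```typescript
// Minimal illustration of left-to-right middleware composition semantics.
type Next = () => void;
type Middleware = (next: Next) => void;

function compose(middleware: Middleware[]): Middleware {
  return (finalNext) => {
    let i = -1;
    const step = (): void => {
      i++;
      if (i < middleware.length) middleware[i](step);
      else finalNext();
    };
    step();
  };
}

const order: string[] = [];
const tag = (name: string): Middleware => (next) => {
  order.push(`${name}:pre`);
  next();
  order.push(`${name}:post`);
};

// compose([a, b, c]) runs a, then b, then c during pre-processing,
// and c, then b, then a during post-processing.
compose([tag('a'), tag('b'), tag('c')])(() => {
  order.push('handler');
});
// order: ['a:pre', 'b:pre', 'c:pre', 'handler', 'c:post', 'b:post', 'a:post']
```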
Being a good citizen¶
Middleware can add subtle improvements to request/response processing, but also add significant complexity if you're not careful.
Keep the following in mind when authoring middleware for Event Handler:
- Call the next middleware. Unless you are returning early with a `Response` object or JSON object, always ensure you call the `next` function.
- Keep a lean scope. Focus on a single task per middleware to ease composability and maintenance.
- Catch your own errors. Catch and handle errors known to your logic, unless you want to raise HTTP errors or propagate specific errors to the client.
- Avoid destructuring the response object. As mentioned in the destructuring pitfalls section, always access the response through `reqCtx.res` rather than destructuring, to avoid stale references.
Fine grained responses¶
You can use the Web API's `Response` object to take full control over the response. For example, you might want to add additional headers or cookies, or set a custom content type.
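For instance, a handler might build and return a `Response` directly. The Web API `Response` is available globally in Node.js 18+; the route wiring is omitted here and the header values are illustrative:

```typescript
// Building a fine-grained response with the standard Web API Response object
// (a global in Node.js 18+). Header values are illustrative.
const todo = { id: '42', title: 'Buy milk' };

const response = new Response(JSON.stringify(todo), {
  status: 201, // Created
  headers: {
    'Content-Type': 'application/json',
    'Cache-Control': 'no-store', // example of an extra header
  },
});
// response.status is 201; the Content-Type header is 'application/json'
```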
Response streaming¶
Coming soon
Please open an issue if you would like us to prioritize this feature.
Compress¶
You can compress your responses with gzip and base64-encode them via the `compress` parameter. You can either pass the `compress` parameter when working with a specific route or set the correct `Accept-Encoding` header in the `Response` object.
Coming soon
Please open an issue if you would like us to prioritize this feature.
Binary responses¶
Using API Gateway?
Amazon API Gateway does not support the `*/*` binary media type when CORS is also configured. This feature requires API Gateway to be configured with binary media types; see our sample infrastructure for reference.
For convenience, we automatically base64-encode binary responses. You can also use this in combination with the `compress` parameter if your client supports gzip.
Like the `compress` feature, the client must send the `Accept` header with the correct media type.
Tip
Lambda Function URLs handle binary media types automatically.
Coming soon
Please open an issue if you would like us to prioritize this feature.
Debug mode¶
You can enable debug mode via the `POWERTOOLS_DEV` environment variable.
This will enable full stack traces in error responses, log requests and responses, and set CORS in development mode.
Coming soon
Please open an issue if you would like us to prioritize this feature.
OpenAPI¶
When you enable Data Validation, we use a combination of Zod and JSON Schemas to add constraints to your API's parameters.
In OpenAPI documentation tools like SwaggerUI, these annotations become readable descriptions, offering a self-explanatory API interface. This reduces boilerplate code while improving functionality and enabling auto-documentation.
Coming soon
Please open an issue if you would like us to prioritize this feature.
Split routers¶
As you grow the number of routes a given Lambda function should handle, it is natural to either break it into smaller Lambda functions or split routes into separate files to ease maintenance; that's where the split `Router` feature is useful.
Coming soon
Please open an issue if you would like us to prioritize this feature.
Considerations¶
This utility is optimized for the AWS Lambda computing model and prioritizes fast startup, a minimal feature set, and quick onboarding for triggers supported by Lambda.
Event Handler naturally leads to a single Lambda function handling multiple routes for a given service, which can be eventually broken into multiple functions.
Both single-function (monolithic) and multiple-function (micro) approaches offer different sets of trade-offs worth knowing.
TL;DR
Start with a monolithic function, add additional functions with new handlers, and possibly break into micro functions if necessary.
Monolithic function¶
A monolithic function means that your final code artifact will be deployed to a single function. This is generally the best approach to start.
Benefits
- Code reuse. It's easier to reason about your service, modularize it and reuse code as it grows. Eventually, it can be turned into a standalone library.
- No custom tooling. Monolithic functions are treated just like normal TypeScript packages; no upfront investment in tooling.
- Faster deployment and debugging. Whether you use all-at-once, linear, or canary deployments, a monolithic function is a single deployable unit. IDEs like WebStorm and VS Code have tooling to quickly profile, visualize, and step-through debug any TypeScript package.
Downsides
- Cold starts. Frequent deployments and/or high load can diminish the benefit of monolithic functions depending on your latency requirements, due to the Lambda scaling model. Always load test to find a pragmatic balance between customer experience and developer cognitive load.
- Granular security permissions. The micro function approach enables you to use fine-grained permissions and access controls, separate external dependencies, and code signing at the function level. Conversely, you could have multiple functions while duplicating the final code artifact in a monolithic approach. Regardless, least privilege can be applied to either approach.
- Higher risk per deployment. A misconfiguration or invalid import can cause disruption if not caught early in automated testing. Multiple functions can mitigate misconfigurations but they will still share the same code artifact. You can further minimize risks with multiple environments in your CI/CD pipeline.
Micro function¶
A micro function means that your final code artifact will be different for each function deployed. This is generally the approach to start with if you're looking for fine-grained control and/or have high load on certain parts of your service.
Benefits
- Granular scaling. A micro function can benefit from the Lambda scaling model to scale differently depending on each part of your application. Concurrency controls and provisioned concurrency can also be used at a granular level for capacity management.
- Discoverability. Micro functions are easier to visualize when using distributed tracing. Their high-level architectures can be self-explanatory, and complexity is highly visible — assuming each function is named after the business purpose it serves.
- Package size. An independent function can be significantly smaller (KB vs MB) depending on the external dependencies it requires to perform its purpose. Conversely, a monolithic approach can benefit from Lambda Layers to optimize builds for external dependencies.
Downsides
- Upfront investment. You need custom build tooling to bundle assets, including native bindings for runtime compatibility. Operations become more elaborate: you need to standardize tracing labels/annotations, structured logging, and metrics to pinpoint root causes.
- Engineering discipline is necessary for both approaches. However, the micro-function approach requires further attention to consistency as the number of functions grows, just like any distributed system.
- Harder to share code. Shared code must be carefully evaluated to avoid unnecessary deployments when that code changes. Equally, if shared code isn't a library, your development, build, and deployment tooling needs to accommodate the distinct layout.
- Slower safe deployments. Safely deploying multiple functions requires coordination: AWS CodeDeploy deploys and verifies each function sequentially. This increases lead time substantially (minutes to hours) depending on the deployment strategy you choose. You can mitigate this by selectively enabling it only in production-like environments where the risk profile warrants it. Automated testing, and operational and security reviews, are essential to stability in either approach.
Testing your code¶
Coming soon
Please open an issue if you would like us to prioritize this section.