REST API

Feature status

This feature is under active development and may undergo significant changes. We recommend using it in non-critical workloads and providing feedback to help us improve it.

Event handler for Amazon API Gateway REST and HTTP APIs, Application Load Balancer (ALB), Lambda Function URLs, and VPC Lattice.

Key Features

  • Lightweight routing to reduce boilerplate for API Gateway REST/HTTP API, ALB and Lambda Function URLs.
  • Built-in middleware engine for request/response transformation and validation.
  • Works with micro functions (one or a few routes) and monolithic functions (all routes).

Getting started

Install

This is not necessary if you're installing Powertools for AWS Lambda (TypeScript) via Lambda layer.

npm install @aws-lambda-powertools/event-handler

Required resources

If you're using any API Gateway integration, you must have an existing API Gateway Proxy integration or ALB configured to invoke your Lambda function.

If you're using VPC Lattice, you must have a service network configured to invoke your Lambda function.

This is the sample infrastructure for API Gateway and Lambda Function URLs we use for the examples in this documentation. There are no additional permissions or dependencies required to use this utility.

See Infrastructure as Code (IaC) examples
AWS Serverless Application Model (SAM) example
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: Hello world event handler API Gateway

Globals:
  Api:
    TracingEnabled: true
    Cors: # see CORS section
      AllowOrigin: "'https://example.com'"
      AllowHeaders: "'Content-Type,Authorization,X-Amz-Date'"
      MaxAge: "'300'"
    BinaryMediaTypes: # see Binary responses section
      - "*~1*" # converts to */* for any binary type
      # NOTE: use this stricter version if you're also using CORS; */* doesn't work with CORS
      # see: https://github.com/aws-powertools/powertools-lambda-python/issues/3373#issuecomment-1821144779
      # - "image~1*" # converts to image/*
      # - "*~1csv" # converts to */csv, eg text/csv, application/csv

  Function:
    Timeout: 5
    MemorySize: 256
    Runtime: nodejs22.x
    Tracing: Active
    Environment:
      Variables:
        POWERTOOLS_LOG_LEVEL: INFO
        POWERTOOLS_SERVICE_NAME: hello

Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      CodeUri: hello_world
      Description: API handler function
      Events:
        AnyApiEvent:
          Type: Api
          Properties:
            # NOTE: this is a catch-all rule to simplify the documentation.
            # explicit routes and methods are recommended for prod instead (see below)
            Path: /{proxy+} # Send requests on any path to the lambda function
            Method: ANY # Send requests using any http method to the lambda function
        GetAllTodos:
          Type: Api
          Properties:
            Path: /todos
            Method: GET
        GetTodoById:
          Type: Api
          Properties:
            Path: /todos/{todo_id}
            Method: GET

Route events

Before you start defining your routes, it's important to understand how the event handler works with different types of events. The event handler can process events from API Gateway REST APIs, and will soon support HTTP APIs, ALB, Lambda Function URLs, and VPC Lattice as well.

When a request is received, the event handler will automatically convert the event into a Request object and give you access to the current request context, including headers, query parameters, and request body, as well as path parameters via typed arguments.

Response auto-serialization

Want full control over the response, headers, and status code? Read about it in the Fine grained responses section.

For your convenience, when you return a JavaScript object from your route handler, we automatically perform these actions:

  • Auto-serialize the response to JSON and trim whitespace
  • Include the response under the event's equivalent of a body key
  • Set the Content-Type header to application/json
  • Set the HTTP status code to 200 (OK)
import { Router } from '@aws-lambda-powertools/event-handler/experimental-rest';
import type { Context } from 'aws-lambda';

const app = new Router();

app.get('/ping', async () => {
  return { message: 'pong' }; // (1)!
});

export const handler = async (event: unknown, context: Context) => {
  return app.resolve(event, context);
};
  1. This object will be serialized, trimmed, and included under the body key
{
  "statusCode": 200,
  "headers": {
    "Content-Type": "application/json"
  },
  "body": "{'message':'pong'}",
  "isBase64Encoded": false
}

Dynamic routes

You can use /todos/:todoId to configure dynamic URL paths, where :todoId will be resolved at runtime.

All dynamic route parameters will be available as typed object properties in the first argument of your route handler.

import { Router } from '@aws-lambda-powertools/event-handler/experimental-rest';
import { Logger } from '@aws-lambda-powertools/logger';
import {
  correlationPaths,
  search,
} from '@aws-lambda-powertools/logger/correlationId';
import type { Context } from 'aws-lambda/handler';

const logger = new Logger({
  correlationIdSearchFn: search,
});
const app = new Router({ logger });

app.get('/todos/:todoId', async ({ todoId }) => {
  const todo = await getTodoById(todoId);
  return { todo };
});

export const handler = async (event: unknown, context: Context) => {
  // You can continue using other utilities just as before
  logger.addContext(context);
  logger.setCorrelationId(event, correlationPaths.API_GATEWAY_REST);
  return app.resolve(event, context);
};
{
  "resource": "/todos/{id}",
  "path": "/todos/1",
  "httpMethod": "GET"
}

You can also nest dynamic paths, for example /todos/:todoId/comments/:commentId, where both :todoId and :commentId will be resolved at runtime.

HTTP methods

You can use dedicated methods to specify the HTTP method that should be handled in each resolver. That is, app.<httpMethod>, where the HTTP method could be delete, get, head, patch, post, put, options.

import { Router } from '@aws-lambda-powertools/event-handler/experimental-rest';
import { Logger } from '@aws-lambda-powertools/logger';
import {
  correlationPaths,
  search,
} from '@aws-lambda-powertools/logger/correlationId';
import type { Context } from 'aws-lambda/handler';

const logger = new Logger({
  correlationIdSearchFn: search,
});
const app = new Router({ logger });

app.post('/todos', async (_, { request }) => {
  const body = await request.json();
  const todo = await putTodo(body);

  return todo;
});

export const handler = async (event: unknown, context: Context) => {
  // You can continue using other utilities just as before
  logger.addContext(context);
  logger.setCorrelationId(event, correlationPaths.API_GATEWAY_REST);
  return app.resolve(event, context);
};
{
  "resource": "/todos",
  "path": "/todos",
  "httpMethod": "POST",
  "body": "{\"title\": \"foo\", \"userId\": 1, \"completed\": false}"
}

If you need to accept multiple HTTP methods in a single function, or support an HTTP method for which no dedicated method exists (e.g., TRACE), you can use the route method and pass a list of HTTP methods.

import { Router } from '@aws-lambda-powertools/event-handler/experimental-rest';
import { Logger } from '@aws-lambda-powertools/logger';
import {
  correlationPaths,
  search,
} from '@aws-lambda-powertools/logger/correlationId';
import type { Context } from 'aws-lambda/handler';

const logger = new Logger({
  correlationIdSearchFn: search,
});
const app = new Router({ logger });

app.route(
  async (_, { request }) => {
    const body = await request.json();
    const todo = await putTodo(body);

    return todo;
  },
  {
    path: '/todos',
    method: ['POST', 'PUT'],
  }
);

export const handler = async (event: unknown, context: Context) => {
  // You can continue using other utilities just as before
  logger.addContext(context);
  logger.setCorrelationId(event, correlationPaths.API_GATEWAY_REST);
  return app.resolve(event, context);
};

Tip

We generally recommend having a separate function for each HTTP method, as the functionality tends to differ depending on which method is used.

Data validation

Coming soon

Please open an issue if you would like us to prioritize this feature.

Accessing request details

You can access request details such as headers, query parameters, and body using the Request object provided to your route handlers.
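The Request object follows the Web API Request interface, which you can explore standalone. The snippet below is an illustrative sketch with no Router involved; the URL, headers, and values are made up for demonstration.

```typescript
// Illustrative sketch: the same Web API Request interface your route
// handlers receive via the request context, shown here standalone.
const request = new Request(
  'https://example.com/todos?limit=10&completed=true',
  {
    method: 'GET',
    headers: { authorization: 'Bearer token123' },
  }
);

// Headers: case-insensitive lookups via headers.get()
const auth = request.headers.get('Authorization'); // 'Bearer token123'

// Query parameters: parse them from the request URL
const { searchParams } = new URL(request.url);
const limit = searchParams.get('limit'); // '10'
const completed = searchParams.get('completed'); // 'true'

// Body: for POST/PUT requests, use `await request.json()` or
// `await request.text()`, as shown in the HTTP methods section above
```

Because Headers lookups are case-insensitive, `headers.get('Authorization')` and `headers.get('authorization')` return the same value.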

Handling not found routes

Coming soon

Please open an issue if you would like us to prioritize this feature.

Error handling

Coming soon

Please open an issue if you would like us to prioritize this feature.

Throwing HTTP errors

Coming soon

Please open an issue if you would like us to prioritize this feature.

Enabling SwaggerUI

Coming soon

Please open an issue if you would like us to prioritize this feature.

Custom domains

Coming soon

Please open an issue if you would like us to prioritize this feature.

Advanced

CORS

You can configure CORS at the router level via the cors middleware.

Coming soon

Middleware

Middleware are functions that execute during the request-response cycle, sitting between the incoming request and your route handler. They provide a way to implement cross-cutting concerns like authentication, logging, validation, and response transformation without cluttering your route handlers.

Each middleware function receives the following arguments:

  • params Route parameters extracted from the URL path
  • reqCtx Request context containing the event, Lambda context, request, and response objects
  • next A function to pass control to the next middleware in the chain

Middleware can be applied on specific routes, globally on all routes, or a combination of both.

Middleware execution follows an onion pattern where global middleware runs first in pre-processing, then route-specific middleware. After the handler executes, the order reverses for post-processing. When middleware modify the same response properties, the middleware that executes last in post-processing wins.

sequenceDiagram
    participant Request
    participant Router
    participant GM as Global Middleware
    participant RM as Route Middleware
    participant Handler as Route Handler

    Request->>Router: Incoming Request
    Router->>GM: Execute (params, reqCtx, next)
    Note over GM: Pre-processing
    GM->>RM: Call next()
    Note over RM: Pre-processing
    RM->>Handler: Call next()
    Note over Handler: Execute handler
    Handler-->>RM: Return
    Note over RM: Post-processing
    RM-->>GM: Return
    Note over GM: Post-processing
    GM-->>Router: Return
    Router-->>Request: Response

Registering middleware

You can use app.use to register middleware that should always run, regardless of the route. You can also apply middleware to specific routes by passing them as arguments before the route handler.

import { Router } from '@aws-lambda-powertools/event-handler/experimental-rest';
import type { Middleware } from '@aws-lambda-powertools/event-handler/types';
import { Logger } from '@aws-lambda-powertools/logger';
import type { Context } from 'aws-lambda';

const logger = new Logger();
const app = new Router({ logger });

// Global middleware - executes first in pre-processing, last in post-processing
app.use(async (params, reqCtx, next) => {
  reqCtx.res.headers.set('x-pre-processed-by', 'global-middleware');
  await next();
  reqCtx.res.headers.set('x-post-processed-by', 'global-middleware');
});

// Route-specific middleware - executes second in pre-processing, first in post-processing
const routeMiddleware: Middleware = async (params, reqCtx, next) => {
  reqCtx.res.headers.set('x-pre-processed-by', 'route-middleware');
  await next();
  reqCtx.res.headers.set('x-post-processed-by', 'route-middleware');
};

app.get('/todos', async () => {
  const todos = await getAllTodos();
  return { todos };
});

// This route will have:
// x-pre-processed-by: route-middleware (route middleware overwrites global)
// x-post-processed-by: global-middleware (global middleware executes last)
app.post('/todos', [routeMiddleware], async (params, reqCtx) => {
  const body = await reqCtx.request.json();
  const todo = await putTodo(body);
  return todo;
});

export const handler = async (event: unknown, context: Context) => {
  return app.resolve(event, context);
};
{
  "statusCode": 200,
  "body": "{\"id\":\"123\",\"title\":\"New todo\"}",
  "headers": {
    "content-type": "application/json",
    "x-pre-processed-by": "route-middleware",
    "x-post-processed-by": "global-middleware"
  },
  "isBase64Encoded": false
}

Returning early

There are cases where you may want to terminate the execution of the middleware chain early. To do so, middleware can short-circuit processing by returning a Response or JSON object instead of calling next(). Neither the handler nor any subsequent middleware will run, but the post-processing of already-executed middleware will.

sequenceDiagram
    participant Request
    participant Router
    participant M1 as Middleware 1
    participant M2 as Middleware 2
    participant M3 as Middleware 3
    participant Handler as Route Handler

    Request->>Router: Incoming Request
    Router->>M1: Execute (params, reqCtx, next)
    Note over M1: Pre-processing
    M1->>M2: Call next()
    Note over M2: Pre-processing
    M2->>M2: Return Response (early return)
    Note over M2: Post-processing
    M2-->>M1: Return Response
    Note over M1: Post-processing
    M1-->>Router: Return Response
    Router-->>Request: Response
    Note over M3,Handler: Never executed
import { Router } from '@aws-lambda-powertools/event-handler/experimental-rest';
import type { Middleware } from '@aws-lambda-powertools/event-handler/types';
import { Logger } from '@aws-lambda-powertools/logger';
import type { Context } from 'aws-lambda';

const logger = new Logger();
const app = new Router({ logger });

// Authentication middleware - returns early if no auth header
const authMiddleware: Middleware = async (params, reqCtx, next) => {
  const authHeader = reqCtx.request.headers.get('authorization');

  if (!authHeader) {
    return new Response(JSON.stringify({ error: 'Unauthorized' }), {
      status: 401,
      headers: { 'Content-Type': 'application/json' },
    });
  }

  await next();
};

// Logging middleware - never executes when auth fails
const loggingMiddleware: Middleware = async (params, reqCtx, next) => {
  logger.info('Request processed');
  await next();
};

app.use(authMiddleware);
app.use(loggingMiddleware);

app.get('/todos', async () => {
  const todos = await getAllTodos();
  return { todos };
});

export const handler = async (event: unknown, context: Context) => {
  return app.resolve(event, context);
};
{
  "statusCode": 401,
  "body": "{\"error\":\"Unauthorized\"}",
  "headers": {
    "Content-Type": "application/json"
  },
  "isBase64Encoded": false
}

Error handling

By default, any unhandled error in the middleware chain is propagated as an HTTP 500 back to the client. As you would expect, unlike an early return, this stops the middleware chain entirely and no post-processing steps for any previously executed middleware will occur.

sequenceDiagram
    participant Request
    participant Router
    participant EH as Error Handler
    participant M1 as Middleware 1
    participant M2 as Middleware 2
    participant Handler as Route Handler

    Request->>Router: Incoming Request
    Router->>M1: Execute (params, reqCtx, next)
    Note over M1: Pre-processing
    M1->>M2: Call next()
    Note over M2: Throws Error
    M2-->>M1: Error propagated
    M1-->>Router: Error propagated
    Router->>EH: Handle error
    EH-->>Router: HTTP 500 Response
    Router-->>Request: HTTP 500 Error
    Note over Handler: Never executed

Unhandled errors

You can handle errors in middleware as you would anywhere else: surround your code with a try/catch block and processing will occur as usual.

sequenceDiagram
    participant Request
    participant Router
    participant M1 as Middleware 1
    participant M2 as Middleware 2
    participant Handler as Route Handler

    Request->>Router: Incoming Request
    Router->>M1: Execute (params, reqCtx, next)
    Note over M1: Pre-processing
    M1->>M2: Call next()
    Note over M2: Error thrown & caught
    Note over M2: Handle error gracefully
    M2->>Handler: Call next()
    Note over Handler: Execute handler
    Handler-->>M2: Return
    Note over M2: Post-processing
    M2-->>M1: Return
    Note over M1: Post-processing
    M1-->>Router: Return
    Router-->>Request: Response

Handled errors
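A middleware that handles its own errors might look like the following sketch. The cache warm-up helper is hypothetical, and minimal local stand-ins are used for the middleware shape described above (the real Middleware type comes from '@aws-lambda-powertools/event-handler/types').

```typescript
// Minimal stand-ins for the (params, reqCtx, next) middleware shape
type ReqCtx = { res: { headers: Map<string, string> } };
type Next = () => Promise<void>;

// Hypothetical helper that can fail, e.g. a cache warm-up call
const warmUpCache = async (): Promise<void> => {
  throw new Error('cache unavailable');
};

// Middleware that catches its own error: the chain continues as usual
const cacheMiddleware = async (
  _params: unknown,
  reqCtx: ReqCtx,
  next: Next
) => {
  try {
    await warmUpCache();
    reqCtx.res.headers.set('x-cache', 'hit');
  } catch {
    // Handle the error gracefully and keep processing the request
    reqCtx.res.headers.set('x-cache', 'miss');
  }
  await next();
};

// Simulated invocation with a mock request context
const reqCtx: ReqCtx = { res: { headers: new Map<string, string>() } };
let handlerRan = false;
await cacheMiddleware({}, reqCtx, async () => {
  handlerRan = true;
});
// handlerRan === true; reqCtx.res.headers.get('x-cache') === 'miss'
```

Because the error is caught inside the middleware, next() still runs and the handler executes normally.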

Similarly, you can choose to stop processing entirely by throwing an error in your middleware. Event handler provides many built-in HTTP errors that you can use or you can throw a custom error of your own. As noted above, this means that no post-processing of your request will occur.

sequenceDiagram
    participant Request
    participant Router
    participant EH as Error Handler
    participant M1 as Middleware 1
    participant M2 as Middleware 2
    participant Handler as Route Handler

    Request->>Router: Incoming Request
    Router->>M1: Execute (params, reqCtx, next)
    Note over M1: Pre-processing
    M1->>M2: Call next()
    Note over M2: Intentionally throws error
    M2-->>M1: Error propagated
    M1-->>Router: Error propagated
    Router->>EH: Handle error
    EH-->>Router: HTTP Error Response
    Router-->>Request: HTTP Error Response
    Note over Handler: Never executed

Intentional errors
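An intentional error might look like the following sketch. The rate-limit scenario, counter, and custom error class are hypothetical; the package's built-in HTTP errors can be thrown the same way, and anything unhandled becomes an HTTP 500 as described above.

```typescript
// Hypothetical custom error carrying an HTTP status code
class RateLimitExceededError extends Error {
  readonly statusCode = 429;
}

type Next = () => Promise<void>;

// Hypothetical in-memory counter, for illustration only
let requestCount = 0;

// Middleware that intentionally stops processing: once it throws,
// the handler and any remaining middleware never run, and no
// post-processing occurs for previously executed middleware
const rateLimitMiddleware = async (
  _params: unknown,
  _reqCtx: unknown,
  next: Next
) => {
  requestCount += 1;
  if (requestCount > 2) {
    throw new RateLimitExceededError('Too many requests');
  }
  await next();
};
```

Invoking it a third time (after two successful calls) throws, which the router's error handler would translate into an HTTP error response.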

Custom middleware

A common pattern to create reusable middleware is to implement a factory function that accepts configuration options and returns a middleware function.

import { Router } from '@aws-lambda-powertools/event-handler/experimental-rest';
import type { Middleware } from '@aws-lambda-powertools/event-handler/types';
import type { Context } from 'aws-lambda';

interface CompressOptions {
  threshold?: number;
  level?: number;
}

// Factory function that returns middleware
const compress = (options: CompressOptions = {}): Middleware => {
  return async (params, reqCtx, next) => {
    await next();

    // Check if response should be compressed
    const body = await reqCtx.res.text();
    const threshold = options.threshold || 1024;

    if (body.length > threshold) {
      const compressedBody = await compressBody(body);
      const compressedRes = new Response(compressedBody, reqCtx.res);
      compressedRes.headers.set('Content-Encoding', 'gzip');
      reqCtx.res = compressedRes;
    }
  };
};

const app = new Router();

// Use custom middleware globally
app.use(compress({ threshold: 500 }));

app.get('/data', async () => {
  return {
    message: 'Large response data',
    data: new Array(100).fill('content'),
  };
});

app.get('/small', async () => {
  return { message: 'Small response' };
});

export const handler = async (event: unknown, context: Context) => {
  return await app.resolve(event, context);
};

In this example, the middleware acts only in the post-processing stage, as all of its logic runs after the next function has been called. This ensures the handler has run and we have access to the response body.

Avoiding destructuring pitfalls

Critical: Never destructure the response object

When writing middleware, always access the response through reqCtx.res rather than destructuring { res } from the request context. Destructuring captures a reference to the original response object, which becomes stale when middleware replaces the response.

import { Router } from '@aws-lambda-powertools/event-handler/experimental-rest';
import type { Middleware } from '@aws-lambda-powertools/event-handler/types';
import type { Context } from 'aws-lambda';

const app = new Router();

// ❌ WRONG: Using destructuring captures a reference to the original response
const badMiddleware: Middleware = async (params, { res }, next) => {
  res.headers.set('X-Before', 'Before');
  await next();
  // This header will NOT be added because 'res' is a stale reference
  res.headers.set('X-After', 'After');
};

// ✅ CORRECT: Always access response through reqCtx
const goodMiddleware: Middleware = async (params, reqCtx, next) => {
  reqCtx.res.headers.set('X-Before', 'Before');
  await next();
  // This header WILL be added because we get the current response
  reqCtx.res.headers.set('X-After', 'After');
};

app.use(goodMiddleware);

app.get('/test', async () => {
  return { message: 'Hello World!' };
});

export const handler = async (event: unknown, context: Context) => {
  return await app.resolve(event, context);
};

During the middleware execution chain, the response object (reqCtx.res) can be replaced by other middleware or the route handler. When you destructure the request context, you capture a reference to the response object as it existed at that moment, not the current response.

Composing middleware

You can create reusable middleware stacks by using the composeMiddleware function to combine multiple middleware into a single middleware function. This is useful for creating standardized middleware combinations that can be shared across different routes or applications.

import {
  composeMiddleware,
  Router,
} from '@aws-lambda-powertools/event-handler/experimental-rest';
import type { Middleware } from '@aws-lambda-powertools/event-handler/types';
import { Logger } from '@aws-lambda-powertools/logger';
import type { Context } from 'aws-lambda';

const logger = new Logger();

// Individual middleware functions
const logging: Middleware = async (params, reqCtx, next) => {
  logger.info(`Request: ${reqCtx.request.method} ${reqCtx.request.url}`);
  await next();
  logger.info(`Response: ${reqCtx.res.status}`);
};

const cors: Middleware = async (params, reqCtx, next) => {
  await next();
  reqCtx.res.headers.set('Access-Control-Allow-Origin', '*');
  reqCtx.res.headers.set(
    'Access-Control-Allow-Methods',
    'GET, POST, PUT, DELETE'
  );
};

const rateLimit: Middleware = async (params, reqCtx, next) => {
  // Rate limiting logic would go here
  reqCtx.res.headers.set('X-RateLimit-Limit', '100');
  await next();
};

// Compose middleware stack for all requests
const apiMiddleware = composeMiddleware([logging, cors, rateLimit]);

const app = new Router();

// Use composed middleware globally
app.use(apiMiddleware);

app.get('/todos', async () => {
  const todos = await getAllTodos();
  return { todos };
});

app.post('/todos', async (params, { request }) => {
  const body = await request.json();
  const todo = await putTodo(body);
  return todo;
});

export const handler = async (event: unknown, context: Context) => {
  return await app.resolve(event, context);
};

The composeMiddleware function maintains the same execution order as if you had applied the middleware individually, following the onion pattern where middleware execute in order during pre-processing and in reverse order during post-processing.

Composition order

Unlike traditional function composition which typically works right-to-left, composeMiddleware follows the convention used by most web frameworks and executes middleware left-to-right (first to last in the array). This means composeMiddleware([a, b, c]) executes middleware a first, then b, then c.
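To make the left-to-right convention concrete, here is an illustrative re-implementation of onion-style composition. This is not the library's actual code, and the middleware signature is simplified (params and the request context are omitted) to focus purely on ordering.

```typescript
// Simplified middleware shape: receives only the `next` continuation
type Middleware = (next: () => Promise<void>) => Promise<void>;

// Onion-style composition: middleware run left-to-right on the way in,
// and right-to-left on the way out
const compose = (middleware: Middleware[]): Middleware => {
  return async (next) => {
    const run = async (i: number): Promise<void> => {
      if (i === middleware.length) return next();
      return middleware[i](() => run(i + 1));
    };
    return run(0);
  };
};

// Record the execution order to demonstrate the onion pattern
const order: string[] = [];
const make =
  (name: string): Middleware =>
  async (next) => {
    order.push(`${name}-pre`);
    await next();
    order.push(`${name}-post`);
  };

const stack = compose([make('a'), make('b'), make('c')]);
await stack(async () => {
  order.push('handler');
});
// order: a-pre, b-pre, c-pre, handler, c-post, b-post, a-post
```

The recorded order shows a executing first in pre-processing and last in post-processing, matching the convention described above.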

Being a good citizen

Middleware can add subtle improvements to request/response processing, but also add significant complexity if you're not careful.

Keep the following in mind when authoring middleware for Event Handler:

  • Call the next middleware. If you are not returning early by returning a Response object or JSON object, always ensure you call the next function.
  • Keep a lean scope. Focus on a single task per middleware to ease composability and maintenance.
  • Catch your own errors. Catch and handle known errors to your logic, unless you want to raise HTTP Errors, or propagate specific errors to the client.
  • Avoid destructuring the response object. As mentioned in the destructuring pitfalls section, always access the response through reqCtx.res rather than destructuring to avoid stale references.

Fine grained responses

You can use the Web API's Response object to have full control over the response. For example, you might want to add additional headers, cookies, or set a custom content type.

import { Router } from '@aws-lambda-powertools/event-handler/experimental-rest';
import { Logger } from '@aws-lambda-powertools/logger';
import type { Context } from 'aws-lambda';

const logger = new Logger();
const app = new Router({ logger });

app.get('/todos', async () => {
  const todos = await getAllTodos();

  return new Response(JSON.stringify({ todos }), {
    status: 200,
    headers: {
      'Content-Type': 'application/json',
      'Cache-Control': 'max-age=300',
      'X-Custom-Header': 'custom-value',
    },
  });
});

app.post('/todos', async (params, reqCtx) => {
  const body = await reqCtx.request.json();
  const todo = await createTodo(body.title);

  return new Response(JSON.stringify(todo), {
    status: 201,
    headers: {
      Location: `/todos/${todo.id}`,
      'Content-Type': 'application/json',
    },
  });
});

export const handler = async (event: unknown, context: Context) => {
  return app.resolve(event, context);
};
{
  "statusCode": 201,
  "body": "{\"id\":\"123\",\"title\":\"Learn TypeScript\"}",
  "headers": {
    "Content-Type": "application/json",
    "Location": "/todos/123"
  },
  "isBase64Encoded": false
}

Response streaming

Coming soon

Please open an issue if you would like us to prioritize this feature.

Compress

You can compress with gzip and base64 encode your responses via the compress parameter. You can pass the compress parameter when working with a specific route, or set the correct Accept-Encoding header in the Response object.

Coming soon

Please open an issue if you would like us to prioritize this feature.

Binary responses

Using API Gateway?

Amazon API Gateway does not support */* binary media type when CORS is also configured. This feature requires API Gateway to configure binary media types, see our sample infrastructure for reference.

For convenience, we automatically base64 encode binary responses. You can also use it in combination with the compress parameter if your client supports gzip.

As with the compress feature, the client must send the Accept header with the correct media type.

Tip

Lambda Function URLs handle binary media types automatically.

Coming soon

Please open an issue if you would like us to prioritize this feature.

Debug mode

You can enable debug mode via the POWERTOOLS_DEV environment variable.

This will enable full stack traces errors in the response, log request and responses, and set CORS in development mode.

Coming soon

Please open an issue if you would like us to prioritize this feature.

OpenAPI

When you enable Data Validation, we use a combination of Zod and JSON Schemas to add constraints to your API's parameters.

In OpenAPI documentation tools like SwaggerUI, these annotations become readable descriptions, offering a self-explanatory API interface. This reduces boilerplate code while improving functionality and enabling auto-documentation.

Coming soon

Please open an issue if you would like us to prioritize this feature.

Split routers

As you grow the number of routes a given Lambda function should handle, it is natural to either break into smaller Lambda functions, or split routes into separate files to ease maintenance - that's where the split Router feature is useful.

Coming soon

Please open an issue if you would like us to prioritize this feature.

Considerations

This utility is optimized for the AWS Lambda computing model and prioritizes fast startup, a minimal feature set, and quick onboarding for triggers supported by Lambda.

Event Handler naturally leads to a single Lambda function handling multiple routes for a given service, which can be eventually broken into multiple functions.

Both single-function (monolithic) and multiple-function (micro) approaches offer different sets of trade-offs worth knowing.

TL;DR

Start with a monolithic function, add additional functions with new handlers, and possibly break into micro functions if necessary.

Monolithic function

monolithic function

A monolithic function means that your final code artifact will be deployed to a single function. This is generally the best approach to start.

Benefits

  • Code reuse. It's easier to reason about your service, modularize it and reuse code as it grows. Eventually, it can be turned into a standalone library.
  • No custom tooling. Monolithic functions are treated just like normal TypeScript packages; no upfront investment in tooling.
  • Faster deployment and debugging. Whether you use all-at-once, linear, or canary deployments, a monolithic function is a single deployable unit. IDEs like WebStorm and VSCode have tooling to quickly profile, visualize, and step through debug any TypeScript package.

Downsides

  • Cold starts. Frequent deployments and/or high load can diminish the benefit of monolithic functions depending on your latency requirements, due to the Lambda scaling model. Always load test to find a pragmatic balance between customer experience and developer cognitive load.
  • Granular security permissions. The micro function approach enables you to use fine-grained permissions and access controls, separate external dependencies, and code signing at the function level. Conversely, you could have multiple functions while duplicating the final code artifact in a monolithic approach. Regardless, least privilege can be applied to either approach.
  • Higher risk per deployment. A misconfiguration or invalid import can cause disruption if not caught early in automated testing. Multiple functions can mitigate misconfigurations but they will still share the same code artifact. You can further minimize risks with multiple environments in your CI/CD pipeline.

Micro function

micro function

A micro function means that your final code artifact will be different for each function deployed. This is generally the approach to start with if you're looking for fine-grained control and/or have high load on certain parts of your service.

Benefits

  • Granular scaling. A micro function can benefit from the Lambda scaling model to scale differently depending on each part of your application. Concurrency controls and provisioned concurrency can also be used at a granular level for capacity management.
  • Discoverability. Micro functions are easier to visualize when using distributed tracing. Their high-level architectures can be self-explanatory, and complexity is highly visible — assuming each function is named after the business purpose it serves.
  • Package size. An independent function can be significantly smaller (KB vs MB) depending on the external dependencies it requires to perform its purpose. Conversely, a monolithic approach can benefit from Lambda Layers to optimize builds for external dependencies.

Downsides

  • Upfront investment. You need custom build tooling to bundle assets, including native bindings for runtime compatibility. Operations become more elaborate — you need to standardize tracing labels/annotations, structured logging, and metrics to pinpoint root causes.
  • Engineering discipline is necessary for both approaches. However, the micro-function approach requires further attention to consistency as the number of functions grows, just like any distributed system.
  • Harder to share code. Shared code must be carefully evaluated to avoid unnecessary deployments when this code changes. Equally, if shared code isn't a library, your development, building, deployment tooling need to accommodate the distinct layout.
  • Slower safe deployments. Safely deploying multiple functions requires coordination, since AWS CodeDeploy deploys and verifies each function sequentially. This increases lead time substantially (minutes to hours) depending on the deployment strategy you choose. You can mitigate this by selectively enabling safe deployments in prod-like environments only, and where the risk profile is applicable. Automated testing, operational and security reviews are essential to stability in either approach.

Testing your code

Coming soon

Please open an issue if you would like us to prioritize this section.