With AWS Lambda, you upload your code and run it without thinking about servers. Many customers enjoy the way this works, but if you’ve invested in container tooling for your development workflows, it’s not easy to use the same approach to build applications using Lambda.
To help you with that, you can now package and deploy Lambda functions as container images of up to 10 GB in size. This also makes it easy to build and deploy larger workloads that rely on sizable dependencies, such as machine learning or data-intensive applications. Just like functions packaged as ZIP archives, functions deployed as container images benefit from the same operational simplicity, automatic scaling, high availability, and native integrations with many services.
We are providing base images for all the supported Lambda runtimes (Python, Node.js, Java, .NET, Go, Ruby) so that you can easily add your code and dependencies. We also have base images for custom runtimes based on Amazon Linux that you can extend to include your own runtime implementing the Lambda Runtime API.
You can deploy your own arbitrary base images to Lambda, for example images based on Alpine or Debian Linux. To work with Lambda, these images must implement the Lambda Runtime API. To make it easier to build your own base images, we are releasing Lambda Runtime Interface Clients implementing the Runtime API for all supported runtimes. These implementations are available via native package managers, so that you can easily pick them up in your images, and are being shared with the community using an open source license.
We are also releasing as open source a Lambda Runtime Interface Emulator that enables you to perform local testing of the container image and check that it will run when deployed to Lambda. The Lambda Runtime Interface Emulator is included in all AWS-provided base images and can be used with arbitrary images as well.
Your container images can also use the Lambda Extensions API to integrate monitoring, security and other tools with the Lambda execution environment.
To deploy a container image, you select one from an Amazon Elastic Container Registry repository. Let’s see how this works in practice with a couple of examples, first using an AWS-provided image for Node.js, and then building a custom image for Python.
Using the AWS-Provided Base Image for Node.js
Here’s the code (app.js) for a simple Node.js Lambda function generating a PDF file using the PDFKit module. Each time it is invoked, it creates a new mail containing random data generated by the faker.js module. The output of the function uses the Amazon API Gateway response syntax to return the PDF file.
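The full app.js depends on the PDFKit and faker.js npm modules, but the API Gateway response shape it returns can be sketched on its own. Here, buildResponse is a hypothetical helper name used for illustration:

```javascript
// Hypothetical sketch of the API Gateway proxy response the function returns:
// the PDF bytes must be base64-encoded and flagged as binary content.
function buildResponse(pdfBuffer) {
  return {
    statusCode: 200,
    headers: {
      "Content-Type": "application/pdf",
      "Content-Disposition": "attachment; filename=mail.pdf",
    },
    body: pdfBuffer.toString("base64"),
    isBase64Encoded: true,
  };
}

// In app.js, the handler would generate the PDF with PDFKit, fill it with
// random data from faker.js, collect the output into a Buffer, and then
// return buildResponse(pdfBuffer).
```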
I use npm to initialize the package and add the three dependencies I need in the package.json file. In this way, I also create the package-lock.json file, which I add to the container image to have a more predictable result.
Now, I create a Dockerfile to build the container image for my Lambda function, starting from the AWS-provided base image for Node.js.
The Dockerfile adds the source code (app.js) and the files describing the package and the dependencies (package.json and package-lock.json) to the base image. Then, I run npm to install the dependencies. I set the CMD to the function handler, but this could also be done later as a parameter override when configuring the Lambda function.
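A Dockerfile along these lines would implement the steps just described (the base image path and tag are assumptions for illustration):

```dockerfile
# AWS-provided base image for Node.js (tag assumed for illustration)
FROM public.ecr.aws/lambda/nodejs:12

# Copy the function code and the files describing package and dependencies
COPY app.js package.json package-lock.json ./

# Install the dependencies listed in package.json
RUN npm install

# Set the function handler (can also be overridden later in the
# Lambda function configuration)
CMD [ "app.handler" ]
```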
I use the Docker CLI to build the random-letter container image locally:
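Assuming the Dockerfile is in the current directory, the build command could be as simple as:

```shell
docker build -t random-letter .
```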
To check if this is working, I start the container image locally using the Lambda Runtime Interface Emulator:
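Since the AWS-provided base images already bundle the emulator, starting the container locally only requires mapping the emulator’s port 8080 to a local port, for example:

```shell
docker run -p 9000:8080 random-letter:latest
```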
Now, I test a function invocation with cURL. Here, I am passing an empty JSON payload.
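With the port mapping above, the emulator exposes its local invocation endpoint on port 9000, so the test invocation could look like:

```shell
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
```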
If there are errors, I can fix them locally. When it works, I move to the next step.
To upload the container image, I create a new ECR repository in my account and tag the local image to push it to ECR. To help me identify software vulnerabilities in my container images, I enable ECR image scanning.
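The commands might look like the following, where the account ID and Region in the repository URI are placeholders:

```shell
# Create the ECR repository with image scanning enabled on push
aws ecr create-repository --repository-name random-letter \
    --image-scanning-configuration scanOnPush=true

# Authenticate Docker to the registry, then tag and push the local image
# (123412341234 and us-east-1 are placeholders)
aws ecr get-login-password --region us-east-1 | docker login --username AWS \
    --password-stdin 123412341234.dkr.ecr.us-east-1.amazonaws.com
docker tag random-letter:latest 123412341234.dkr.ecr.us-east-1.amazonaws.com/random-letter:latest
docker push 123412341234.dkr.ecr.us-east-1.amazonaws.com/random-letter:latest
```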
In the Lambda console, I click on Create function. I select Container image, give the function a name, and then Browse images to look for the right image in my ECR repositories.
After I select the repository, I use the latest image I uploaded. When I select the image, Lambda translates the tag to the underlying image digest (shown to the right of the tag in the image below). You can see the digests of your images locally with the docker images --digests command. In this way, the function keeps using the same image even if the latest tag is later reassigned to a newer one, so you are protected from unintentional deployments. You can update the image the function uses in the function code. Updating the function configuration has no impact on the image used, even if the tag was reassigned to another image in the meantime.
Optionally, I can override some of the container image values. I am not doing this now, but in this way I can create images that can be used for different functions, for example by overriding the function handler in the CMD value.
I leave all other options at their default values and select Create function.
When creating or updating the code of a function, the Lambda platform optimizes new and updated container images to prepare them to receive invocations. This optimization takes a few seconds or minutes, depending on the size of the image. After that, the function is ready to be invoked. I test the function in the console.
It’s working! Now let’s add API Gateway as a trigger. I select Add Trigger and add API Gateway using an HTTP API. For simplicity, I leave the authentication of the API open.
Now, I click on the API endpoint a few times and download a few random mails.
It works as expected! Here are a few of the PDF files that are generated with random data from the faker.js module.
Building a Custom Image for Python
Sometimes you need to use a custom container image, for example to follow your company guidelines or to use a runtime version that we don’t support.
In this case, I want to build an image that uses Python 3.9. The code (app.py) of my function is very simple: I just want to say hello and report the version of Python being used.
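A minimal sketch of what app.py could look like (the exact wording of the greeting is an assumption):

```python
import sys


def handler(event, context):
    # Return a greeting that reports the Python version in use.
    return "Hello from AWS Lambda using Python" + sys.version + "!"
```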
As I mentioned before, we are sharing open source implementations of the Lambda Runtime Interface Clients (which implement the Runtime API) for all the supported runtimes. In this case, I start with a Python image based on Alpine Linux. Then, I add the Lambda Runtime Interface Client for Python to the image. Here’s the Dockerfile I am using:
The Dockerfile this time is more elaborate, building the final image in three stages, following the Docker best practice of multi-stage builds. You can use this three-stage approach to build your own custom images:
- Stage 1 builds the base image with the runtime, Python 3.9 in this case, plus GCC, which is used to compile and link dependencies in stage 2.
- Stage 2 installs the Lambda Runtime Interface Client and builds the function and its dependencies.
- Stage 3 creates the final image by adding the output from stage 2 to the base image built in stage 1. Here I am also adding the Lambda Runtime Interface Emulator, but this is optional (see below).
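A sketch of such a three-stage Dockerfile follows; the runtime and distribution versions, the function directory, and the list of build packages are assumptions, and the emulator binary is assumed to have been downloaded into the build context:

```dockerfile
# Global build arguments (values assumed for illustration)
ARG FUNCTION_DIR="/home/app/"
ARG RUNTIME_VERSION="3.9"
ARG DISTRO_VERSION="3.12"

# Stage 1 - base image with the Python runtime on Alpine Linux,
# plus the C++ standard library needed by the Runtime Interface Client
FROM python:${RUNTIME_VERSION}-alpine${DISTRO_VERSION} AS python-alpine
RUN apk add --no-cache libstdc++

# Stage 2 - build the function and its dependencies, including the
# Lambda Runtime Interface Client (awslambdaric), using GCC and build tools
FROM python-alpine AS build-image
RUN apk add --no-cache build-base libtool autoconf automake make cmake
ARG FUNCTION_DIR
RUN mkdir -p ${FUNCTION_DIR}
COPY app.py ${FUNCTION_DIR}
RUN python -m pip install awslambdaric --target ${FUNCTION_DIR}

# Stage 3 - final image: stage 2 output on top of the stage 1 base image,
# with the (optional) Lambda Runtime Interface Emulator and entry script
FROM python-alpine
ARG FUNCTION_DIR
WORKDIR ${FUNCTION_DIR}
COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}
# Assumes aws-lambda-rie was downloaded into the build context beforehand
COPY aws-lambda-rie /usr/local/bin/aws-lambda-rie
COPY entry.sh /
ENTRYPOINT [ "/entry.sh" ]
CMD [ "app.handler" ]
```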
I create the entry.sh script below to use as the ENTRYPOINT. It executes the Lambda Runtime Interface Client for Python. When executed locally, the Runtime Interface Client is wrapped by the Lambda Runtime Interface Emulator.
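The script could look like the following; the binary paths are assumptions that must match where your own image installs the emulator and the Python runtime:

```sh
#!/bin/sh
# When AWS_LAMBDA_RUNTIME_API is not set, we are running locally:
# wrap the Runtime Interface Client with the emulator.
if [ -z "${AWS_LAMBDA_RUNTIME_API}" ]; then
    exec /usr/local/bin/aws-lambda-rie /usr/local/bin/python -m awslambdaric "$@"
else
    exec /usr/local/bin/python -m awslambdaric "$@"
fi
```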
Now, I can use the Lambda Runtime Interface Emulator to check locally that the function and the container image are working correctly:
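For example, assuming the image is tagged lambda-python39-alpine (a hypothetical name):

```shell
docker build -t lambda-python39-alpine .
docker run -p 9000:8080 lambda-python39-alpine
```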
Not Including the Lambda Runtime Interface Emulator in the Container Image
Adding the Lambda Runtime Interface Emulator to a custom container image is optional. If I don’t include it, I can test locally by installing the Lambda Runtime Interface Emulator on my local machine, following these steps:
- In Stage 3 of the Dockerfile, I remove the commands copying the Lambda Runtime Interface Emulator (aws-lambda-rie) and the entry.sh script. I don’t need the entry.sh script in this case.
- I use this ENTRYPOINT to start the Lambda Runtime Interface Client by default: ENTRYPOINT [ "/usr/local/bin/python", "-m", "awslambdaric" ]
- I run these commands to install the Lambda Runtime Interface Emulator on my local machine:
When the Lambda Runtime Interface Emulator is installed on my local machine, I can mount it when starting the container, overriding the entry point to use the emulator:
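The commands might look like the following; the install directory ~/.aws-lambda-rie and the image name are assumptions, while the download URL is the emulator’s GitHub releases page:

```shell
# Download the emulator to an assumed local directory and make it executable
mkdir -p ~/.aws-lambda-rie
curl -Lo ~/.aws-lambda-rie/aws-lambda-rie \
    https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie
chmod +x ~/.aws-lambda-rie/aws-lambda-rie

# Mount the emulator into the container and use it as the entry point,
# passing the Runtime Interface Client command and the handler as arguments
# (image name lambda-python39-alpine is a placeholder)
docker run -v ~/.aws-lambda-rie:/aws-lambda -p 9000:8080 \
    --entrypoint /aws-lambda/aws-lambda-rie lambda-python39-alpine \
    /usr/local/bin/python -m awslambdaric app.handler
```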
Testing the Custom Image for Python
Either way, when the container is running locally, I can test a function invocation with cURL:
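Using the emulator’s local invocation endpoint again:

```shell
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
```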
The output is what I am expecting!
"Hello from AWS Lambda using Python3.9.0 (default, Oct 22 2020, 05:03:39) n[GCC 9.3.0]!"
I push the image to ECR and create the function as before. Here’s my test in the console:
My custom container image based on Alpine is running Python 3.9 on Lambda!
You can use container images to deploy your Lambda functions today in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Singapore), Europe (Ireland), Europe (Frankfurt), South America (São Paulo). We are working to add support in more Regions soon. The container image support is offered in addition to ZIP archives and we will continue to support the ZIP packaging format.
You can use container image support in AWS Lambda with the console, AWS Command Line Interface (CLI), AWS SDKs, AWS Serverless Application Model, and solutions from AWS Partners, including Aqua Security, Datadog, Epsagon, HashiCorp Terraform, Honeycomb, Lumigo, Pulumi, Stackery, Sumo Logic, and Thundra.
This new capability opens up new scenarios, simplifies the integration with your development pipeline, and makes it easier to use custom images and your favorite programming platforms to build serverless applications.