Nathan Peck
Senior Developer Advocate for Container Services at Amazon Web Services
Aug 6, 2022 · 26 min read

Your first Node.js container on AWS

You are a Node.js developer. You have written Node code and can run your application on your local developer machine, but you aren’t really that familiar with containers and container orchestration. This guide will walk you through the concepts you should know about building and deploying your first containerized Node.js application.

Why use a container for your Node.js application?

Node.js has a regular release cycle and new versions of the runtime come out all the time, often with new features. Additionally, packages in the NPM ecosystem are updated on a regular basis. You have probably run into one or all of the following scenarios:

  • Your coworker complains that the app crashes on startup. You realize that they forgot to run npm install to get the latest packages, so they were attempting to run the application against a different set of NPM packages than you were.
  • The app crashes in production. You realize that the production server has an older version of the Node.js runtime on it than you had locally on your developer machine.
  • You want to update a Node.js service to a new version of Node, but you want to keep another service on an older version for now. You have to set up .nvmrc files and a complicated nvm use && npm start hack so that different apps can run on different versions of Node at the same time on the same machine.
  • You don’t want to vendor packages off of NPM into your own git repo, but you are worried that running npm install might fail when you attempt to deploy your application. You want to remove that deploy time dependency on NPM being available, so you are considering running your own self hosted NPM registry mirror.

All these problems can be avoided when you use containers for packaging and distributing your application. A container build produces an artifact called a “container image”. This artifact holds a specific version of the Node.js runtime, your node_modules folder, and your application code. The container image has everything you need to run your application, so you can ship it to any machine and it can be unpacked there to reliably run on that machine.

Install local tooling for working with containers

For my own development environment I installed Podman for building and running container images, and Docker Compose for launching and managing the lifecycle of those containers.

On Mac OS X I use Homebrew to manage software packages. With Homebrew I can install and set up the container tools by running:

brew install podman
brew install docker-compose
brew install awscli
brew install docker-credential-helper-ecr
sudo /usr/local/Cellar/podman/4.1.0/bin/podman-mac-helper install
ln -s /usr/local/bin/podman /usr/local/bin/docker
podman machine init
podman machine start

These commands install the tools, configure Podman to use the same socket that Docker would normally set up on Mac OS X, and start the QEMU based VM that will be used for running Linux containers on Mac.

To verify that everything is working properly you can run the following command and you should see the output OK:

curl -H "Content-Type: application/json" --unix-socket /var/run/docker.sock http://localhost/_ping


Your first Dockerfile

The first step to turning your Node.js application into a container image is to create a Dockerfile. This file defines the commands for how to collect all the dependencies for the application into the container image.

This is the Dockerfile that I use for all my Node.js applications:

# Build stage: the full Node.js image from the official mirror on
# AWS ECR Public (the :16 tag here is illustrative; pick your version)
FROM public.ecr.aws/docker/library/node:16 AS build
WORKDIR /srv
ADD package.json .
RUN npm install

# Production stage: the slim Node.js image with just the runtime
FROM public.ecr.aws/docker/library/node:16-slim
RUN apt-get update && apt-get install -y \
  curl \
  --no-install-recommends \
  && rm -rf /var/lib/apt/lists/* && apt-get clean
COPY --from=build /srv .
ADD . .
EXPOSE 3000
CMD ["node", "index.js"]

There are a few things going on here. First you will see the FROM keyword. This keyword says that we are starting from an existing base image. In this case I am using two different prebuilt images as starting points:

  • public.ecr.aws/docker/library/node - This is a full developer environment Node.js image that includes NPM and a compiler for building any native code bindings in modules.
  • public.ecr.aws/docker/library/node (the -slim variant) - This is a Node.js image that is stripped down to just the Node runtime. I use this for shipping to production because I don’t need the full NPM package manager and compiler tooling in production.

These two container images are built and maintained by the Node.js Docker team, so they are regularly updated with the latest Node.js patches. I am pulling them down off the official mirror on AWS Elastic Container Registry, and using them locally on my machine.

The Dockerfile has two stages. Each stage starts from a base image, and then supplies commands to run on top of that base image.

The first stage grabs the package.json file from my developer machine, and adds it into the full Node.js base image. Then it runs the npm install command in the directory /srv. The result of this stage is a node_modules folder at /srv/node_modules inside of an image named build.

The second stage starts from the slimmed down version of the Node distribution. This is going to be a production image so it installs any security updates to the operating system and then cleans up after itself. Then it copies in the node_modules folder from the first stage, and copies in my application code off of my developer machine. Finally it defines a port for the application and the command to run when I am ready to start this image.

There is one more file we need: .dockerignore


This file uses the same syntax as a .gitignore file. It allows you to specify which paths you do not want to copy off of your development machine into the container.

Specifically for Node.js workloads we are adding the node_modules folder to the .dockerignore because we do not want to copy any NPM modules off of your dev machine into the Docker image. Instead we will always build and install the modules using NPM inside of the container itself. This is very important because there may be an architecture mismatch between your dev machine and your production environment. You don’t want to accidentally have arm64 modules on your Apple M1 based MacBook and then copy them into a Docker image targeted at an amd64 compute environment.
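A minimal .dockerignore for this setup only needs the node_modules entry; the other entries shown here are common additions I am assuming, not requirements:

```
node_modules
npm-debug.log
.git
```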

Using the Dockerfile in your development process

Now that you have a Dockerfile it is time to start using it. The main advantage of using a container image is that you can use your Dockerfile as you build and test your application, and get more reliable results.

Normally if you are developing a Node.js application you would be running commands like npm start and npm test as you develop. With a container based process we will no longer be using the NPM command line directly during development. NPM will only be used during the build process, within the Dockerfile itself, running inside the build stage of the container.

Instead of using NPM as the entrypoint to interacting with your code, we will use Docker commands to launch and interact with the application. There are two pieces of local tooling that make this easier:

  • Docker Compose - This makes it easy to define multiple containers to run locally, such as your application, and any database that it might depend on, such as Redis, MongoDB, or PostgreSQL.
  • Make - This classic utility lets you build easy to use command line shortcuts for more complicated developer workflows.

Docker Compose

Here is an example docker-compose.yml file that defines three containers:

  • A Redis key value store that the application will use
  • The application container itself
  • A test suite container that runs integration tests against the application
services:
  # Launch the Redis used for syncing messages between copies of the client app
  redis:
    image: redis
    ports:
      - 6379:6379

  # The actual application
  app:
    depends_on:
      - redis
    build: ./services/client
    environment:
      REDIS_ENDPOINT: redis
    ports:
      - 3000:3000

  # The test suite
  test:
    depends_on:
      - app
    build: ./services/test-suite
    environment:
      APP_URL: http://app:3000

When developing with containers the goal is to produce a minimal container image to ship to production, so I don’t want to bake my tests into the production image and ship the tests to production.

Instead, I code my integration tests as their own separate container which talks to the endpoint for my application container. The application container starts up in the background, and then the test container starts and runs the integration tests against the application container.

This approach has two key benefits:

  • Because the integration tests run against the application container it means I can ship the application container that just passed tests to production, exactly as it is and be confident that it will work as intended in production. If I were to run the tests against the Node.js code directly, and then build the container for the code later on, I couldn’t have 100% confidence that what I shipped to production was the exact same thing that I just tested.
  • Because there is a container boundary between the application container and the test container it limits my test’s ability to “cheat”. If the test suite is too close to the underlying code it results in developers taking shortcuts using tooling like Sinon.js to monkey patch code during the tests to simulate various conditions. This is not ideal for integration tests. Instead integration tests should only manipulate the inputs and outputs of the container in order to trigger conditions that they want to test.


Here is a Makefile that I like to use with my containerized Node applications:

up:
	docker-compose up -d

down:
	docker-compose down

build:
	docker-compose build app
	docker-compose up --no-deps -d app

test:
	docker-compose build app
	docker-compose build test
	docker-compose up --no-deps -d app
	docker-compose run --no-deps test

Now with this Makefile I can use the following three commands to interact with my containerized application:

  • make up - Bring up the entire stack in the background: launch the Redis database, build and launch the Node container, and then run the tests
  • make down - Tear down the stack. This stops all the containers that are running in the background.
  • make build - Just rebuild and restart the application container in the background
  • make test - Rebuild and restart the application in the background and then run the integration tests against the container

Putting the pieces together and building our first container

Let’s see how these commands work. After bringing up the entire stack with make up, I can rebuild the application container with make build:

$ time make build
docker-compose build app
Sending build context to Docker daemon     704B
[1/2] STEP 1/4: FROM AS build
[1/2] STEP 2/4: WORKDIR /srv
--> Using cache 27820f6c61d7b60cdeef4d0a3cb0d8852f1f20b2c308065c202b033b785ec745
--> 27820f6c61d
[1/2] STEP 3/4: ADD package.json .
--> Using cache c74f17b37107706f652558dcf7800d1789dc78d5f448445d397c98a2cfbcf6ab
--> c74f17b3710
[1/2] STEP 4/4: RUN npm install
--> Using cache 400d3d06985055123e28b87276c548efe0c121a01156ad5984aa805d69d349a0
--> 400d3d06985
[2/2] STEP 1/6: FROM
[2/2] STEP 2/6: RUN apt-get update && apt-get install -y   curl   --no-install-recommends   && rm -rf /var/lib/apt/lists/* && apt-get clean
--> Using cache 1e911e326755c59297f6343f588f1600497697216480f72eae0022326e3b1088
--> 1e911e32675
[2/2] STEP 3/6: COPY --from=build /srv .
--> Using cache d3d776bf0b7ef3a86f42dae663dcd0028d2084f74a033abf9fc578bf29b5bb0a
--> d3d776bf0b7
[2/2] STEP 4/6: ADD . .
--> Using cache 9551174d61b0b8a5c565aa4d60e82d50395b81cfe842cfad53dcc21726d6b9ea
--> 9551174d61b
[2/2] STEP 5/6: EXPOSE 3000
--> Using cache 364209a27e89760535911987b6b9f93e02db4dc41df1b172f1a2bbca790cad69
--> 364209a27e8
[2/2] STEP 6/6: CMD ["node", "index.js"]
--> Using cache 001ce3bea8ed509fc5e2a360bfd36549f68e0d5e2d4d36282f2c0006b9db5f65
[2/2] COMMIT
--> 001ce3bea8e
Successfully tagged
Successfully built 001ce3bea8ed
Successfully tagged code_app
docker-compose up --no-deps -d app
[+] Running 1/0
 ⠿ Container code-app-1  Running                                              0.0s
make build  0.05s user 0.03s system 3% cpu 2.188 total

You will see Using cache throughout the output because nothing has been changed in the code yet. The container image builder is smart enough to reuse existing container image layers that were previously built. But watch what happens if I change something in my code:

$ time make build
docker-compose build app
Sending build context to Docker daemon     705B
[1/2] STEP 1/4: FROM AS build
[1/2] STEP 2/4: WORKDIR /srv
--> Using cache 27820f6c61d7b60cdeef4d0a3cb0d8852f1f20b2c308065c202b033b785ec745
--> 27820f6c61d
[1/2] STEP 3/4: ADD package.json .
--> Using cache c74f17b37107706f652558dcf7800d1789dc78d5f448445d397c98a2cfbcf6ab
--> c74f17b3710
[1/2] STEP 4/4: RUN npm install
--> Using cache 400d3d06985055123e28b87276c548efe0c121a01156ad5984aa805d69d349a0
--> 400d3d06985
[2/2] STEP 1/6: FROM
[2/2] STEP 2/6: RUN apt-get update && apt-get install -y   curl   --no-install-recommends   && rm -rf /var/lib/apt/lists/* && apt-get clean
--> Using cache 1e911e326755c59297f6343f588f1600497697216480f72eae0022326e3b1088
--> 1e911e32675
[2/2] STEP 3/6: COPY --from=build /srv .
--> Using cache d3d776bf0b7ef3a86f42dae663dcd0028d2084f74a033abf9fc578bf29b5bb0a
--> d3d776bf0b7
[2/2] STEP 4/6: ADD . .
--> 9c4c91e1b62
[2/2] STEP 5/6: EXPOSE 3000
--> 6694e152fd1
[2/2] STEP 6/6: CMD ["node", "index.js"]
[2/2] COMMIT
--> e8cb9948780
Successfully tagged
Successfully built e8cb9948780b
Successfully tagged code_app
docker-compose up --no-deps -d app
[+] Running 1/1
 ⠿ Container code-app-1  Started                                              0.6s
make build  0.05s user 0.03s system 2% cpu 2.885 total

This time STEP 5/6 and STEP 6/6 do not use the cache. These steps are rerun to add my code change to the image, and the resulting image has my code changes inside of it.

What if I change the package.json file? This time around it stops using the cache at STEP 3/4 of the build container, and runs the npm install again.

$ time make build
docker-compose build app
Sending build context to Docker daemon     728B
[1/2] STEP 1/4: FROM AS build
[1/2] STEP 2/4: WORKDIR /srv
--> Using cache 27820f6c61d7b60cdeef4d0a3cb0d8852f1f20b2c308065c202b033b785ec745
--> 27820f6c61d
[1/2] STEP 3/4: ADD package.json .
--> fcd97263f31
[1/2] STEP 4/4: RUN npm install

added 57 packages, and audited 58 packages in 3s

7 packages are looking for funding
  run `npm fund` for details
npm notice
npm notice New patch version of npm available! 8.12.1 -> 8.12.2
npm notice Changelog: 
npm notice Run `npm install -g npm@8.12.2` to update!
npm notice

found 0 vulnerabilities
--> e796181925c
[2/2] STEP 1/6: FROM
[2/2] STEP 2/6: RUN apt-get update && apt-get install -y   curl   --no-install-recommends   && rm -rf /var/lib/apt/lists/* && apt-get clean
--> Using cache 1e911e326755c59297f6343f588f1600497697216480f72eae0022326e3b1088
--> 1e911e32675
[2/2] STEP 3/6: COPY --from=build /srv .
--> 228f9d9501c
[2/2] STEP 4/6: ADD . .
--> e35edabbae0
[2/2] STEP 5/6: EXPOSE 3000
--> d3aa2b3f0dd
[2/2] STEP 6/6: CMD ["node", "index.js"]
[2/2] COMMIT
--> c0cadf93289
Successfully tagged
Successfully built c0cadf932897
Successfully tagged code_app
docker-compose up --no-deps -d app
[+] Running 1/1
 ⠿ Container code-app-1  Started                                              0.7s
make build  0.05s user 0.03s system 0% cpu 8.208 total

Because it took some time to run npm install, the build takes a total of 8 seconds instead of 2 seconds; however, the build process automatically ensured that the package that I added to package.json was installed. Similarly I can change the FROM statement at the start of the Dockerfile to request a different version of Node, and this would cause the build to dynamically download that version of Node and rerun all the steps to apply my application on top of that Node version.

Hopefully, this mini dive into the power of Dockerfile based builds helps you understand exactly why it is so valuable to make the container an integral part of your development process.

By defining the build as a Dockerfile you get reproducible, programmatic control over every aspect of the application environment. The Node.js runtime version and the package versions are no longer a separate thing that must be independently managed.

Node.js specific application changes

As you adopt containers it is important to consider a few Node.js specific patterns and how to adjust them for containers.

Managing Node processes

Node.js application code is effectively single threaded, and as a result it can’t make good use of more than one CPU core at a time. You may be using the Node cluster module to launch multiple child processes in order to make use of all the CPU cores on a server. With a containerized application this is not recommended.

Instead you should have a single Node.js process in each container. Let the container orchestrator, such as Amazon Elastic Container Service, manage the number of containers on each host.

This also goes for packages such as pm2, which can be used as a process manager for your Node.js applications. Instead of having PM2 launch the processes and restart them if they crash, just let the container orchestrator do that. The container orchestrator can do a better job because it will relaunch a fresh copy of the container based off the original container image, instead of just trying over and over to restart the application process inside of the same container.


Logging

If you are using a Node.js logging library like Winston you may have chosen one of its log transports, such as logging to a file. While this makes sense on a traditional VM based setup, it is not ideal in a container. The filesystem in a container is designed to be ephemeral, so you would not want to write logs to “disk” in the container. Instead you should change your application code to just log to stdout and stderr. Then the container’s logging driver can take care of the logs for you. This allows you to decouple the log delivery from your application code. By switching out the container logging driver you can easily switch between saving logs to disk, saving them into Amazon CloudWatch, or even directly into an S3 bucket.
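As a sketch of what stdout logging can look like without any library (the function and field names here are illustrative, not from the original code), a tiny JSON-lines logger is often all you need:

```javascript
// Write structured JSON log lines to stdout/stderr and let the
// container's logging driver handle delivery and storage.
function log(level, message, fields = {}) {
  const entry = {
    time: new Date().toISOString(),
    level,
    message,
    ...fields,
  };
  // stderr for errors, stdout for everything else
  const stream = level === 'error' ? process.stderr : process.stdout;
  stream.write(JSON.stringify(entry) + '\n');
  return entry;
}

log('info', 'server started', { port: 3000 });
log('error', 'redis connection failed', { endpoint: 'redis:6379' });
```

Winston’s Console transport achieves the same effect if you would rather keep the library.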

Exit signal handling

While VM hosted processes are often designed to stay up for days at a time, containerized processes tend to have shorter lifespans. There are two key reasons for this. First, containers enable you to run a larger number of smaller containers, and dynamically scale up and down more frequently. Second, containers give teams the confidence to ship changes to production more frequently.

Specific to Node.js it becomes very important to handle exit signals properly. By default many Node.js web servers ignore exit signals. You need code similar to this in your application:

const server = app.listen(port)

process.on('SIGTERM', () => {
  debug('SIGTERM signal received: closing HTTP server')
  server.close(() => {
    debug('HTTP server closed')
  })
})
Check the full advanced docs for Express, or for your Node.js web framework of choice, for more examples of how to gracefully handle exit signals and shut down your Node process.

Deploying your Node.js container on AWS

Getting your Node.js container up and running on your local development machine is a great first step. Next you probably want to run it on a cloud server for customers or users to access. A container orchestrator is a key component of turning your application into a reality. I’ve written more about this in the article: Why should I use an orchestrator like Kubernetes, Amazon ECS, or Hashicorp Nomad?.

There are a variety of different options based on what pricing model is ideal for your application, and how complex it is.

Three key options that I recommend for Node.js applications:

  • Amazon Lightsail Containers - If you don’t have a super complex application, and just want an easy way to run a small Node.js application then Lightsail offers predictable, low cost container hosting in a simplified environment.
  • AWS App Runner - Ideal for mid range applications that scale up and down, but also have some periods of low activity. App Runner automatically scales out to multiple copies of your application based on the number of requests that your application receives, and it helps you save money by automatically reducing the price when there is low activity.
  • Amazon Elastic Container Service (with AWS Fargate) - Ideal for very large service deployments and advanced users that want to customize every aspect of how their containers run. Elastic Container Service not only manages the lifecycle and scaling of your application, but also helps you connect your containerized application to many other AWS services.

Another important piece of tooling is:

  • AWS Copilot CLI - This powerful command line tool helps automate building and releasing a container to production. It integrates with both AWS App Runner and AWS Fargate, so you can try out different compute options for running your containers.

For deploying my Node.js application I’m going to show how to use AWS App Runner. The thing I like about AWS App Runner is that it is a nice serverless midway point between AWS Lambda (which charges per request) and AWS Fargate (which charges a flat rate for running a task).

Authenticating with AWS

First you will want to install the AWS CLI and make sure you are logged into your AWS account on the command line. If you are, you should be able to run the following command:

$ aws sts get-caller-identity
{
    "UserId": "REDACTED",
    "Account": "REDACTED",
    "Arn": "arn:aws:iam::REDACTED:user/REDACTED"
}

Authenticating with Elastic Container Registry

Next we need to authenticate with Elastic Container Registry (ECR). Think of ECR as a package registry like NPM, but instead of storing a single package, it is storing an entire packaged up copy of your application as a container image. ECR has both public and private versions. We will be using the private version to ensure that you keep your application code secure.

Create a new repository:

$ aws ecr create-repository --repository-name nodejs-app-demo
{
    "repository": {
        "repositoryArn": "arn:aws:ecr:us-east-2:209640446841:repository/nodejs-app-demo",
        "registryId": "209640446841",
        "repositoryName": "nodejs-app-demo",
        "repositoryUri": "",
        "createdAt": "2022-07-01T14:58:25-04:00",
        "imageTagMutability": "MUTABLE",
        "imageScanningConfiguration": {
            "scanOnPush": false
        },
        "encryptionConfiguration": {
            "encryptionType": "AES256"
        }
    }
}
Now we need to authenticate with that repository:

$ aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin

Login Succeeded!

Tag and push the container image to the registry

You can list the container images that you built previously:

$ docker ps
CONTAINER ID  IMAGE                                               COMMAND        CREATED         STATUS          PORTS                   NAMES
bcb87f1a5554                                                      redis-server   12 seconds ago  Up 12 seconds   0.0.0.0:6379->6379/tcp  nodejs-apprunner-demo-redis-1
8e7512b4b877                                                      node index.js  12 seconds ago  Up 12 seconds   0.0.0.0:3000->3000/tcp  nodejs-apprunner-demo-app-1

Under the IMAGE column you can see the current name of the image that was built.

First we need to retag this image with the name of the registry we want to upload it to:

docker tag

Now we can verify that the image exists:

$ docker image list
REPOSITORY                                                    TAG         IMAGE ID      CREATED        SIZE
                                                              latest      46a77eb6ddf6  3 minutes ago  262 MB
                                                              latest      46a77eb6ddf6  3 minutes ago  262 MB

We can see two entries because the same image has been tagged with two names.

Last but not least we upload the image to Amazon ECR:

docker push

Think of the image push like git push. But instead of saving a snapshot of your code into a git repository it is capturing a snapshot of your entire application container image into Amazon ECR.

Launching the application container in AWS App Runner

For deploying the application, navigate to the AWS App Runner console and click “Create service”. The first step is to locate your application container and set up App Runner’s ability to get that container and deploy it.



  1. Choose registry type of “Container Registry”
  2. Choose provider “Amazon ECR”
  3. Select “Browse” and then use the drop down to select your image and the tag that we pushed to earlier: latest

Deployment settings:

  1. Select deployment trigger “Automatic”
  2. Select “Create new service role”

At this point AWS App Runner is all set up to locate and deploy your container image, so click “Next”.

The next step is to configure your service settings.


Service settings:

  1. Enter a service name
  2. Choose how much CPU and memory you want for your application. I usually leave it at the default.
  3. For “Port” you need to choose which port your application expects to receive traffic on. For a Node.js Express application the default port is 3000, so you would configure the AWS App Runner port to 3000 as well.

There are some more settings you can configure on this page, but nothing necessary for the application to function, so go ahead and click “Next”.

Review and create:

This is your last chance to review the settings you entered.


If everything looks good scroll down and click “Create & deploy”. The initial deploy will take a few minutes while AWS App Runner sets up all the infrastructure. You will see a log of the actions that were taken, leading up to your deployment becoming available online.


Deploying an update to the service

For day to day usage you probably want to automate the build and release of a new version of the application code.

All this takes is a new section in the Makefile:

deploy:
	docker build ./app -t <your ECR repository URI>
	docker push <your ECR repository URI>
	aws apprunner start-deployment --service-arn <copy and paste service ARN from the AWS App Runner console>

Now you can rebuild your service and deploy an update from the command line with make deploy. AWS App Runner will take over, automatically pulling down the latest copy of your application container and doing a zero downtime rolling update to your application.

Next Steps

At this point you have the basics set up:

  • You can containerize your application
  • You can build and push your container to a private registry on AWS
  • You can launch your container as a scalable, hosted web service in AWS App Runner

If you’d like to dig deeper into AWS App Runner and containers on AWS, the official AWS App Runner documentation is a good place to start.

If you have questions or comments please message me on Twitter.