Moving from Monolith to Microservices with Amazon ECS
I developed a tutorial on how to containerize a monolithic application, break it up into various microservices, and deploy them using Amazon Elastic Container Service:
You can find the full tutorial code and walkthrough as an official AWS project.
You can also find my sample code for this project on Github.
Hey everyone! My name is Nathan Peck, and I’m a developer advocate for EC2 Container Service.
In this video I will show how you can use EC2 Container Service to break a monolithic application into microservices. We will start out by running a monolith in ECS, then we will deploy new microservices side by side with the monolith, and divert traffic over to our new microservices with zero downtime. To start, you might be wondering: what is a monolith versus microservices, and why might we want to migrate from one to the other?
A monolith is an application deployed as a single unit that handles multiple types of business capability, all tightly coupled. Microservices, on the other hand, take each core business capability and deploy it as its own separate unit that performs that single function.
Monolithic and microservice designs have benefits and drawbacks at different stages of a software product's lifecycle. In general it is often easier to develop a monolith for a brand new project that isn't fully fleshed out, but as the software becomes more full featured, microservices start to shine as a way to organize business logic so that the system doesn't fall apart under its own weight.
Many companies go through this process of recognizing that their core code base is getting excessively complex, and they are having issues adding new features or extending existing features. So they realize that they need to split some of the functionality out into its own service. But this needs to be handled very carefully, especially if you have customers using your application and want to do it without interrupting them.
So to demonstrate how to do this type of migration, let's look at an example monolithic application: a small REST API for a forum.
As you can see in this code, the application serves three different RESTful resource types: users, posts, and threads. This is a very typical setup for a monolith: one codebase handles all three types of requests for all three features. So first let's verify that this application works if I run it on my local machine.
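For reference, here is a minimal sketch of the kind of dispatch a monolith like this does. The names and sample data are hypothetical, not the actual tutorial code, but the shape is the same: one codebase, one routing table, all three resource types.

```javascript
// Hypothetical in-memory data for the three resource types the monolith owns.
const db = {
  users:   [{ id: 1, username: 'alice' }],
  posts:   [{ id: 1, thread: 1, text: 'Hello world' }],
  threads: [{ id: 1, title: 'Welcome' }],
};

// Map a request path such as "/api/users" to the matching collection.
// All three features live behind the same dispatch function.
function route(path) {
  const match = path.match(/^\/api\/(users|posts|threads)$/);
  if (!match) return { status: 404, body: null };
  return { status: 200, body: db[match[1]] };
}
```

The point to notice is that every feature shares this one entry point, which is exactly the coupling we'll break apart later.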
We see a message that says that the server is ready. Now let’s make a few requests to make sure the server responds.
curl localhost:3000/api/users
curl localhost:3000/api/posts
Alright, so it looks like this app server is functional. But right now it just runs locally. We need to package it up for deployment, and that's where Docker comes in. Here is a Dockerfile that I have previously prepared for this application. You can see that it is fairly simple. It starts from a base image that contains my specified version of Node.js. Then it has a few commands to copy my application code into the container and install external dependencies. I can use this Dockerfile to construct a Docker container image by using the "docker build" command:
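A Dockerfile along those lines might look like this. The base image tag, file names, and entry point here are assumptions for illustration, not necessarily the exact ones in the tutorial repo:

```dockerfile
# Start from a base image containing the pinned Node.js version (hypothetical tag)
FROM node:10
WORKDIR /srv

# Install external dependencies first so this layer is cached between builds
COPY package.json .
RUN npm install

# Copy in the application code and declare how the container runs
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
```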
docker images
docker build -t api .
docker images
So now that I have a docker container locally, I can run this container and it will run my application:
docker run -d -p 3000:3000 api
This command tells Docker that I want to run the container image detached, and that I want it to receive any traffic that I send to port 3000 on my local host. After I launch the container my application is once again running on my local machine, but this time it is running in a Docker container. I can still access it just like I did when it was running directly on my host machine though:
curl localhost:3000/api/users
curl localhost:3000/api/posts
The next step is to get this application container running in the cloud. To do this I need to upload the image to AWS so that it can be downloaded onto EC2 hosts that will run it.
First I am going to create a repository in EC2 Container Registry using the AWS dashboard. This registry will serve as a centralized place for all my container images. Each time I modify the application I can build a container image to capture a snapshot of the entire application environment and upload it to the registry.
<create an API repo in the dashboard>
Now that my repository is created, I need to give my local machine a login so it can upload to this repository.
`aws ecr get-login --no-include-email --region us-west-1`
And now I can tag the container image I built, and upload it to the repository that I created.
docker tag api:latest 209640446841.dkr.ecr.us-west-1.amazonaws.com/api:1
docker push 209640446841.dkr.ecr.us-west-1.amazonaws.com/api:1
Now that my image is stored in the repository it can be pulled back down and run wherever I need to run it, including in Amazon EC2 Container Service.
In order to run the container in EC2 Container Service I have to do a little bit of setup though. I've prepared a CloudFormation template to set up a fresh VPC, a cluster of Docker hosts, and a load balancer.
I’m going to launch that stack now by using a console command.
cd ../../infrastructure
aws cloudformation deploy --template-file ecs.yml --region us-west-1 --stack-name ecs-cluster --capabilities CAPABILITY_NAMED_IAM
This will take a few minutes while it automatically creates a lot of different resources on my AWS account. So while I wait I can go ahead and create a task definition for my application. A task definition is simply a list of configuration settings for how to run my docker container. This is where I tell ECS what container image to run, how much CPU and memory the container needs, what ports it listens for web traffic on, among other things.
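The core of a task definition is JSON along these lines. This is a hypothetical sketch of the settings described above (names and resource sizes are assumptions), using the image URI we pushed earlier:

```json
{
  "family": "api",
  "containerDefinitions": [
    {
      "name": "api",
      "image": "209640446841.dkr.ecr.us-west-1.amazonaws.com/api:1",
      "cpu": 256,
      "memory": 256,
      "portMappings": [
        { "containerPort": 3000, "hostPort": 0 }
      ],
      "essential": true
    }
  ]
}
```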
I am going to use the console to create a task definition for the container image that I uploaded.
<show console creation of a task definition>
Now that the task definition is created I can launch it as a service. This is basically a way to tell ECS "run one or more copies of this container at all times, and connect the running containers to a load balancer". After a minute the new service is up and running. I can see that there are two running tasks, which represent two running copies of our monolith container.
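If you prefer the CLI over the console, the service can be created with input roughly like the following, passed to `aws ecs create-service`. The target group ARN and names here are placeholders, not real values:

```json
{
  "cluster": "ecs-cluster",
  "serviceName": "api",
  "taskDefinition": "api",
  "desiredCount": 2,
  "loadBalancers": [
    {
      "targetGroupArn": "arn:aws:elasticloadbalancing:us-west-1:209640446841:targetgroup/api/placeholder",
      "containerName": "api",
      "containerPort": 3000
    }
  ]
}
```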
Our running containers have been connected to a load balancer. If I go to the CloudFormation stack that I launched earlier I can locate the URL of the load balancer, and make a request to it to verify that the service is up and running. So at this point I have my classic style monolithic application up and running in EC2 Container Service. But I'm not done yet. My goal is to take this monolith and split it up into microservices. So let's take a look at what the code for microservices might look like.
If we look back at the base code for the monolith you can see that the monolith serves HTTP routes relating to "users", "threads" and "posts". A sensible way to split this application up into microservices would be to create three microservices: one for users, one for threads, and one for posts.
And here is what that code might look like. As you can see it is very similar to the monolithic code, but instead of serving all the different types of RESTful routes, it only serves HTTP routes that relate to one type of resource. So what I'm going to do is repeat the steps that I did to deploy the monolithic application, but instead I'll build and deploy three microservices that will run in parallel with the monolith. First up, I create three new repositories for the three services:
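To make that concrete, here is a minimal sketch (hypothetical names and data again) of what the "posts" microservice's routing reduces to: the same shape as the monolith, but only one resource type survives.

```javascript
// The only data this service owns; users and threads live elsewhere now.
const posts = [{ id: 1, thread: 1, text: 'Hello world' }];

function route(path) {
  // Serve the collection route for posts.
  if (path === '/api/posts') return { status: 200, body: posts };

  // Serve individual posts by numeric id.
  const byId = path.match(/^\/api\/posts\/(\d+)$/);
  if (byId) {
    const post = posts.find((p) => p.id === Number(byId[1]));
    return post ? { status: 200, body: post } : { status: 404, body: null };
  }

  // Everything else, including /api/users and /api/threads, is a 404 here,
  // because those resources are now handled by their own services.
  return { status: 404, body: null };
}
```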
aws ecr create-repository --repository-name users --region us-west-1
aws ecr create-repository --repository-name posts --region us-west-1
aws ecr create-repository --repository-name threads --region us-west-1
Now I will build and push each container image:
docker build -t posts ./posts
docker tag posts:latest 209640446841.dkr.ecr.us-west-1.amazonaws.com/posts:1
docker push 209640446841.dkr.ecr.us-west-1.amazonaws.com/posts:1
docker build -t users ./users
docker tag users:latest 209640446841.dkr.ecr.us-west-1.amazonaws.com/users:1
docker push 209640446841.dkr.ecr.us-west-1.amazonaws.com/users:1
docker build -t threads ./threads
docker tag threads:latest 209640446841.dkr.ecr.us-west-1.amazonaws.com/threads:1
docker push 209640446841.dkr.ecr.us-west-1.amazonaws.com/threads:1
And now I create a task definition for each of these repositories, and turn each of those task definitions into a running service.
As you can see, I configure the load balancer for each service to bind to the sub-path of the RESTful route that is specific to that service. And once again I see each service launch with a couple of tasks. And if I make some web requests to the load balancer, I still get the exact same responses that I did before.
<demonstrate some curl commands>
But what is happening behind the scenes is much different. If I view the listener rules for this load balancer I can see four rules. There is a default rule which sends traffic to my monolith, but above that there are three other rules which divert specific paths to my microservices.
So based on the priority of these microservice rules all the traffic that matches these paths is being sent to the microservices, instead of the monolith. I can actually shut down the monolith without impacting service availability.
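The rule evaluation works roughly like the following sketch. The rule priorities and target names here are hypothetical, but the mechanism is the one described above: rules are checked in priority order (lowest number first), and the default rule catches anything the microservices don't claim, which is what makes it safe to cut traffic over incrementally.

```javascript
// A simplified model of ALB listener rules: path patterns in priority order,
// plus a catch-all default rule that still points at the monolith.
const rules = [
  { priority: 1, pattern: /^\/api\/users/,   target: 'users-service' },
  { priority: 2, pattern: /^\/api\/threads/, target: 'threads-service' },
  { priority: 3, pattern: /^\/api\/posts/,   target: 'posts-service' },
  { priority: Infinity, pattern: /.*/,       target: 'monolith' }, // default rule
];

// Evaluate rules lowest-priority-number first and return the first match.
function resolveTarget(path) {
  return rules
    .slice()
    .sort((a, b) => a.priority - b.priority)
    .find((rule) => rule.pattern.test(path)).target;
}
```

Once all three path rules are in place, nothing reaches the default rule except traffic the microservices don't handle, so the monolith behind it can be retired.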
So that’s what I’m going to do right now.
<shut down the api service>
And as you can see I can still send traffic to my load balancer, and all of the same paths that used to work actually still work just like they did before. I have successfully migrated all my traffic over from a monolith to microservices without downtime or a single dropped request.
This same approach can be applied to any application behind an ALB, but it works especially well in conjunction with EC2 Container Service because of the automated configuration of your target groups as you deploy the new services.
Thanks a lot for watching this demo and I wish you the best of luck in your own microservices adventures on AWS!