Using AWS Copilot to Build CI/CD + Integration tests for your ECS app
I joined Containers from the Couch for a live show with Adam Keller and Paavan Mistry to talk about using AWS Copilot to set up a deploy pipeline that builds and releases a Docker container to AWS Fargate on git push. I also added integration tests to verify that the application is behaving as expected. This live recording and demo is a counterpart to my AWS container blog on the same topic.
Adam: Hey guys and as you can see we have a very special guest on the show today: Mr. Nathan Peck. Hello Nathan! Welcome!
Nathan: Hello everyone, I’m glad to be here!
Adam: It is a distinct honor and privilege to have you on the show so I’m looking forward to what we’re gonna discuss today. But for those of you who regularly watch the show, as you can see this is day two without Brent. Hey… and he said hello, so it’s always good. Just know that while Brent may not be here visually… Brent is here virtually in chat.
Okay so we have ThaSmasha, welcome back! For all of our regulars please chime in and say hello it’s always good to see you!
Paavan: Yeah, yeah it’d be good to get to know where people are joining from as well, like what city or wherever you are, that’d be great. So welcome again to our regulars. My name is Paavan Mistry, and if you join here regularly we have Brent and Adam hosting the show; I’m joining for these last two days. We are glad to have Nathan back on our show, so welcome Nathan. He’s a colleague, and he works as a developer advocate as well. Nathan, why don’t you introduce yourself and tell our viewers how long you’ve been with AWS, what your role is, and what you like doing?
Nathan: Yeah sure! So I’m a senior developer advocate on the container team at AWS and I’ve been here about four years. Prior to joining AWS I worked at a couple different startups so I come from a startup background, I come from like the real get hands-on, do it yourself, and a very small team trying to accomplish big product ideas and build big things with a very small number of engineers. So I like to take that kind of like, builder mentality and bring it into AWS and teach people on the engineering team how to build better products that enable the folks who are building startups trying to build their product fast and get something out the door to their customers.
Adam: And you do! And you do! You know, prior to joining the team as a developer advocate myself I would always watch your blog posts and your tweets, and it always just felt like you were my spirit animal. Everything, I was just like, “yes let’s do that!” And so really… always just awesome content. Thank you so much!
Paavan: Absolutely yeah! A recent example I came across was I was looking at some content around Fargate networking and I came across your blog from I think a couple of years ago but it was so well written.
Nathan: Unfortunately I think that one is totally outdated now. Now we have the great awsvpc networking mode… I need to make a follow-up on that one.
Adam: We move fast!
Paavan: So yeah keep it coming, but we are glad to have you here and what are we going to talk about today?
Nathan: So today I want to show Copilot, and I know AWS Copilot has been mentioned on the show a couple times, so some of the viewers may actually already be familiar with it. But for those who don’t know about it, Copilot is this new abstraction tool we’re making for the command line to help you deploy containerized applications more easily. And specifically what I want to do is take an application, deploy it using Copilot, and then start automating it.
So I want to automate the build and release cycle, and then I want to add an integration test to it so that way when we are developing new features and we discover a bug we’re able to test and make sure that those features are working properly before the application actually hits production.
Adam: And you know what I think? Before we dive into the demo and go into these things, I do think it would be cool to just talk about what that looks like now, previous to Copilot being here. And we’ve talked about Copilot, as you said, on the show. It’s a very opinionated tool right? And it does handle a lot of that boilerplate, and a lot of these resources that me as a developer, or frankly me as an operations person, who just wants to move fast… It helps eliminate a lot of that work.
Can you maybe talk through some of those things that it does to make life easier for the developer slash operations person?
Nathan: Sure! So there are a few different stages toward building your application. Obviously there’s the source stage where you’re actually collecting all of your source code: all your libraries, your runtime, different things that your application might require in order to function. And then there’s the stage where you say: “Okay, I want to get all this stuff to production and I want it to run there.” And most of the time people are using Docker images for that these days. If I look broadly across the industry, a lot of people, when they develop their application, are putting it into a Docker image. But interacting with Docker directly… it can be done, it’s totally something that you should learn… But it’s not something that I necessarily want to do day to day.
You know, if I’m sitting there typing that docker build or docker run command directly… it feels like something that I shouldn’t have to do. And most of the time in the past when I was building Docker images myself I would end up making a little shell script that I’d call build.sh or something.
Adam: laughing Yeahhh.
Nathan: Yeah, just automate it! That way I could just type build. You know, I don’t necessarily want to type out a full docker build && docker run command every time I want to interact with Docker. So we started with that and said: “Okay, how can we make a tool which finds my Dockerfile, finds my application, builds it, and pushes it for me automatically: with one command.”
And that way I don’t even have to write a shell script. This tool already exists, it already understands how to look through the folder, find my application, build it, and push it and then the second stage after that is when I have my Docker container: I need infrastructure to run it on. I need a cluster, I need VPC networking, all these different supporting resources on my AWS account.
And creating those things by hand… that’s not fun either. I don’t think anybody likes clicking around the console that much. You know, it’s something that I can do, but day to day, especially if I’m deploying multiple applications, I don’t want to go in there and start clicking around. So we said: “Okay, we can automate that as well. We can take that application, we can spin up the environment and supporting resources, and put that application into the environment automatically as well.”
Adam: Just thinking about that, if I were to do this by hand right now: you go to the console, I need to think about a VPC as you said. There’s networking, security groups, IAM roles and policies; there’s a lot of components that I need to think about, all these resources, to ensure that I can connect all these pieces together. So by having something like Copilot which, by the way: ThaSmasha made a comment here: Today I’m not even going to formulate a question, just taking notes. Too advanced!
I think this is an example of the audience that Copilot should be reaching out to right? Hey, I don’t want, or I don’t have the time, to be an expert in all of these things under the covers. But I have an application, I have a Docker container… now just get this to an environment that’s stable, scalable, and production ready without me having to be an expert in all of the components around it.
So I think going into this: this is for you. This is for everyone from beginner to advanced users. I think you’ll see value in Copilot helping you get your applications to production with tests, which I’m really excited about. I’ll give a little spoiler alert: Nathan’s going to show us how to add some really great integration tests to his application, and get those into the pipeline.
Chat message from Nethole: Water or vodka? No judging, because its been a long year.
Adam: Before we get into it: yes Nethole, it has been quite a year, but as Paavan said he is just drinking water, not vodka. But we’ll see how the show progresses… maybe it’ll turn to that.
Paavan: Yeah, yeah trust me.
Nathan: Only if this demo goes horribly.
Adam: Exactly, if the demo goes bad we’ll all be drinking vodka!
Chat message from Philip: Copilot > shell scripts?
Adam: And yes Philip, so I would say, as Nathan was saying, right? We look at Copilot… Shell scripts are great, you know, oftentimes we use them to glue together the pieces that are missing, but I think Copilot is definitely going to replace those shell scripts for the most part.
Paavan: Yeah, I think to add to that, Adam and Nathan: what we are seeing through these open source initiatives like the CDK or Copilot and the other open source projects out there… that’s a good comparison that Philip made around shell scripts. We are trying to automate and make sure that we are doing the necessary, interesting pieces, right? Rather than spending so much time on things which are not necessary to do the job. And that’s what these tools help us do, so that’s what I’m excited about: they open up new ways of tackling interesting challenges. In my early days I would build computers and sort of do things around hardware, which wasn’t what I was really interested in. I was really interested in programming and learning languages. So rather than doing that heavy lifting which is not useful for the business, you might as well start using or learning these tools.
So Copilot’s new for me to be honest and I’m learning it through this session so I’m very excited yes.
Adam: Well said Paavan. Really well said. What do you say Nathan? Shall we just get started and get into the demo? And I just really quick want to say we have some people from New York. We got California representing here. I think we have Edjgeek from Northern Colorado. Which for him I’m just gonna say “Lambda”, okay I said it!
And Argentina, look at that! Okay, it’s really cool. Finland, oh my goodness, now they’re coming in. Finland, UK, Canada… I just want to make sure we give all the love here. Germany… okay I can’t keep up now, I think I’ve caused chaos here. So welcome, and thanks everybody for joining from around the world. We appreciate you. So Nathan, let’s get into a demo and let’s see the good stuff!
Nathan: Yeah, I’m hoping once we actually see the demo, some of these concepts we’re talking about, all these big words around VPCs and security groups… I think you’ll see from the demo that a lot of these things are a lot easier when you see them in action.
So first let me bring up my IDE here and I’ll show what Copilot looks like. The first thing I type in is the copilot command. Let me know if this is too small or not, whether I need to…
Adam: Yeah maybe a little… That’s alright.
Nathan: Okay, so you can see from this command line here that Copilot has a few different categories of commands. There’s “getting started” which educates you on some of the basic concepts and opens the documentation. There’s “developing”, “releasing”, and then “add-ons” and “settings”.
I’ve already done some of the preparation work before here, because I know we’ve talked about Copilot a few times on the show so I’ve already developed an application and pushed it initially to the cloud.
And that application is a simple service for reversing a string. So in building this material I imagined that I was working for a company that has this aim to be the ultimate string manipulation API on the internet. They said: “You know, string manipulation is hard; we want to provide an API that does all these different operations on strings.” And the first one they’re going to launch is reversing a string. So they created a little Node.js service here.
Very simple, I didn’t want to pull in too many modules or stuff that would add complexity to the system. Instead, all it’s doing is creating a server. It’s reading the body of the request, turning that body into a string, reversing it, and then returning it back to the client.
So I want to take this service and deploy it, and I’ve used a couple of copilot commands to actually do that. If I type in copilot app ls, for example, I can see that I’ve deployed an app I’m calling “std” for standard, just like there’s a standard lib in many languages. So this is gonna be a standard lib of string manipulation. If I type in copilot service show I can see that I’ve deployed some services, and you can see the list of services here. So I’ve deployed a service called “reverse”, and that is this code here for reversing a string.
And I’ve deployed it to two different environments: a test environment and a production environment. I have a url for the application so if I copy this url and I send a request to the service….
I’m going to curl -d (that’s the flag for sending some data in the body of the request), I say “hello”, and I put the address of the service. I get back a reversed version of that string: that’s “o-l-l-e-h” there.
So you can do that a few times… and this is a live service, so you can feel free to try this out as well. It’s at https://reverse.test.string.services and that’s a live url there that reverses the string.
So Copilot has made this super easy. It has taken this Node.js code here, just a simple 20 line file, and accompanying that Node.js I’ve got my Dockerfile. What this Dockerfile is doing is installing some of my dependencies that the code needs and then packaging up my application along with the node image. So that means that my application will have Node and it will be able to run in any environment.
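Such a Dockerfile typically looks something like this (an illustrative sketch, not the exact file from the demo):

```dockerfile
# Illustrative Dockerfile for a small Node.js service.
FROM node:14-alpine

WORKDIR /srv

# Install dependencies first so Docker can cache this layer.
COPY package*.json ./
RUN npm install --production

# Copy the application code itself.
COPY . .

EXPOSE 80
CMD ["node", "index.js"]
```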
So all this is super easy, and I can, for example, release an update to my service. If I type copilot service I see the list of commands that Copilot offers me. And I see copilot service init, which is what I did to actually deploy the service initially. And copilot service deploy would allow me to redeploy my service. And you can see how that works if I run copilot service deploy.
Chat from Philip: Is an environment a cluster in ECS?
Adam: Hey Nathan so a question came in the chat and it makes me think maybe we should talk a little bit about the concepts in Copilot CLI. So a question from Philip was: “Is an environment a cluster in ECS?” So maybe you want to describe what an environment represents in Copilot.
Nathan: Sure, so that’s exactly right. A Copilot environment is basically a cluster that comes with its own resources like VPC, Application Load Balancer, and similar. It also comes with its own url namespace. So you can see this long url here that it has developed. It’s got the service name in it, it’s got the environment name in it, and it’s got my application name, and then my actual subdomain or domain that I own, which is string.services. So I bought this domain string.services, and then I launched an environment called test and an environment called prod, and these are isolated environments. They each have their own networking stack. They’ve got their own cluster and resources, which in this case are Fargate, and they have their own load balancer as well for each of these two different environments. And then I can launch multiple different services in each environment.
And I’ll talk a little bit about why we have these multiple environments when I start getting into the pipeline but fundamentally the thing to understand is these are two different completely isolated places that I can deploy my application as I advance through the stages of development of a feature or a particular service.
Paavan: So in terms of ECS then Nathan are they two different clusters, two different environments all together?
Nathan: Yeah, two different clusters entirely. And I can actually show that if I go into the console here and bring up ECS… And there you go, you see the two different clusters here: I’ve got my standard test cluster and my standard prod cluster.
And you can see, because I was actually running an update, it’s running two tasks right now: it’s launching and deploying a new version of the task in parallel, and the two different versions, version 14 and version 15, are currently running in parallel while version 15 stabilizes; then version 14 will be stopped. So these two different clusters exist sort of side by side (I also have another cluster for a different application here), but the two there are the standard test and standard prod.
Paavan: And you didn’t have to actually go into the console to create them right? It’s done by Copilot.
Nathan: Yep! Copilot automatically creates that. All I have to do is type in copilot init and it finds my application and creates the environment for me automatically. And as I interact with Copilot a little bit more with the pipeline commands, you’ll see how interactive it is, automatically finding these resources and creating things for me.
Chat question from hkhajglwale: Is this some sort of simulation of ECS cluster on local dev box?
Adam: So just another quick question to answer: is this some sort of simulation of an ECS cluster on a local dev box? No, but what Nathan’s showing us is that from his workstation, his laptop, he’s able to deploy entire environments with just a command, right? A Dockerfile, and you issue a command and Copilot handles the rest, yeah?
Nathan: This is a live ECS cluster. I’m actually here on the us-east-2 console, so you’re seeing this 100% live and unfiltered.
Nathan: So obviously I showed some of the Copilot commands, and you can see that I can interact with Copilot from the command line by typing these commands to release an application. But the thing I want to point out is that when I type in this copilot service update or service deploy here and it starts running this deployment, this is an asynchronous operation but it’s kind of blocking my terminal right here, right?
If I was doing something else and I started doing the service deploy it’s doing this in the foreground you know. I have to actually wait for this service deployment to finish and obviously I could background this task with Control-Z or I can just open another terminal.
But it makes me think: “Okay, maybe this Copilot service deployment isn’t something that I want to sit here and wait on day-to-day.” Maybe I don’t actually want to use Copilot from the command line like this. This was obviously a super easy workflow: type copilot service deploy and see it automatically go through building my application and then pushing it and then releasing it on my url like that. That was a great experience, but it’s not something that I even want to see day-to-day. I don’t want to have to wait on that.
So is there a way that we can automate this and make it happen in the background? And this is where I want to transition into the Amazon Builders’ Library and talk about some of the ways that we do this at AWS.
Adam: Hey Nathan, I’m really sorry there’s just a lot of conversation happening in chat and I hate to interrupt you…
Nathan: Any time!
Adam: I just want to make sure before we transition, I want to start with Philip.
Chat from Philip: Was the ECR image created locally or in the cloud?
Adam: Philip asked the question was the ECR image created locally or in the cloud? So do you maybe want to talk through really quick how the application works within Copilot?
Nathan: Yeah, so everything that you saw here in this output happened locally. Now there are some caveats here. I’m technically running this from a machine in the cloud; you see down here I’m actually SSHed into this box. So I’m not actually running the build locally on my laptop. I like to do all my development on an EC2 instance that I connect my Visual Studio Code to.
But you could also do all of this locally so all these build commands happen wherever you’ve installed Copilot whether that is you installed Copilot on your local laptop or whether you installed it on a server in the cloud.
And so it built the image and then pushed it to ECR, to this url, so that’s now where the image is hosted. Hopefully that clarifies the answer to that question.
Adam: Yeah, I think one thing with Copilot: when you run copilot init the first time, it basically initializes a skeleton structure for your application. And when you think in Copilot, you have an application which encompasses multiple services, right? So your application will deploy a load balanced service in this case, which has a Docker image that needs an ECR repository. This all happens at that initialization phase, right Nathan?
Nathan: Yeah so to save some time I’ve skipped that initial initialization but you can see here I’ve brought up the manifest that was created for the service and you can see that it has specified a location for the Dockerfile, the port, and other different parameters of the application that’s being built.
So now all I have to do is… basically that was super easy: all I did was copilot init and it selected and found those settings. And now all I have to do is type copilot service deploy and it uses these settings that have been predetermined in the manifest to go through this process of rebuilding the service, re-pushing it, and running the new version.
Paavan: So I have a question here Nathan. When you type in copilot init, does the manifest have to be in the same directory? Where does it need to be?
Nathan: Yeah, so here’s my project folder: “aws-copilot-pipeline” is what I call this particular project, and I’ve run my commands from the root of this particular folder. Inside this folder structure you can see how I’ve laid out my application. I have my app inside of here, I have my tests specified here (which we’ll get to later), and then I have this magic folder copilot. Well, this folder was created by Copilot, and it’s where Copilot stores all the information about what application needs to be built and pushed to the cloud.
And here you can see it’s made a folder called reverse, which is the name of that application, and inside of that is the manifest, which is this file I have open here: the settings for that application, like how much CPU and memory.
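For reference, a Copilot manifest for a load balanced web service looks roughly like this; the values below are illustrative rather than the exact ones from the demo:

```yaml
# copilot/reverse/manifest.yml -- illustrative values
name: reverse
type: Load Balanced Web Service

image:
  # Path to the Dockerfile, relative to the project root
  build: app/Dockerfile
  # Port the container listens on
  port: 80

http:
  # Requests to this path are routed to the service
  path: '/'

cpu: 256      # CPU units (256 = 0.25 vCPU)
memory: 512   # Memory in MiB
count: 1      # Number of tasks to run
```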
Paavan: So when you say copilot init, would you then specify the name of the application, like copilot init reverse?
Nathan: Uh-huh, well I like to just type copilot init and then it’ll pop up basically the wizard. It’ll ask me things like what I would like to name my application, what type of application I want to deploy (load balanced web service in this case), and how much memory and CPU. These are all things that I can then go into this manifest and modify after the fact.
Adam: Man, there are so many good questions, so I just want to run through these, because I really think the context of today is to really demo the CI/CD functionality. So I’m going to run through a couple questions and then I want to get into that. But just a couple more.
Chat message from boxofninjas: Would you use Fargate for a Wordpress website. This type of orchestration looks awesome.
Adam: So like would you use Fargate for a Wordpress website? This type of orchestration looks awesome.
Nathan: So yes, you could definitely use Copilot to deploy a Wordpress installation. There are some caveats obviously with Fargate. It’s intended to be a stateless system, and although there are ways to attach stateful storage now…
Adam: nodding EFS
Nathan: I think that in general you would want to customize your Wordpress installation to be stateless as well, which is possible. So for example you can customize it so that your uploads go into S3, and you can break your database out and run it as a separate service, for example on Amazon RDS or similar. Once you have done that work and that configuration for your Wordpress site, then you could totally, very easily, deploy your Wordpress container to Fargate. But I just want to say as a caveat: you could, but you wouldn’t want to do a standard out-of-the-box Wordpress installation. You would need to customize your Wordpress configuration to have the right plugins installed and the right configuration, so that the state for your actual posts and your uploads is stored outside of Fargate in S3 or RDS.
Adam: And with Fargate and ECS we did just recently release EFS support. I don’t believe at this moment Copilot supports that.
Nathan: Not yet technically, but we do have support for storage, so you’ll see commands here with options for working with storage and databases. Currently we’re focusing on DynamoDB and S3, some of the ones that are a little bit easier to start with. And there’s support for creating your own add-ons for customizing your service. This will become more and more full-featured over time. We have a roadmap for that, and we welcome you to create issues on the project if you have something in particular that you want to see added, like EFS support.
We’re working on making more examples for that as well, so you have easier ways to attach more kinds of storage to your Copilot tasks.
Adam: Cool, yeah, and I know that the development team for Copilot is easy to engage with; they want your feedback.
Nathan: I’m pretty sure they’re in the chat right now so…
Adam: So Nathan, please continue. I’m so sorry, you were on a roll and I had to interrupt you.
Nathan: No it’s all good, I love answering the questions, I think it’s very important.
So I want to talk about… you know we showed all these things being done by hand. I want to talk about automation, and how we do automation at AWS.
So there’s this great article by Clare Liguori called Automating safe, hands-off deployments, and it talks… I’m not going to go through every aspect of this, but I highly recommend (maybe drop the link in chat) looking at this article later, because it goes through how we think about production deployments at Amazon and the process that we go through. The important thing I want to highlight right now is the four pipeline phases: you start with your source code, you build your source code, you test it, and then it reaches production.
And you’ll see I created two environments earlier, right? In Copilot I created a test environment and a prod environment. Well, this is the reason why: I want to have these stages of collecting my source code, building my source code, deploying it to a test environment, and then deploying it to a production environment.
And the article goes through a lot of different ways that you can test, all the way up to really rigorous test environments. The test that I want to set up, in this case, is an integration test, which is the first kind of integration test discussed in this section right here.
A lot of times when I talk about testing a service I get a little bit of pushback or a little bit of worry from developers. I know some developers have a little bit of negative reaction to tests. They just say: “I want to develop features, and tests take a lot of time to develop. They hold me back, they’re always breaking.”
Well I think that tests can be done in a way which doesn’t slow you down and which benefits your application greatly, and I think that really starts with integration tests. Like if you’re working at a company right now that doesn’t have tests then probably the first test that you should start thinking about is an integration test.
And the reason why is that an integration test basically interacts with your service just like a real user would, and so it offers, I think, the highest value-to-time proposition when it comes to developing tests. Generally, let’s say you have a service where a user has to sign up, then sign in, and then take a few top-level actions; say, if it was an online store, searching for a product, adding the product to the cart, and then checking out. Well, if you have an integration test with a simulated agent that goes through those steps and just hits those top five or six actions a user would take against your service, that’s already going to catch 90% of the major bugs that would impact users in production.
So I highly recommend integration tests and that’s the type of test that we’re going to be setting up in this pipeline very shortly.
Adam: Well said… test, test, test, very important.
Nathan: Heck yeah!
Adam: Test your code…
Nathan: Alright, so I want to show the Copilot pipeline commands: copilot pipeline. And you’ll see a lot of the same commands over and over again when I interact with Copilot; there’s always copilot init, something like that. So if we do copilot pipeline show right now, well, there are currently no pipelines yet. We’re about to create one, so I’m going to do copilot pipeline init, and we’re going to see how Copilot walks you through the process interactively. It starts out asking “Would you like to add an environment to your pipeline?” And “yes” or “no”, well, obviously I want an environment. So it asks me which environment I would like to add. I select test and it says: “Would you like to add another environment to your pipeline?” Once again “yes”, so I have two environments here: the second environment is prod.
Now the next thing it’s going to ask is which Github repository I would like to use for this service. Now, I’ve already pushed this code to Github, and from the local git repository that I’m working in it has already picked up the name and address of that git repository, so I’m just going to press enter to select that pre-existing Github repo.
And then it asks: “please enter your github personal access token for your repository”. Now this is where some people might get a little bit worried, like “okay what is this personal access token”. I want to show you it’s actually fairly simple to set up if I go to my Github.
Adam: Do you want me to hide your screen or anything?
Nathan: Oh no don’t worry, I’ve practiced this. I’m pretty sure I’m not going to leak my personal access token.
Adam: laughing Famous last words!
Nathan: laughing So I go to Github and I click “developer settings” and I click “personal access tokens” and what this is going to do is it’s going to create a token which allows Copilot and AWS to interact with my Github repo on my behalf and watch for updates on my git repo and then take actions like automatically redeploying my service.
So I click generate token and I have to select a couple of scopes. I select the “repo” scope; I’m saying yes, this has access to read the repo content, because obviously it needs to read the code in order to build my application.
And I select the admin:repo_hook scope; when it creates a hook, that’s what allows it to actually watch the contents of the repo, so when I push a change to my repo it’ll pick that up and start taking action on it. Now I click generate token.
I actually already have a token that I generated in the past, so I’m just going to copy that pre-existing token out of my password manager rather than creating another token here live. And if I go back over here I can paste that token in, and there we go!
So you see a bunch of output here describing what it did, but the important thing is that it has actually created a pipeline manifest file, and it’s created a build spec for what to do during that pipeline. I can open that up and inspect it.
Adam: So I just wanted to add: with Copilot, for your first experience, I recommend you don’t pass any parameters into your commands. Go through the interactive experience, answer the questions, and see what the workflow looks like from the CLI perspective. Then once you’re comfortable and you’re ready to start issuing one-shot commands, you can pass the things Nathan answered in the Q&A section of the CLI as parameters to the Copilot CLI, right?
Nathan: Yeah I hate trying to remember command line flags though so to be honest I always just go through the wizard.
Adam: It’s nice!
Nathan: If I have to remember what a flag is called and then what to enter for it, I'd rather it just ask me. It also prompts me automatically; for example it says “is this your GitHub repo?” and it found the right GitHub repo, so all I have to do is press enter.
Chat message from Phillip: The Copilot pipeline feature will save so much time! I have spent hours configuring Jenkins jobs for all services in each of my clusters.
Adam: That's a great point. Phillip said that the Copilot pipeline feature will save so much time; he's spent hours configuring Jenkins jobs for all services in each of his clusters. Phillip, I don't know if you've heard on the show, but Brent and others call me the Jenkins guy. I don't know how I got pigeon-holed there, but anyway, I feel your pain. So this is definitely a huge time saver, and you don't have to worry about that anymore.
Nathan: Absolutely. Just to show you, compared to Jenkins, how much easier it is to configure the pipeline that Copilot creates: here in this pipeline definition file, it's very basic. It just specifies the source of the pipeline, which is my repository here, and the name of the secret GitHub token that authorizes access to that GitHub repo, and then I specify the list of stages. Each stage is the name of an environment I want to deploy to: I want to deploy to test first and then to prod. And what I've actually added here is test commands. This is where the integration test takes place, so the test commands are: first, installing some dependencies for the test; then specifying the application URL I want to test against, which is the environment URL for that test environment; and then npm test, which kicks off the test.
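A pipeline manifest of the kind described here might look roughly like this; a hedged sketch in which the pipeline name, branch, repository URL, secret name, and environment variable are illustrative assumptions, not the demo's actual file:

```yaml
# copilot/pipeline.yml (illustrative sketch; names are hypothetical)
name: pipeline-reverser-app
version: 1

source:
  # Where the pipeline watches for pushes.
  provider: GitHub
  properties:
    branch: master
    repository: https://github.com/example/reverser-app
    # Name of the stored secret holding the GitHub personal access token.
    access_token_secret: github-token-secret

stages:
  - name: test
    # Runs after deploying to the test environment; a non-zero exit
    # blocks promotion to the next stage.
    test_commands:
      - npm install
      - export APP_URL=<the test environment's URL>  # assumed variable name
      - npm test
  - name: prod
```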
I can show you what that test ends up looking like over here in the code. The test is super basic, as integration tests usually are. You can see that it's just defining the environment URL I want to make a request against, then making a web request, sending the body “hello,” and verifying that the response text equals the string reversed. So this is really all I need for an integration test for this service, right? It just verifies that I can send a string and the string comes back properly reversed.
So at this point it's created these files for the pipeline, and Copilot gives me some helpful hints about what to do next. One of the most important things is to push my update: git commit and then git push. I've actually already committed this, so let me do a git push to push the code up, and then I can do a copilot pipeline update.
So this is going to start actually creating the pipeline resources on the AWS side, and I'll show you what that looks like over here in the console. If I switch back over: everything that Copilot does, by the way, uses CloudFormation under the covers, so if I go look I can see a long list of different CloudFormation stacks that have been created, and many of these were created by Copilot itself.
So all of these test and prod stacks are Copilot-managed CloudFormation templates, and the one I just clicked right here is the pipeline, so this is actually creating a CodePipeline for my service.
If I go to CodePipeline I see a pipeline has been created. I click into that and I can see that it’s already beginning to take action. It’s on the first stage: the source stage of actually pulling my code down and kicking off the build.
Paavan: This is great. I mean, as a user, I'm a developer; I just want to develop an application and test it quickly. I didn't have to learn the basics or fundamentals of ECS clusters and how to bring them up, or of CodePipeline, so I don't have to interact with the CodePipeline API; it just does it for you. As you just showed, I can stay in VS Code and just focus on my app. This is really strong. As I said, I'm learning from you along with all the viewers, so this is really exciting!
Nathan: Yeah sure thing!
Adam: So there's just a couple of questions here. One was: is there some sort of provisioning of storage in the middle of all this? I think you answered that with CloudFormation. I will say Copilot also uses SSM to store some key values related to your overall application, but generally CloudFormation is the state management system for your environments. And then there was another question from Nethole: do we have a list of the additional permissions required to make use of Copilot? In their use case they'd be working in a restricted environment where they may not have admin access, and they don't want to have to chase down all the different permissions they need.
Nathan: Yeah, so we can take a look at that, actually. If I go to CloudFormation, look at my pipeline stack, and look at the resources, I'm pretty sure we're going to see some roles in here. So there's a pipeline role and there's a build project role, and we can actually look at these IAM roles to see what permissions are utilized for the pipeline and for the build. The first one I'm looking at here is the pipeline role, and if I expand it you can see the list of different resources it needs to touch, mostly related to CloudFormation, S3, IAM, CodeBuild, and CodePipeline. And then, continuing down: KMS, S3, STS.
Adam: And there was a tweet I posted not too long ago about how to use the AWS CLI to actually see what resources your particular user, role, or resource is touching and interacting with, so I'll post that in here when I find it.
Chat message from texanraj: Would this work with EKS? Is this kinda like CDK for deploying EKS?
Adam: But anyway, one last question: does this work with EKS? Is this kind of like CDK for deploying EKS? No, this is specifically and exclusively for ECS.
Nathan: Yeah, totally, that's exactly right: this is exclusively for ECS at the moment. We're also designing it, at the beginning, exclusively for Fargate, so the whole point is that it creates a serverless container deployment for your application. I don't have to think about any control plane resources, because ECS is fully serverless. I don't have to think about any of the hosting EC2 resources that are actually running the container, because that's fully serverless as well. And the most important thing is I don't have to think about any of the pipeline or build resources either, because those are also fully serverless. This whole thing is serverless end to end.
Like, I remember back when I used to run Jenkins myself, and as Adam can testify, running Jenkins was a pain. I actually had a build box sitting on a desk over there in the office running Jenkins, and those were resources I had to manage. Every once in a while I had to go in and apply patches, or figure out why Jenkins had crashed, right?
Well, this is now a fully serverless pipeline and a fully serverless build. I can actually kick off a bunch of these pipelines or builds in parallel without ever thinking about what resources or what machine behind the scenes is actually running them.
Paavan: One question around the scaling ability. Like if I’m a developer responsible for a microservice around payment and it’s Christmas and I’m using Copilot for this right so would my app scale with the traffic? Let’s say your reverse string app… how much load can it handle? Like is there a way to test the load or what’s the underlying scaling mechanisms?
Nathan: Yeah, so right now it doesn't autoscale out of the box by default, but I can go to this manifest right here and increase the CPU and memory, and I can also increase the count of tasks. So if I had an application with a gigantic Christmas spike, I'd give this some beefy CPU and memory and I'd probably launch 10, 20, however many I actually needed. And it would just run, and with Fargate I wouldn't have to think about the EC2 instances behind the scenes; I'd just get that many Fargate tasks.
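The knobs Nathan mentions live in the service's manifest file; a sketch of the relevant excerpt, with hypothetical values:

```yaml
# copilot/<service>/manifest.yml (excerpt; values are hypothetical)
cpu: 1024     # CPU units for each Fargate task (1024 = 1 vCPU)
memory: 2048  # memory per task, in MiB
count: 10     # number of task copies to run
```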
Adam: But I do know there is a GitHub issue out there, I believe; it was David that put it out, talking about auto scaling and what we're thinking about for implementing it. So this is functionality that will be introduced to the Copilot CLI at some point... And there you go, this is why I love having the people who build it here in the chat; it sounds like it should be coming in the next version of Copilot. They move pretty quickly, so it's fun to watch!
Nathan: Oh yeah there’s new releases almost every week so it’s very exciting to see.
Chat from Bo Shipley: I’m new to AWS and still learning. Can I use Copilot with the free tier?
Adam: There was a question: I’m new to AWS and still learning. Can I use Copilot with the free tier? So do we have Fargate free tier? That’s a good question.
Nathan: Unfortunately Fargate is not currently in the free tier.
Adam: Plus there's VPC, there's networking and ingress, egress; there are quite a few things that go into that, right? So probably not.
Nathan: On the bright side, running a Fargate task, even all month, is fairly cheap. If you've ever been to one of our Fargate workshops and got an AWS credit code, chances are it'll fully cover the cost of running a small Copilot task: one task with 256 CPU units and 512 memory. Even if you just have a tiny $10 credit on your account, you should be good. Maybe one of the developers can clarify, because I know we put a lot of work into optimizing this for cost. I don't remember what the rough cost came out to for running this container for a month, but I remember it was tiny, like the cost of a cup of coffee, right? So if you were lucky enough to attend one of our events and get a learning credit to apply to your account, chances are it's going to cover it. Hopefully we'll be able to run some more online workshops and such. I know we used to do so many of those in-person workshops and give out those credits, but we need to find a new way to give those credits out now.
Adam: Definitely, and a little teaser: speaking of workshops, we are working on Copilot integration with the ECS workshop.
Nathan: Yeah so that will give you a chance to work on Copilot as well.
All right, so let me show this pipeline. It's been progressing as we were talking: it started out with source, went to the build phase, which was actually building my image and pushing it, and then went to the deploy phases. The first phase deploys to the test environment using CloudFormation, which actually updated my service on ECS with the new version of the service. Then it went to a test phase, which actually runs the tests, and I can see the output of these different phases by clicking on details.
So let me show this phase first, which was the actual CloudFormation deployment. If I click through here I can see the events as each of these different resources got updated: the task definition and the service got updated to a new version of my service. Then I can actually see the test output by clicking on details. As I scroll down through here I can see that it installed my test dependencies and then started running my actual tests against the service, which in this case was testing whether I was able to reverse the string.
And I actually just realized that apparently I jumped the gun and pushed the newer version of the code rather than the previous version, because I pushed the wrong branch. So I've actually already pushed my integration test for reversing a simple string as well as for reversing a string that contains UTF-8 characters. But basically I'm able to see that these tests have passed here, and I can actually go back and verify that breaking the code makes these tests fail. So let me go back here and let's break it.
Paavan: Nathan just a quick time check we have 10 minutes actually.
Nathan: Oooh I don’t know if I have time to actually break the build and verify the test can actually block the update from hitting production.
Adam: But you have a blog post on this right which we shared in the chat so we should be able to reference that and see this example.
Nathan: Yeah, basically the thing I want to get across, the most important thing, is that if I'm in this pipeline and I do have an issue with my service where the tests fail, this part turns red. It never actually goes through to deploying to prod, so that serves as an important safeguard. I can be confident that I pushed my code, the code deployed, the tests ran against it, and because they passed, it was able to go through to production.
So this serves as a safeguard against accidentally breaking my deployment in production, and the other important thing I want to point out is that I can kick all of this off just by doing a git push.
So if I make a change to my application and do git commit, git push, this pipeline starts running again automatically. I don't actually have to use Copilot commands anymore; I don't even have to have Copilot installed on my machine.
This is important because once you have more than one developer in your organization you end up with drift: maybe one person has one version of the tooling installed and another person has a different version. What I've run into in the past is one developer trying to use the tooling to build and push the application while another developer has an older version of that same tooling, and then something gets rolled back or broken. I've found it very important to have the kind of centralized pipeline Copilot sets up, because it ensures the tooling is 100% consistent: everybody is using the same tooling inside the pipeline for source, build, and deploy, so you don't have that problem anymore.
Chat message from mikeputnam: How does Copilot handle state when there are say 10 developers on a team all creating features in the same repo, yet simultaneously not having access to the final production AWS account?
Adam: So there was a question around that, which just popped up: how does Copilot handle state when there are, say, 10 developers on a team, all creating features in the same repo, yet simultaneously not having access to the final production AWS account? I think you touched on a lot of that, and I think Copilot is good at getting you there. Everything is built really easily, and once you have the pipeline up, Git becomes your source of truth. So if you have 10 developers and all 10 commits get merged at the same time, CodePipeline is just going to run them in the order that it receives each commit.
Nathan: So yeah, it'll actually queue them up. Let's say I make a change here; I'm just going to make a little change that adds a comment, and then I go back over here and do a git push. You'll see that it picks that up after a few seconds or minutes; I've seen like 30 seconds, sometimes a minute or two. It'll pick up that there was a change to the source. And what happens if another developer also pushes a change at the same time is that the pipeline runs through linearly.
So let's say one build went through source and reached the build phase; the second build would be coming down the line, hit source, and then block and wait for the next phase of the build to succeed. So you can have multiple things going through.
Here you can see this updated: it found my source, pulled the source down, and reached the build phase. So it's pushing that comment change right now. Now if I go back over and change this again ("test two") and repeat that process of git add and commit, you'll see I could actually have multiple developers pushing changes simultaneously and the pipeline is not going to break.
It is capable of realizing that there are multiple things in the pipeline in multiple different phases, and it won't allow them to progress past each other. In fact CloudFormation, which is the mechanism used for updating the service, also has built-in protection: only one stack update can actually happen at a time. So as a change goes from build to the deploy phase, that also ensures only one change happens at a time before it goes through to the next phase.
Adam: So Efe did mention a good point. Someone asked: is it still in beta? It's not officially GA, but we're careful not to introduce any breaking changes. We're in preview to collect as much feedback as possible at the moment, so I think that's important to keep in mind.
But I want to say one last thing on that question from Mike in the chat. I think Copilot is going to get you the environments and everything you need to get your application from test to production, and once you have that all deployed, use git.
Don't continue to use Copilot to iterate on changes for the same app. Git becomes your source of code review, and once that code's merged, let the CI/CD handle the rest. And this is where testing is really important, right? As you can see, Nathan added these tests because, prior to our code getting to production, we want to make sure we're testing in our test environment automatically.
You'll notice Nathan didn't deploy any humans to run these tests; he had code to run them. The idea is that your code should test itself and let you know if something failed before it gets released to production.
Nathan: It's actually code all the way down. It's code verifying code. That's the thing about tests that I love.
Adam: So much code. I love it.
Nathan: So I just want to show this really quick because it answers that question about multiple devs. Here you can see there are multiple stages that are actually blue: they're in progress.
So it's building a new version of the application at the same time that it is deploying the previous version, and these things are queued up: this build won't actually deploy until the current deployment is done and has been verified and tested. So this is how we ensure that changes can stack up and everything will queue, but you won't have conflicts.
Paavan: Great, it's great to see that! Everything in terms of the pipeline is all set up through Copilot, and then the developers can just work on the GitHub repository. One last question before we close; I think it's a simple one: is this using a Firecracker or a Bottlerocket instance on the backend?
Nathan: So this is using Fargate behind the scenes, so we're not currently using Bottlerocket. Bottlerocket is more for when you're running on EC2 instances. I think we'll talk more about Bottlerocket; we've already had a few sessions on it here on Containers from the Couch, so we'll see more about that in the future.
Adam: Nice! Well, okay, so Nathan, thank you so much for coming on the show; this was an awesome demo. Great questions from the audience, so thank you everyone for participating and chatting with us.
So you can get this information; let me just run through it. The Copilot CLI is on GitHub, so if you're interested, check it out. There are great walkthroughs, by the way, right in the GitHub repo.
Then Nathan's blog as well. Everything Nathan talked about today, he wrote a really awesome blog about, so if you want to run through this in written form, check out his blog, which will walk you through everything he covered today.
Someone had asked about links and resources for learning about ECS and Fargate:
So check those out.
But thank you everybody so much! We will see you next week.
Paavan: Thanks, everyone!