Running a Node app on Amazon ECS

The EC2 Container Service Mega-Walkthrough

Amazon ECS is AWS’s venture into the wonderful world of containers – a service for running containerised apps on AWS. You can choose to have ECS run the containers for you, or place them on your own EC2 instances.

Having built the ECS launch demo for AWS back in 2014, we thought we ought to also try the service itself!

So here are some of the experiences we’ve had with ECS, and how we set up the infrastructure to run Node apps on it.

Why ECS?

Before we get into that though, why use ECS?

There are so many options for deploying apps out there, but if you’re running stuff on EC2 then ECS is definitely worth checking out.

Here are a bunch of things we like about it:

  • Centralised Logs – Just like Lambda’s logs, ECS ships stdout/stderr streams from all containers to a single log group in CloudWatch Logs. No more extra instrumenting to get our logs in one place, and they’re searchable there, too (see the task definition snippet after this list).
  • Tests, CI & Deploy – Working with containers means you can wrap your app and its dependencies in one container, and only containers that pass tests are deployed. They can also be linked to external dependencies such as Elasticsearch or Redis running in other containers, and tested against these at build time. * With a little extra magic from our friends CodeBuild and CodePipeline.
  • Scheduler & supervisor – ECS looks after your app: it will move it across hosts if necessary, monitor its health, and replace processes if they become unwell.
  • Metrics – The essentials – CPU and memory metrics – are built in. It’s another thing you don’t have to worry about configuring yourself.
  • Scaling – Backed by the all-powerful Autoscaling, ECS can also scale the app when load fluctuates.
  • Load balancing – Busy service? No sweat. Balance that traffic across containers like a boss!
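
To illustrate the logging point, here’s roughly what ends up in the task definition, written in CloudFormation syntax (which we’ll meet properly later). This is a sketch with placeholder names like myapp and /ecs/myapp-prod, not the exact config we build below:

# Fragment of an ECS task definition (CloudFormation syntax)
TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    ContainerDefinitions:
      - Name: myapp                      # placeholder container name
        Image: myregistry/myapp:latest   # placeholder image
        Memory: 512
        LogConfiguration:
          LogDriver: awslogs
          Options:
            awslogs-group: /ecs/myapp-prod   # one log group for the whole service
            awslogs-region: us-east-1
            awslogs-stream-prefix: myapp

With that in place, anything the app writes to stdout/stderr turns up in the /ecs/myapp-prod log group.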

Altogether, when used in conjunction with CodeBuild and CodePipeline, it supplants an awful lot of deployment tooling we would normally have to maintain ourselves.

Cons

Nothing’s perfect though, and there are some downsides of ECS to be aware of:

  • EC2 instances – You have to set up EC2 instances for your ECS cluster yourself. We would love to see a version where AWS manages this behind the scenes so you only have to worry about the containers. * This is no longer the case – see the update below!
  • Instance termination – When instances terminate, it’s up to you to drain container tasks off the instance before it shuts down. Otherwise, your tasks could be killed uncleanly. Later we look at how to do this with a Lambda function.
  • Spot instance termination – Similar to the last point, when spot instances terminate they vanish and take your tasks with them. Not ideal! Since it is possible to ask a spot instance if it is about to terminate, we think the ECS agent should do this and drain tasks beforehand.
  • Initiating deploys – It’s not exactly intuitive – you have to know enough about ECS services to figure out that updating task definitions is the way to get ECS to push new containers – and new versions of your app – out to the cluster. See the sketch after this list.
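
To give a flavour of that last point, this is roughly what kicking off a deploy looks like by hand. The names here are placeholders, and the deployer Lambda we set up later automates essentially this dance:

# Register a new task definition revision pointing at the freshly
# built image, then point the service at that revision
aws ecs register-task-definition --cli-input-json file://taskdef.json
aws ecs update-service \
  --cluster myapp-prod \
  --service myapp \
  --task-definition myapp:42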

So there are a few things to iron out before ECS becomes a really nice experience for developers, but it’s still a very useful service. Let’s crack on, then, and see how we got set up.

* Update 2017-12-22: AWS announced Fargate at re:Invent ’17: you can now choose to deploy containers without EC2 instances! If you wish to deploy with Fargate instead of EC2, you can skip “The Cluster Stack” step and move straight to the “Ship-it” stack.

1. Provisioning the infrastructure

First off, we need to get everything provisioned in AWS:

  1. We need a bunch of EC2 instances to form the basis of our ECS cluster.
  2. Then, we need a CodeBuild project to produce a container from our source code.
  3. CodePipeline pulls the steps together into a linear deployment flow, from source to shippage.
  4. Finally, we can sort out the rest, including the ECS service itself and some autoscaling groups.

As there’s quite a lot involved in the process, we’ll split this out into two separate posts, and focus on points 1-3 in this-a-here post.

Here be dragons

Before we get onto the above, though, we first need to address a couple of the aforementioned cons:

  • How do we deploy updated versions of our app to ECS?
  • How do we drain containers off departing instances?

Never fear, Lambda is here!

1.a. Setting up the supporting Lambdas

Ah, Lambda. The trusty companion for all the grisly workarounds in AWS.

In this case we need two functions, one to address each situation:

  • Deployer Lambda – The deployer Lambda knows enough ECS etiquette to deploy our updated container once it’s freshly baked out of CodeBuild and sitting in our container registry (sketched below).
  • Lifecycle Lambda – The Lifecycle Lambda is our safety marshal, making sure that EC2 doesn’t destroy any instances until all containers have been herded off them safely.
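
To make the first of those concrete, here’s a minimal sketch of what a deployer function can look like. It’s not the exact code from our repo – it assumes the cluster, service and new task definition arrive on the event, and a real version would also report success or failure back to the pipeline:

// Node.js, using the aws-sdk v2 bundled with the Lambda runtime
const AWS = require('aws-sdk');
const ecs = new AWS.ECS();

exports.handler = async (event) => {
  // Hypothetical event shape: { cluster, service, taskDefinition }
  const { cluster, service, taskDefinition } = event;

  // Point the service at the new task definition revision;
  // ECS then rolls the new containers out across the cluster
  await ecs.updateService({ cluster, service, taskDefinition }).promise();

  return `Deployment started for ${service} on ${cluster}`;
};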

Enter, Serverless

If you haven’t encountered it already in the burgeoning serverless movement, there’s a handy framework called ‘serverless’ (go figure) for constructing Lambda-backed services.

More broadly, it’s a whole different methodology for deploying services, but we won’t go into all that here.

Back on topic, we chose serverless to help us get these Lambdas set up.

Free gift – this code is open source

The good news is we’ve released the code, so you don’t have to do any of this yourself!

If you want to take a look you can clone it on GitHub and have a poke around:

git clone git@github.com:gosquared/ops-lambdas.git
cd ops-lambdas

It should be fairly plug-and-play – just take a look at serverless.yml and change the values as necessary.

What does this do?

Serverless glues together our Lambda code using YAML configuration and can provision it all to AWS.

It uses CloudFormation behind the scenes, which lets us use a little trick to hook the Lambdas into the stuff we’ll set up later on.
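
The trick is CloudFormation exports. A few lines in serverless.yml along these lines (the function and export names here are illustrative) make the Lambdas’ ARNs importable by any other stack in the account:

service: ops-lambdas

provider:
  name: aws
  runtime: nodejs8.10
  stage: prod

functions:
  deployer:
    handler: deployer.handler

resources:
  Outputs:
    DeployerLambdaArn:
      # Serverless names each function's resource <Name>LambdaFunction
      Value:
        Fn::GetAtt: [DeployerLambdaFunction, Arn]
      Export:
        Name: ops-lambdas-prod:DeployerLambdaArn   # illustrative export name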

How do I provision the Lambdas?

Once you’ve tweaked the serverless.yml, it’s simply an npm run deploy.
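
In full, from a fresh clone (assuming the framework is pulled in as a dev dependency):

npm install       # installs serverless and the function dependencies
npm run deploy    # provisions everything via CloudFormation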

1.b. Now we bring out the big guns

We’ve got this far with only the odd snippet of code. Well sorry, you’re not getting off that easy.

This is where it gets a little bit heavy, so bear with me as we wade through some thick CloudFormation material.

The CloudFormation Stacks

What we’re going to do is break down the infrastructure into two CloudFormation stacks:

  1. The Cluster Stack – which will set up the EC2 instances for our cluster.
  2. The Ship-it Stack – all the stuff for deploying our code (a ‘CI pipeline’, to use the fancy term).

The ship-it stack will also set up the ECS parts for running and scaling the app.

The Cluster Stack

Update 2017-12-22: AWS announced Fargate at re:Invent ’17, allowing you to deploy containers without EC2 instances. If you choose to deploy with Fargate, you can skip this step and move straight on to the next stack.

I have to admit, I had everything in one stack first time around, but split it into two for fear of reaching the onerous 50kb CloudFormation template size limit.

50kb you say! Yeah, it can happen. But not today.

Here is the cluster stack:
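
(What follows is a heavily abridged sketch rather than the full template – the parameter names match what we use below, but the instance profile, IAM roles and most tuning knobs are elided.)

AWSTemplateFormatVersion: '2010-09-09'

Parameters:
  KeyName:
    Type: AWS::EC2::KeyPair::KeyName
  SecurityGroups:
    Type: List<AWS::EC2::SecurityGroup::Id>
  ImageId:
    Type: AWS::EC2::Image::Id
  InstanceType:
    Type: String
  Subnets:
    Type: List<AWS::EC2::Subnet::Id>
  EcsClusterName:
    Type: String

Resources:
  LaunchConfiguration:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: !Ref ImageId
      InstanceType: !Ref InstanceType
      KeyName: !Ref KeyName
      SecurityGroups: !Ref SecurityGroups
      # UserData boilerplate joins the instance to the ECS cluster – see below

  AutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      LaunchConfigurationName: !Ref LaunchConfiguration
      VPCZoneIdentifier: !Ref Subnets
      MinSize: 0
      MaxSize: 4
      DesiredCapacity: 0   # no instances until you're ready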

Not so bad. If you peruse that, you’ll see we’ve got an autoscaling group to control our instances.

You might also spot the mentions of !ImportValue ops-lambdas-prod:... which is the little trick we mentioned earlier to integrate the serverless lambdas into this stack.
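
For example, the lifecycle hook that gives the Lifecycle Lambda its chance to drain tasks can be wired up like so. The export name here is illustrative, and the hook also needs a role that lets Auto Scaling publish the notification:

TerminationHook:
  Type: AWS::AutoScaling::LifecycleHook
  Properties:
    AutoScalingGroupName: !Ref AutoScalingGroup
    LifecycleTransition: autoscaling:EC2_INSTANCE_TERMINATING
    HeartbeatTimeout: 900
    # SNS topic exported by the serverless stack; the Lifecycle Lambda
    # listens on it and drains tasks before completing the hook
    NotificationTargetARN: !ImportValue ops-lambdas-prod:LifecycleTopicArn
    RoleARN: !GetAtt LifecycleHookRole.Arn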

The UserData is mostly boilerplate lifted from the ECS docs to get the CloudWatch logging and ECS agent configured on the instance.
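
The crucial part boils down to telling the ECS agent which cluster to join – a trimmed-down sketch, since the full boilerplate also sets up the CloudWatch Logs agent:

UserData:
  Fn::Base64: !Sub |
    #!/bin/bash
    # Tell the ECS agent which cluster this instance belongs to
    echo ECS_CLUSTER=${EcsClusterName} >> /etc/ecs/ecs.config
    # Allow the awslogs driver so container logs can reach CloudWatch
    echo ECS_AVAILABLE_LOGGING_DRIVERS='["json-file","awslogs"]' >> /etc/ecs/ecs.config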

Setting the parameters

Note that this doesn’t set up subnets or security groups or anything like that. We assume these are already in place, and their IDs can be given to the stack as the SecurityGroups and Subnets parameters.

You’ll also want to give an InstanceType that will suit your capacity requirements. Plus there’s ImageId, which is actually quite easy: it’s just an ID from the ECS-optimised AMI page. We leave it to you to look up the latest AMI for your region, because AWS updates them now and then.

There’s just one thing we haven’t figured out yet: what our ECS cluster will be called. There’s a parameter for this, EcsClusterName, because the instances need to be told which cluster they’ll be serving. Now’s a good time to think of a name – something like myapp-prod works well, but really it can be anything you like. This will also be the name of the Ship-it stack later.

Provisioning the Cluster Stack

Once you’ve got the above parameters ready, we’re good to provision the stack. Let’s hit it! (don’t forget to sub in your params):

aws cloudformation deploy \
--stack-name app-cluster-prod \
--template-file ./aws-cluster-stack.yaml \
--parameter-overrides \
KeyName=DEFAULT \
SecurityGroups=group1,group2 \
ImageId=ami-123456 \
InstanceType=c5.large \
Subnets=subnet-1234,subnet-5678 \
EcsClusterName=myapp-prod \
--capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM \
--no-execute-changeset

Notice the --no-execute-changeset at the end there. We’ve not actually created anything yet, we’ve just made a changeset so we can check everything is good.

If there are any configuration errors CloudFormation usually flags them up at this point. Often this is followed by some manner of to-and-fro tail-chasing to fix the stack until it is happy. Such is life with CloudFormation.

Once the changeset goes green, we send it out the door by hitting execute in the CF console.

If that worked first time for you, congrats! Any errors, see if CF gives any hints and try to work through it.

2. The Ship-it Stack

Right, now that the cluster is sorted, time for the real heavyweight.

The Ship-it stack incorporates all of our deployment pipeline and the ECS service.

Before we get into the stack, we’ve got some prep to go over first.

Preparing the pipeline

CodePipeline will need to get your code from somewhere.

GitHub will be the source code provider of choice here, but if you need something else check out the CodePipeline docs for alternatives.

We need to give CodePipeline access to the repo for it to scoop up our code. In the case of GitHub this can be done with an access token.

GitHub auth

To get hold of an access token, create a Personal Access Token on GitHub and grant it the repo and admin:repo_hook scopes.

With this token CodePipeline can access the repos on your behalf and get the code.

It actually watches a specific branch on your repo and will start new builds when it sees new commits getting pushed to the branch.
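
Inside the stack, that translates to a source stage roughly like this fragment (the parameter names match the ones we pass in below):

- Name: Source
  Actions:
    - Name: GitHubSource
      ActionTypeId:
        Category: Source
        Owner: ThirdParty
        Provider: GitHub
        Version: '1'
      Configuration:
        Owner: !Ref RepoOwner
        Repo: !Ref RepoName
        Branch: !Ref BranchName
        OAuthToken: !Ref GitHubAuthToken
      OutputArtifacts:
        - Name: SourceOutput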

Ship-it template

Prepare yourself for this one. It’s pretty hefty:
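
Rather than reproduce the whole thing here, this outline shows the shape of what it declares – the logical names are illustrative, and every resource carries far more properties in the real template:

Resources:
  Cluster:         # AWS::ECS::Cluster – named after the stack
  Repository:      # AWS::ECR::Repository – where built images land
  TaskDefinition:  # AWS::ECS::TaskDefinition – the app's container spec
  Service:         # AWS::ECS::Service – keeps the desired task count running
  LoadBalancer:    # AWS::ElasticLoadBalancingV2::LoadBalancer + TargetGroup
  BuildProject:    # AWS::CodeBuild::Project – builds and pushes the image
  Pipeline:        # AWS::CodePipeline::Pipeline – source, build, deploy stages
  ScalableTarget:  # AWS::ApplicationAutoScaling::ScalableTarget + policies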

In true CloudFormation style the configuration is very verbose, but it allows us to control virtually all the settings of the infrastructure.

Create the Ship-it stack

Similar to our Cluster stack, we use the AWS CLI to create a changeset for this stack.

Once again, customise the params to your needs, you know the drill.

Just make sure the --stack-name matches the ECS cluster name you came up with earlier.

Also, just to clarify, RepoOwner is your GitHub username / organisation name. So for https://github.com/your-org/your-repo the params would be RepoOwner=your-org and RepoName=your-repo.

aws cloudformation deploy \
  --stack-name myapp-prod \
  --template-file ./aws-ship-it-stack.yaml \
  --parameter-overrides \
     KeyName= \
     GitHubAuthToken= \
     RepoOwner= \
     RepoName= \
     BranchName= \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM \
  --no-execute-changeset


There’s a fair bit more infrastructure being laid out here, so check everything’s gone through properly. If not, CF should tell you what’s up and you can delete the stack and try again.

Note – if you do delete the stack, there are a couple of parts that need manual cleanup: CF will show you what these are. Everything else should clean up automatically.

Wrapping up part 1

There we go. With that last stack, we now have all the infrastructure in place ready to host our app.

There won’t be any instances running yet (unless you’ve changed that).

In the next post we’ll go through the whole deployment process and get the app up and running.

Proceed to Part 2.
