
I am using AWS CodePipeline and CodeBuild to build a new Docker image and push it to ECR.

My application is a simple, straightforward, single-container setup. What would be the lowest-friction approach to take down the currently running container and launch a new container from the ECR registry (the output of CodeBuild via CodePipeline)?

I tried CloudFormation with EC2 user data and custom scripts on one side, and CloudFormation with ECS task definitions on the other (not successful yet). I strongly feel there must be a more obvious and simpler approach.

Dawny33
Naveen Vijay

3 Answers


I would keep the ECS container instances (I'm talking about the Docker hosts - I don't like AWS terminology here) and the deployment as two separate things.

Get your ECS stack up and running. You can manage it through CloudFormation and Auto Scaling groups; that's fine. Just think of your cluster as a platform you deploy to, not something you need to redeploy.

Then, for CD, the easiest method by far is to update the service to use a new task definition and let ECS perform a rolling update of the containers for you.

Every time it starts a task, ECS runs docker pull image:tag, even if it has the image locally, to make sure it has the latest version of that image:tag. So the image tag you use really doesn't matter (there is no need to change the tag on every build).

That means that you can build myimage:latest over and over in order to deploy it easily.

What you need is a task definition where the image is myimage:latest. Create a service with that task definition, and every time ECS starts a task (an instance of your service) it will be running the most recent "myimage:latest" you have built.

From there, you are missing only one piece of the puzzle: from CodePipeline, you can call something, perhaps a Lambda function, to create a new revision of your task definition and update your service; ECS will then automatically create new tasks for that revision and remove the old tasks.

An example:

Let's assume you have created a service called MyService, and configured that service to run 2 tasks of the task definition MyTaskDefinition:1 (revision 1). In that task definition, you have one container definition whose image is set to "myimage:latest".

  1. Yesterday you built myimage:latest, which had the ID (SHA) 365d8f7bf565.
  2. Your container instance ABC is running a task named MyTaskDefinition-1-containerName-someLongId. When you inspect that container, it's running the image "sha256:365d8f7bf565.........."
  3. Your other container instance, DEF, is running another task. It has a similar name (only the ID differs), but it's running the same image.
  4. You push a change to your repo.
  5. CodePipeline picks up that change, builds, and publishes the image to ECR.
  6. That new Docker image is also myimage:latest, but its ID (SHA) is f7ec5e54ac96.
  7. Now you need to add a step to your pipeline that uses a Lambda function and the AWS NodeJS SDK to make a couple of calls to your cluster:
    1. Create a new task definition (which will be exactly the same as before). That will be MyTaskDefinition:2.
    2. Update MyService to use MyTaskDefinition:2 (instead of 1).
  8. ECS will create new tasks. The container names will be MyTaskDefinition-2-containerName-someLongId. When you inspect those containers, you'll see that they are running "sha256:f7ec5e54ac96.......". Perhaps you'll have 2 tasks on the container instance ABC, or perhaps they will be spread out (that depends on your service's configuration).
  9. After some time, ECS will remove the old tasks MyTaskDefinition-1-containerName-someLongId from ABC and DEF.
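The Lambda step above (register a new revision, then point the service at it) could be sketched with boto3 rather than the NodeJS SDK. This is only an illustration: the helper names are made up, and `revision_params` keeps just the writable fields from a `describe_task_definition` response, since `register_task_definition` rejects the read-only metadata that call returns.

```python
# Fields that register_task_definition accepts; everything else in a
# describe_task_definition response (revision, status, ARNs, ...) is read-only.
ALLOWED = {"family", "taskRoleArn", "executionRoleArn", "networkMode",
           "containerDefinitions", "volumes", "placementConstraints",
           "requiresCompatibilities", "cpu", "memory"}

def revision_params(described):
    """Turn a describe_task_definition payload into register_task_definition kwargs."""
    return {k: v for k, v in described.items() if k in ALLOWED}

def redeploy(ecs, cluster, service, family):
    """Register a new revision of `family` (identical content, so it picks up the
    latest image on the next pull) and update `service` to use it."""
    current = ecs.describe_task_definition(taskDefinition=family)["taskDefinition"]
    new = ecs.register_task_definition(**revision_params(current))
    arn = new["taskDefinition"]["taskDefinitionArn"]
    # Pointing the service at the new revision makes ECS do the rolling update.
    ecs.update_service(cluster=cluster, service=service, taskDefinition=arn)
    return arn
```

Invoked as, say, `redeploy(boto3.client("ecs"), "my-cluster", "MyService", "MyTaskDefinition")` from the Lambda handler.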

Note: you don't actually need to create a new task definition. If you wanted, you could instead retrieve the service's task list and manually stop the tasks one by one. You should wait for ECS to restart a task before stopping the next one (that is: stop the first container, wait for ECS to replace it, then stop the second container). When ECS restarts a container, it will grab the most recent myimage:latest build, as explained before. I just think creating a new task definition is easier and less error-prone (no logic required to wait and check; ECS will handle the rolling update for you if you have a new task definition).
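If you did go that manual route, the stop-and-wait loop might look roughly like this. A hedged sketch only: the function name and polling interval are arbitrary, and a real version should add a timeout and smarter health checks.

```python
import time

def rolling_restart(ecs, cluster, service, poll_seconds=15):
    """Stop a service's tasks one at a time, waiting for ECS to bring the
    running count back to the desired count before stopping the next task."""
    svc = ecs.describe_services(cluster=cluster, services=[service])["services"][0]
    desired = svc["desiredCount"]
    tasks = ecs.list_tasks(cluster=cluster, serviceName=service)["taskArns"]
    for arn in tasks:
        ecs.stop_task(cluster=cluster, task=arn,
                      reason="rolling restart to pick up new image")
        # Poll until ECS has started a replacement task.
        while True:
            svc = ecs.describe_services(cluster=cluster,
                                        services=[service])["services"][0]
            if svc["runningCount"] >= desired:
                break
            time.sleep(poll_seconds)
```

This is exactly the wait-and-check logic the answer warns about, which is why the new-revision approach is the less error-prone one.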

Alexandre

For the simple use case described, I would suggest checking out Elastic Beanstalk for Docker. It's not a minimal solution like bare ECS, but you benefit from auto-managed and pre-configured services such as ELB, EC2 Auto Scaling, health monitoring, and much more.

High-level summary:

  1. Configure Elastic Beanstalk to use a specific tag, myimage:tested.
  2. Use CodePipeline/CodeBuild to build, test, and promote the "tested" tag.
  3. Trigger an Elastic Beanstalk deployment, which will pull the promoted image myimage:tested to all instances; different deployment strategies are available.

This approach is based on re-using the same tag. An alternative approach would be generating a tag with the build ID, for example myimage:tested-42; that requires updating Elastic Beanstalk with the new tag on every build, but gives more granular control over the deployed revision.
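As a rough illustration of the build-ID variant, the deployment trigger could be scripted with boto3. The application, environment, bucket, and label names below are placeholders, and the source bundle is assumed to be a Dockerrun.aws.json already uploaded to S3:

```python
def version_label(build_id):
    """Derive a per-build version label, e.g. build 42 -> 'tested-42'."""
    return f"tested-{build_id}"

def deploy_to_beanstalk(eb, app, env, label, bucket, key):
    """Create a new application version from an S3 bundle and roll the
    environment onto it; Elastic Beanstalk handles the deployment strategy."""
    eb.create_application_version(
        ApplicationName=app,
        VersionLabel=label,
        SourceBundle={"S3Bucket": bucket, "S3Key": key},
        Process=True,  # validate the bundle before it becomes deployable
    )
    eb.update_environment(ApplicationName=app,
                          EnvironmentName=env,
                          VersionLabel=label)
```

Called as, say, `deploy_to_beanstalk(boto3.client("elasticbeanstalk"), "my-app", "my-env", version_label(42), "my-bucket", "bundles/Dockerrun-42.zip")`.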

rombob

I second Elastic Beanstalk for its simplicity; it is very easy to set up and deploy.

If you are familiar with docker-compose, another approach would be to define a docker-compose.yml and deploy directly to ECS with ecs-cli.
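For reference, a minimal single-container compose file might look like the sketch below (the image URI, port, and memory limit are placeholders); ecs-cli can then deploy it with `ecs-cli compose --file docker-compose.yml service up`.

```yaml
# docker-compose.yml — minimal single-container sketch for ecs-cli
version: "2"
services:
  web:
    # Placeholder ECR image URI; reuse the same tag so each deploy pulls the newest build
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:latest
    ports:
      - "80:80"
    mem_limit: 268435456  # ecs-cli requires a memory limit (bytes)
```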

AnthonyWC