
I've been trying for hours to get a basic CodePipeline up and running with ECS blue/green deployments. We're creating a prototype, so everything is in a single repo for now, and the Dockerfile, taskdef.json, and appspec.yaml are all in a subfolder. I'm getting the following error:

The ELB could not be updated due to the following error: Primary taskset target group must be behind listener arn:aws:elasticloadbalancing:eu-west-1:123:listener/app/api-lb/123/123

It's a standard pipeline:

  1. CodeCommit
  2. CodeBuild
  3. CodeDeploy to ECS

The Commit and Build stages work, and I'm passing the artifacts along to the last stage. The deployment fails, though, with the error above.

I have no idea what exactly the "primary taskset target group" is. Both target groups are behind a listener.

I know there are Terraform modules for this, but I want to be able to modify/extend it once it works, so I decided to start from scratch. I went through a dozen different articles, but I can't figure out what's wrong with my ECS config.

I created a repo with all the files here; these are the relevant lb, ecs and CodeDeploy configs:

lb:

resource "aws_lb" "lb" {
  name               = "${var.name}-lb"
  internal           = false
  load_balancer_type = "application"
  subnets            = [aws_default_subnet.default_az1.id, aws_default_subnet.default_az2.id]
  security_groups    = [aws_security_group.blue_green.id]
}

resource "aws_lb_target_group" "blue" { name = "${var.name}-blue" port = 80 target_type = "ip" protocol = "HTTP" vpc_id = aws_default_vpc.default.id depends_on = [ aws_lb.lb ] health_check { path = "/" protocol = "HTTP" matcher = "200" interval = "10" timeout = "5" unhealthy_threshold = "3" healthy_threshold = "3" } tags = { Name = var.name } }

resource "aws_lb_target_group" "green" { name = "${var.name}-green" port = 80 target_type = "ip" protocol = "HTTP" vpc_id = aws_default_vpc.default.id depends_on = [ aws_lb.lb ] health_check { path = "/" protocol = "HTTP" matcher = "200" interval = "10" timeout = "5" unhealthy_threshold = "3" healthy_threshold = "3" } tags = { Name = var.name } }

resource "aws_lb_listener" "green" { load_balancer_arn = aws_lb.lb.arn port = "80" protocol = "HTTP" default_action { type = "forward" target_group_arn = aws_lb_target_group.green.arn } depends_on = [aws_lb_target_group.green] }

resource "aws_lb_listener" "blue" { load_balancer_arn = aws_lb.lb.arn port = "8080" protocol = "HTTP" default_action { type = "forward" target_group_arn = aws_lb_target_group.blue.arn } depends_on = [aws_lb_target_group.blue] }

codedeploy:

resource "aws_codedeploy_app" "app" {
  compute_platform = "ECS"
  name             = var.name
}

resource "aws_codedeploy_deployment_group" "group" { app_name = var.name deployment_config_name = "CodeDeployDefault.ECSAllAtOnce" deployment_group_name = "${var.name}-group-1" service_role_arn = aws_iam_role.codedeploy-service.arn

auto_rollback_configuration { enabled = true events = ["DEPLOYMENT_FAILURE"] }

blue_green_deployment_config { deployment_ready_option { action_on_timeout = "CONTINUE_DEPLOYMENT" }

terminate_blue_instances_on_deployment_success {
  action = "TERMINATE"
}

}

deployment_style { deployment_option = "WITH_TRAFFIC_CONTROL" deployment_type = "BLUE_GREEN" }

ecs_service { cluster_name = aws_ecs_cluster.cluster.name service_name = aws_ecs_service.api.name }

load_balancer_info { target_group_pair_info { prod_traffic_route { listener_arns = [aws_lb_listener.green.arn] }

  test_traffic_route {
    listener_arns = [aws_lb_listener.blue.arn]
  }

  target_group {
    name = aws_lb_target_group.blue.name
  }

  target_group {
    name = aws_lb_target_group.green.name
  }
}

} }

ecs:

resource "aws_ecs_cluster" "cluster" {
  name = var.name

configuration { execute_command_configuration { kms_key_id = aws_kms_key.keys.arn logging = "OVERRIDE"

  log_configuration {
    cloud_watch_encryption_enabled = true
    cloud_watch_log_group_name     = aws_cloudwatch_log_group.logs.name
  }
}

} }

resource "aws_ecs_task_definition" "api" { family = "api" memory = 512 cpu = 256 requires_compatibilities = ["FARGATE"] network_mode = "awsvpc" execution_role_arn = aws_iam_role.ecs-task-service.arn

container_definitions = jsonencode([ { name = var.name, image = "${aws_ecr_repository.repo.repository_url}:latest" essential = true portMappings = [ { "containerPort" = 8080, "hostPort" = 8080 } ], environment = [], log_options = { awslogs-region = var.region awslogs-group = var.name awslogs-stream-prefix = "ecs-service" } } ]) }

resource "aws_ecs_service" "api" { name = var.name cluster = aws_ecs_cluster.cluster.id task_definition = aws_ecs_task_definition.api.arn launch_type = "FARGATE" desired_count = 1 depends_on = [aws_lb_listener.green, aws_lb_listener.blue, aws_iam_role_policy.ecs-service-base-policy]

deployment_controller { type = "CODE_DEPLOY" }

load_balancer { target_group_arn = aws_lb_target_group.blue.arn container_name = var.name container_port = 8080 }

network_configuration { subnets = [aws_default_subnet.default_az1.id, aws_default_subnet.default_az2.id, aws_default_subnet.default_az3.id] security_groups = [ aws_security_group.ecs.id ] assign_public_ip = false }

lifecycle { ignore_changes = [ load_balancer, desired_count, task_definition ] } }

Any hints on what I'm doing wrong? Thanks!

Patrick

2 Answers


The underlying issue was an IAM permission error. After clicking through the stack, I found that the tasks in the cluster never started because of missing permissions/roles, so the deployment never got a healthy task set and surfaced the target-group error instead.
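For anyone hitting the same wall, this is roughly the shape of the execution role that was missing, as a minimal sketch. It assumes the stock AmazonECSTaskExecutionRolePolicy covers what the tasks need (pulling from ECR and writing CloudWatch Logs); the role name matches the one referenced by the task definition in the question, everything else is illustrative:

resource "aws_iam_role" "ecs-task-service" {
  name = "${var.name}-task-execution"

  # Let ECS tasks assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
    }]
  })
}

# Grants the ECR pull and CloudWatch Logs permissions
# the tasks need in order to start at all
resource "aws_iam_role_policy_attachment" "ecs-task-execution" {
  role       = aws_iam_role.ecs-task-service.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}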

Patrick

I had the same case. After hours of digging through the Load Balancer, Target Group, Listener and Cluster/Service configurations, I discovered the reason: it was the production port configured in the Deployment Group.

A blue/green deployment expects the load balancer's listener for the app's "production port" to forward to a target group, so that CodeDeploy can swap that group with the other one.
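In Terraform terms, the difference looks roughly like this (a sketch with illustrative names, not my exact config; var.certificate_arn and the target group references are assumed):

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.lb.arn
  port              = "80"
  protocol          = "HTTP"

  # A listener that only redirects has no target group behind it,
  # so CodeDeploy cannot use it as a production traffic route
  default_action {
    type = "redirect"
    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}

resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.lb.arn
  port              = "443"
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = var.certificate_arn

  # This one forwards to a target group, so CodeDeploy can swap
  # the blue/green groups behind it
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.blue.arn
  }
}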

In my case, I had listeners for ports 80, 90 (for test), and 443.

The Deployment Group was configured with port 80 as the production port, but the load balancer's listener for port 80 did not have a target group behind it. All it had was a redirect to 443.

So the problem was the Deployment Group configuration: I changed the production port to 443 and everything started working.
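In the Terraform deployment group, that change corresponds to pointing prod_traffic_route at the listener that actually forwards (an abbreviated sketch, reusing the illustrative names above):

resource "aws_codedeploy_deployment_group" "group" {
  # ... other arguments as in the question ...

  load_balancer_info {
    target_group_pair_info {
      prod_traffic_route {
        # Must be a listener whose default action forwards to one
        # of the two target groups below
        listener_arns = [aws_lb_listener.https.arn]
      }

      target_group {
        name = aws_lb_target_group.blue.name
      }

      target_group {
        name = aws_lb_target_group.green.name
      }
    }
  }
}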

e-mre