Define an action in CodePipeline that calls a Lambda function to change the desired number of Fargate tasks

CodePipeline can be configured with a variety of actions; this time, we consider calling a Lambda function within the pipeline.

Specifically, we will call a Lambda function in the pipeline to change the desired number of tasks in the ECS (Fargate) service.


Diagram of defining an action in CodePipeline that calls a Lambda function to change the desired number of Fargate tasks.

We will configure CodePipeline to link three resources.

The first is CodeCommit.
CodeCommit is responsible for the source stage of CodePipeline.
It is used as a Git repository.

The second is CodeBuild.
CodeBuild is in charge of the build stage of CodePipeline.
It builds a Docker image from code pushed to CodeCommit.
The built image is pushed to ECR.

The third is the Lambda function.
The function’s job is to change the desired number of tasks for the ECS (Fargate) service.
Specifically, it changes the desired number from 0 to 1.
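As a reference, the decision the function makes can be sketched as a small pure helper (illustrative only; the actual inline code is shown later in this article):

```python
def next_desired_count(current: int, target: int = 1) -> int:
    """Return the desired count the service should be set to.

    If tasks are already running (current > 0), leave the count unchanged;
    otherwise scale up to the target (0 -> 1 in this article).
    """
    return current if current > 0 else target

print(next_desired_count(0))  # -> 1 (initial build: scale up from 0)
print(next_desired_count(1))  # -> 1 (already running: unchanged)
```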

Create a deploy stage in CodePipeline.
Configure it to deploy to Fargate as described below.

Save your Docker Hub account information in the SSM Parameter Store.
CodeBuild uses these values to sign in to Docker Hub and pull the base image when building the image.

CodePipeline is triggered by a push to CodeCommit.
Specifically, we create an EventBridge rule that implements this behavior.
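For reference, a sketch of the event pattern such a rule would match, written here as a Python dict for illustration (the branch name master follows the push example later in this article; confirm the fields against your own rule):

```python
# EventBridge event pattern matching a push (branch create/update) to a
# CodeCommit repository -- the kind of rule used to start CodePipeline.
event_pattern = {
    'source': ['aws.codecommit'],
    'detail-type': ['CodeCommit Repository State Change'],
    'detail': {
        'event': ['referenceCreated', 'referenceUpdated'],
        'referenceType': ['branch'],
        'referenceName': ['master'],  # assumed branch name
    },
}

print(event_pattern['detail-type'][0])
```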

Create a Fargate-type ECS service in a private subnet.

Create an EC2 instance.
Use it as a client to access containers created on Fargate.

CloudFormation template files

Build the above configuration with CloudFormation.
The CloudFormation templates are located at the following URL

Explanation of key points of the template files

This page focuses on how to call Lambda functions within CodePipeline.

For basic information about CodePipeline, please refer to the following page

Use CodePipeline to trigger CodeCommit pushes to push images to ECR

For information on how to create a deployment stage in CodePipeline and deploy to ECS (Fargate), please refer to the following page

Use CodePipeline to build and deploy images to Fargate

For information on how to build Fargate on a private subnet, please refer to the following page

Create ECS (Fargate) in Private Subnet

Use CloudFormation custom resources to automatically delete objects in S3 buckets and images in ECR repositories when deleting the CloudFormation stack.
For more information, please see the following page

Create and Delete S3 Object by CFN Custom Resource
Delete ECR images using CloudFormation Custom Resources



    Type: AWS::CodePipeline::Pipeline
    Properties:
      ArtifactStore:
        Location: !Ref BucketName
        Type: S3
      Name: !Ref Prefix
      RoleArn: !GetAtt CodePipelineRole.Arn
      Stages:
        - Actions:
            - ActionTypeId:
                Category: Source
                Owner: AWS
                Provider: CodeCommit
                Version: 1
              Configuration:
                BranchName: !Ref BranchName
                OutputArtifactFormat: CODE_ZIP
                PollForSourceChanges: false
                RepositoryName: !GetAtt CodeCommitRepository.Name
              Name: SourceAction
              OutputArtifacts:
                - Name: !Ref PipelineSourceArtifact
              Region: !Ref AWS::Region
              RunOrder: 1
          Name: Source
        - Actions:
            - ActionTypeId:
                Category: Build
                Owner: AWS
                Provider: CodeBuild
                Version: 1
              Configuration:
                ProjectName: !Ref CodeBuildProject
              InputArtifacts:
                - Name: !Ref PipelineSourceArtifact
              Name: Build
              OutputArtifacts:
                - Name: !Ref PipelineBuildArtifact
              Region: !Ref AWS::Region
              RunOrder: 1
          Name: Build
        - Actions:
            - ActionTypeId:
                Category: Deploy
                Owner: AWS
                Provider: ECS
                Version: 1
              Configuration:
                ClusterName: !Ref ECSClusterName
                FileName: !Ref ImageDefinitionFileName
                ServiceName: !Ref ECSServiceName
              InputArtifacts:
                - Name: !Ref PipelineBuildArtifact
              Name: Deploy
              Region: !Ref AWS::Region
              RunOrder: 1
          Name: Deploy
        - Actions:
            - ActionTypeId:
                Category: Invoke
                Owner: AWS
                Provider: Lambda
                Version: 1
              Configuration:
                FunctionName: !Ref ECSFunctionName
              InputArtifacts: []
              Name: Invoke
              OutputArtifacts: []
              Region: !Ref AWS::Region
              RunOrder: 1
          Name: Invoke
Code language: YAML (yaml)

Define the stage that invokes the Lambda function in the Stages property.
Set the function to invoke in the Configuration property; specifically, specify its name in the FunctionName property.
Leave the InputArtifacts and OutputArtifacts properties empty. Because the function invoked here only adjusts the desired number of ECS tasks, no artifacts are read or written.
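For context, when CodePipeline invokes a Lambda function it passes the job details in the event; below is a minimal sketch of pulling out the job ID (the payload is trimmed to the fields used in this article, and the ID value is made up):

```python
# Trimmed shape of the event a CodePipeline Invoke action sends to Lambda.
sample_event = {
    'CodePipeline.job': {
        'id': '11111111-2222-3333-4444-555555555555',  # made-up job ID
    }
}

def extract_job_id(event: dict) -> str:
    # The job ID is needed later to report success or failure back.
    return event['CodePipeline.job']['id']

print(extract_job_id(sample_event))
```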

Here is the IAM role for CodePipeline

    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service: codepipeline.amazonaws.com
            Action:
              - sts:AssumeRole
      Policies:
        - PolicyName: PipelinePolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - lambda:invokeFunction
                Resource:
                  - !Sub "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:${ECSFunctionName}"
              - Effect: Allow
                Action:
                  - codecommit:CancelUploadArchive
                  - codecommit:GetBranch
                  - codecommit:GetCommit
                  - codecommit:GetRepository
                  - codecommit:GetUploadArchiveStatus
                  - codecommit:UploadArchive
                Resource:
                  - !GetAtt CodeCommitRepository.Arn
              - Effect: Allow
                Action:
                  - codebuild:BatchGetBuilds
                  - codebuild:StartBuild
                  - codebuild:BatchGetBuildBatches
                  - codebuild:StartBuildBatch
                Resource:
                  - !GetAtt CodeBuildProject.Arn
              - Effect: Allow
                Action:
                  - s3:PutObject
                  - s3:GetObject
                  - s3:GetObjectVersion
                  - s3:GetBucketAcl
                  - s3:GetBucketLocation
                Resource:
                  - !Sub "arn:aws:s3:::${BucketName}"
                  - !Sub "arn:aws:s3:::${BucketName}/*"
              - Effect: Allow
                Action:
                  - ecs:*
                Resource: "*"
              - Effect: Allow
                Action:
                  - iam:PassRole
                Resource: "*"
Code language: YAML (yaml)

Set permissions to invoke the Lambda function (lambda:invokeFunction).

Lambda Function

    Type: AWS::Lambda::Function
    Properties:
      Code:
        ZipFile: |
          import boto3
          import os

          cluster_name = os.environ['CLUSTER_NAME']
          count = int(os.environ['COUNT'])
          service_name = os.environ['SERVICE_NAME']

          ecs_client = boto3.client('ecs')
          codepipeline_client = boto3.client('codepipeline')

          def lambda_handler(event, context):
            job_id = event['CodePipeline.job']['id']

            try:
              describe_services_response = ecs_client.describe_services(
                cluster=cluster_name,
                services=[service_name])

              # If tasks are already running, report success and exit.
              if describe_services_response['services'][0]['desiredCount'] > 0:
                codepipeline_client.put_job_success_result(jobId=job_id)
                return

              update_service_response = ecs_client.update_service(
                cluster=cluster_name,
                service=service_name,
                desiredCount=count)

              codepipeline_client.put_job_success_result(jobId=job_id)

            except Exception as e:
              codepipeline_client.put_job_failure_result(
                jobId=job_id,
                failureDetails={
                  'type': 'JobFailed',
                  'message': 'Something happened.'})
      Environment:
        Variables:
          CLUSTER_NAME: !Ref ECSClusterName
          COUNT: 1
          SERVICE_NAME: !Ref ECSServiceName
      FunctionName: !Sub "${Prefix}-function-ecs"
      Handler: !Ref Handler
      Runtime: !Ref Runtime
      Role: !GetAtt ECSFunctionRole.Arn
Code language: YAML (yaml)

Define the code to be executed by the Lambda function in inline (ZipFile) notation.
For more information, please refer to the following page

3 patterns to create Lambda with CloudFormation (S3/Inline/Container)

The Environment property allows you to define environment variables that are passed to the function.
Specifically, we pass the ECS cluster and service whose task count will be adjusted, and the target value.

The code does the following

  • Use the describe_services method to get the status of the ECS service and check the desired count.
  • If the desired count is 0, continue the process.
  • Use the update_service method to change the desired count of the ECS service.
  • Call the CodePipeline API.

The last API is the key point.
The official AWS page mentions the following

As part of the implementation of the Lambda function, there must be a call to either the PutJobSuccessResult API or PutJobFailureResult API. Otherwise, the execution of this action hangs until the action times out.

AWS Lambda

This means that after changing the desired count, one of the above two APIs must be called.
In this case, we execute the PutJobSuccessResult API (put_job_success_result method) if the desired count was successfully changed, and the PutJobFailureResult API (put_job_failure_result method) if any error occurs during processing.
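The reporting pattern can be sketched with injected clients so it runs without AWS access (the fakes below are illustrative; in the real function, the boto3 ecs and codepipeline clients play these roles, and the names are assumptions):

```python
def scale_service_and_report(ecs, codepipeline, job_id, cluster, service, count=1):
    """Change the service's desired count, then always report back to
    CodePipeline so the Invoke action does not hang until timeout."""
    try:
        ecs.update_service(cluster=cluster, service=service, desiredCount=count)
        codepipeline.put_job_success_result(jobId=job_id)
    except Exception:
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={'type': 'JobFailed', 'message': 'Something happened.'},
        )

# Minimal fakes standing in for the boto3 clients.
class FakeECS:
    def update_service(self, **kwargs):
        self.called_with = kwargs

class FakePipeline:
    def put_job_success_result(self, **kwargs):
        self.result = ('success', kwargs)
    def put_job_failure_result(self, **kwargs):
        self.result = ('failure', kwargs)

ecs, cp = FakeECS(), FakePipeline()
scale_service_and_report(ecs, cp, 'job-123', 'my-cluster', 'my-service')
print(cp.result[0])  # -> success
```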

(Reference) Application Container


FROM amazonlinux

RUN yum update -y && yum install python3 python3-pip -y

RUN pip3 install bottle

EXPOSE 8080

CMD ["python3", ""]

Code language: Dockerfile (dockerfile)

The image for the app container will be based on Amazon Linux 2.

We will use Bottle, a Python web framework.
After installing Python and pip, we install Bottle as well.

Copy the Python script containing the app logic into the image and set it to run.
The app listens for HTTP requests on 8080/tcp, so this port is exposed.

from bottle import route, run

@route('/')
def hello():
  return 'Hello CodePipeline.'

if __name__ == '__main__':
  run(host='', port=8080)
Code language: Python (python)

We use Bottle to build a simple web server.
The configuration is simple: listen for HTTP requests on 8080/tcp and return 'Hello CodePipeline.'.


Use CloudFormation to build this environment and check the actual behavior.

Create CloudFormation stacks and check resources in stacks

Create a CloudFormation stack.
For information on how to create stacks and check each stack, please refer to the following page

CloudFormation’s nested stack

Checking the resources in each stack, the main resources created this time are as follows

  • ECR: fa-079
  • CodeCommit: fa-079
  • CodeBuild: fa-079
  • CodePipeline: fa-079
  • Lambda function: fa-079-function-ecs

Check the created resources from the AWS Management Console.
Check the ECS (Fargate) cluster service.

Detail of ECS 1.
Detail of ECS 2.

The cluster service has been successfully created.
The point is that the desired number of tasks is 0: during the initial Fargate build, not a single task is launched.

Check CodePipeline.

Detail of CodePipeline 1.

The pipeline execution has failed.
This is because the pipeline was triggered when the CloudFormation stack created the CodeCommit repository.
Since no code had been pushed to CodeCommit at that point, an error occurred during the pipeline execution.

Note the stages that were created.
The stage named Invoke, which calls the Lambda function, is placed at the end of the pipeline.
This means that the code pushed to CodeCommit is used to build a Docker image, the image is deployed to Fargate, and then the desired number of Fargate tasks is changed.

Check Action

Now that we are ready, we push the code to CodeCommit.

First, clone the CodeCommit repository.

$ git clone
Cloning into 'fa-079'...
warning: You appear to have cloned an empty repository.
Code language: Bash (bash)

An empty repository has been cloned.

Add the Dockerfile and the Python script to the repository.

$ ls -al
total 8
drwxrwxr-x 3 ec2-user ec2-user  51 Aug 20 08:31 .
drwxrwxr-x 3 ec2-user ec2-user  20 Aug 20 08:31 ..
-rw-rw-r-- 1 ec2-user ec2-user 187 Aug 12 11:01 Dockerfile
drwxrwxr-x 7 ec2-user ec2-user 119 Aug 20 08:31 .git
-rw-rw-r-- 1 ec2-user ec2-user 681 Aug 20 02:57
Code language: Bash (bash)

Push the two files to CodeCommit.

$ git add .

$ git commit -m "first commit"
[master (root-commit) 7e41437] first commit
 2 files changed, 39 insertions(+)
 create mode 100644 Dockerfile
 create mode 100644

$ git push
 * [new branch]      master -> master
Code language: Bash (bash)

The push was successful.

After waiting for a while, check CodePipeline again.

Detail of CodePipeline 2.

The pipeline has completed successfully.

Check Fargate.

Detail of ECS 3.
Detail of ECS 4.

The desired number is now 1.
This means that the Lambda function has been invoked and the desired number has been changed.
Since the desired number is now 1, an ECS task is automatically generated.
Looking at the details of this task, we can check the private address assigned to it.

Access the EC2 instance in order to make an HTTP request to the container.
Use SSM Session Manager to access the instance.

% aws ssm start-session --target i-0c41c5926230b480c

Starting session with SessionId: root-0c76e08548a26fec6
Code language: Bash (bash)

For more information on SSM Session Manager, please refer to the following page

Accessing Linux instance via SSM Session Manager

Use the curl command to access the container in the task.

sh-4.2$ curl
Hello CodePipeline.
Code language: Bash (bash)

The container responded.
It is indeed the string we set up in the Bottle app.
This shows that the number of ECS tasks can be changed by calling the Lambda function within CodePipeline.


We were able to change the desired number of ECS tasks in the ECS (Fargate) service by invoking a Lambda function in the pipeline.