Streamline Your AWS ECS Fargate Deployments with Terraform and GitLab CI/CD Using a Python Application

Diagram illustrating a CI/CD pipeline and AWS infrastructure for deploying an application.

All files of this project are saved on my GitHub repo: https://github.com/sahibgasimov/ecs-gitlab-terraform

In this tutorial, I’ll walk you through how I automated the deployment of a Python application on AWS ECS Fargate, using Terraform for infrastructure provisioning and GitLab CI/CD for continuous deployments. Let’s get started!

What we will cover:

  • Overview of the Architecture
  • Prerequisites
  • Setting Up the Infrastructure with Terraform
    • Creating Terraform Backend module
    • Setting up Networking infrastructure
    • Creating ECS Infrastructure
  • Configuring GitLab CI/CD
  • Deploying to Development and Production

By the end of this guide, you’ll have a fully automated pipeline ready to deploy containerized applications across dev and prod environments.

Project Architecture:

  1. AWS ECS Fargate: For hosting containerized services.
  2. Terraform: Infrastructure provisioning.
  3. GitLab CI/CD: Manages build, test, and deployment pipelines.
  4. Networking: Isolated VPC with public/private subnets.
  5. Load Balancers: Application Load Balancer (ALB) to route traffic.
  6. TLS Support: HTTPS setup with ACM and Route 53.

Prerequisites
Before diving into the setup, make sure you have:

  • AWS CLI installed and configured.
  • Terraform (v1.6.0 or higher).
  • GitLab account with sufficient permissions.
  • Docker installed for local builds.

The diagram above illustrates the goal of the project. The application’s source code lives in GitLab with two branches, dev (development) and main (production). Every time we push to the relevant branch, the CI/CD automation takes place.
Let’s Go!

Setting Up the Infrastructure

We are going to create a separate Terraform backend module where the Terraform state for the whole project will eventually be stored.

Step 1: Clone the Repository

git clone https://github.com/sahibgasimov/ecs-gitlab-terraform.git
cd ecs-gitlab-terraform

GitHub Repository Structure:

├── backend/           # Terraform backend module
├── networking/        # VPC and networking module
├── ecs/               # ECS cluster and services module
├── gitlab_cicd/       # Gitlab pipeline file 
├── images/            # Architecture and CI/CD images
├── app/               # Python application code
├── .gitlab-ci.yml     # GitLab CI/CD pipeline
└── README.md          # Project documentation

Step 2: Initialize Terraform Backend
The project uses an S3 bucket for Terraform state management and DynamoDB for locking.

Comment out the following block in main.tf and run terraform apply first, so the backend resources are created while the state is still local. Once they exist, update the block with your S3 bucket name and uncomment it so the state file can be migrated to the bucket.

terraform {
  backend "s3" {
    bucket         = "my-project-terraform-state-prod"  # Replace with your S3 bucket name
    key            = "backend/terraform.tfstate"      
    region         = "us-east-1"                      
    dynamodb_table = "my-project-terraform-lock"       
    encrypt        = true
  }
}
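
For reference, the backend module itself provisions roughly the following resources. This is a minimal sketch that assumes the bucket and table names from the backend block above; check the module in the repo for the exact configuration.

# Sketch only: bucket and table names are assumptions
resource "aws_s3_bucket" "terraform_state" {
  bucket = "my-project-terraform-state-prod"
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"   # Keep a history of state files
  }
}

resource "aws_dynamodb_table" "terraform_lock" {
  name         = "my-project-terraform-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"   # Attribute name Terraform expects for state locking

  attribute {
    name = "LockID"
    type = "S"
  }
}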

Run Terraform commands to initialize and apply the configuration:

terraform init
terraform apply

Now you can uncomment the block and migrate the state file to the S3 backend:

terraform init -migrate-state


Terraform Backend is now ready!

Networking

VPC Module for ECS Cluster

This module configures an AWS Virtual Private Cloud (VPC) for the ECS cluster, including private and public subnets, NAT gateway routes for the private subnets, and an internet gateway for the public subnets.


The VPC uses a CIDR block of 10.0.0.0/16, with public and private subnets spread across availability zones. An Internet Gateway and a NAT Gateway provide outbound internet access for the public and private subnets, respectively.
Prerequisites include the S3 bucket and DynamoDB table for Terraform state (created above) and AWS credentials configured for the target region.
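
As a rough illustration, the networking module builds something along these lines. This is a hedged sketch based on the community terraform-aws-modules/vpc module; the module in the repo may define the resources directly and use different names, AZs, and subnet ranges.

# Sketch only: subnet CIDRs, AZs, and the module version are assumptions
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "ecs-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b"]
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets = ["10.0.101.0/24", "10.0.102.0/24"]

  enable_nat_gateway = true   # Outbound internet access for private subnets
  single_nat_gateway = true
}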

cd networking/

Update the backend configuration for the networking module with your own bucket details:
terraform {
  backend "s3" {
    bucket         = "my-project-terraform-state-prod"  # Replace with your actual bucket name
    key            = "networking/terraform.tfstate"    # Unique state file key for networking module
    region         = "us-east-1"                       # Replace with your AWS region
    dynamodb_table = "my-project-terraform-lock"       # Replace with your actual DynamoDB table name
    encrypt        = true
  }
}

terraform init
terraform apply


Now that the VPC and associated networking resources are created, we can proceed with the ECS part.


ECS Infrastructure

To start the application deployment process we need an ECS Fargate cluster, but even before that we need to push our Docker images to ECR. So we will start by creating an AWS ECR image repository manually, then build and push the images, and later import the repository into Terraform. Simply go to the AWS ECR console and create one private repository; for this tutorial we name it “web-app-repository”.
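
If you prefer the CLI over the console, the same repository can be created with a single command (region shown for us-east-1):

aws ecr create-repository --repository-name web-app-repository --region us-east-1
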
Then cd into the ecs folder and build the Docker images:

cd ecs/docker_prod    # repeat later in ecs/docker_dev for the DEV image

Build the Dockerfile into an image for each of the DEV and PROD environments and push the images to your ECR repo. You can also follow the push instructions shown in the AWS ECR console.


Run these commands for PROD:

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 1234567.dkr.ecr.us-east-1.amazonaws.com
docker build -t web-app .

docker tag web-app:latest 1234567.dkr.ecr.us-east-1.amazonaws.com/web-app-repository:main-latest

docker push 1234567.dkr.ecr.us-east-1.amazonaws.com/web-app-repository:main-latest

Do the same for DEV:

cd ../docker_dev/

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 1234567.dkr.ecr.us-east-1.amazonaws.com

docker build -t web-app .

docker tag web-app:latest 1234567.dkr.ecr.us-east-1.amazonaws.com/web-app-repository:dev-latest

docker push 1234567.dkr.ecr.us-east-1.amazonaws.com/web-app-repository:dev-latest

Go back to the ecs folder and update provider.tf with the S3 backend for the ECS state. It also reads the networking module’s remote state so that networking components such as the VPC and subnets can be referenced dynamically.
provider.tf

terraform {
  required_version = ">= 1.6.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.73.0"
    }

    random = {
      source  = "hashicorp/random"
      version = "~> 3.6.0"
    }
  }

  backend "s3" {
    bucket         = "my-project-terraform-state-prod" # Replace with your S3 bucket name
    key            = "ecs/terraform.tfstate"           # Unique key for ECS state file
    region         = "us-east-1"                       # Replace with your AWS region
    dynamodb_table = "my-project-terraform-lock"       # Replace with your DynamoDB table name
    encrypt        = true
  }
}

provider "aws" {
  region = var.region

  # Optional: Default tags that will be applied to all resources
}

data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket         = "my-project-terraform-state-prod" # Replace with your S3 bucket name
    key            = "networking/terraform.tfstate"    # Path to the networking state file
    region         = "us-east-1"                       # Replace with your AWS region
    dynamodb_table = "my-project-terraform-lock"       # Replace with your DynamoDB table name
    encrypt        = true
  }
}
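
The ECS module can then consume the networking outputs through this data source. For example (a sketch; the exact output names, such as vpc_id and private_subnet_ids, depend on what the networking module exports):

# Output names below are assumptions; match them to the networking module's outputs
locals {
  vpc_id          = data.terraform_remote_state.vpc.outputs.vpc_id
  private_subnets = data.terraform_remote_state.vpc.outputs.private_subnet_ids
}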

After configuring the backend, update your account values in terraform.tfvars:

# Environment and Region
env_name_prod = "prod"
env_name_dev  = "dev"       # Development environment name
region        = "us-east-1" # AWS region

# Application Details
app_name = "web-app" # Your application name

# AWS Account Details
account_id = "12345678" # Your AWS account ID

# ALB Configuration
alb_hostname_dev  = "web-app-dev.12345678.realhandsonlabs.net"
alb_hostname_prod = "web-app-prod.12345678.realhandsonlabs.net"

# Subnet and VPC IDs

route53_zone_id = "Z07843433K5DR356RSU8Z"

Run terraform apply to create all infrastructure.
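
As with the previous modules, initialize first and then apply:

terraform init
terraform apply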


Setting Up GitLab CI/CD

Check the gitlab_cicd folder in the repo for more details; your GitLab project’s folder structure should eventually match it.

We will build a GitLab CI/CD pipeline from scratch to automate deployments to AWS ECS Fargate utilizing Docker containers, AWS ECR for image storage, and GitLab YAML for configuration.

Pipeline structure: the two branches (dev and main) dynamically determine the target ECS service and task definition based on the environment and deploy the application. Stages: validation, build and push, and deployment are automated through defined CI/CD stages.

GitLab IAM user

  1. Create an IAM user ‘gitlab-cicd’ and grant it the permissions the pipeline needs (at minimum, pushing images to ECR and updating the ECS services). This user only needs programmatic access, not console access. Generate an access key and secret access key for this user.


  2. Log in to your GitLab account and create a new project.


  3. From your Linux terminal, initialize a new Git repository for the project and connect it to GitLab.


  4. You will need to create two branches, ‘dev’ and ‘main’.


Protect both branches from deletion and from direct pushes (changes should go through merge requests).


Set three variables for your pipeline by adding the following in the GitLab CI/CD settings:

AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION


Pipelines setup

You can copy all of the CI/CD files from my GitLab repo: https://gitlab.com/sahib.gasimov2/gitlabcicd-ecs.

The .gitlab-ci.yml file automates the entire pipeline for deploying a web app on AWS ECS. It validates the environment, builds and pushes Docker images to AWS ECR, and deploys to ECS services (dev branch auto-deploys, main branch requires manual approval for production). It uses environment variables for flexibility and integrates ECS updates with forced new deployments for both dev and prod environments.

Update AWS_ACCOUNT_NUMBER and REGION with your values.

stages:
    - validate_environment
    - build_and_publish
    - deploy_to_dev
    - deploy_to_production    
    - finalize_pipeline

workflow:  # Trigger pipeline only for specific branches
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
    - if: '$CI_COMMIT_BRANCH == "dev"'

image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest

variables:
  AWS_ACCOUNT_NUMBER : "1245678" #update with your account number
  REGION             : "us-east-1" #update with your region 
  IMAGE_REPOSITORY   : "web-app-repository"
  CLUSTER_NAME       : "web-app-cluster" 

  DEV_SERVICE_NAME   : "web-app-dev" 
  DEV_TASK_NAME      : "web-app-dev"

  PROD_SERVICE_NAME  : "web-app-prod"
  PROD_TASK_NAME     : "web-app-prod"

test_environment:
  stage: validate_environment
  script:
    - echo "Validating environment setup..."
    - aws --version
    - docker --version
    - jq --version
    - aws sts get-caller-identity  

build_and_push:
  stage: build_and_publish
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375  
  before_script:
   - aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_NUMBER.dkr.ecr.$REGION.amazonaws.com

  script:
    - echo "Building the Docker image..."
    - docker build -t $IMAGE_REPOSITORY .
    - echo "Tagging the image..."
    - docker tag $IMAGE_REPOSITORY:latest $AWS_ACCOUNT_NUMBER.dkr.ecr.$REGION.amazonaws.com/$IMAGE_REPOSITORY:$CI_COMMIT_BRANCH-latest
    - docker tag $IMAGE_REPOSITORY:latest $AWS_ACCOUNT_NUMBER.dkr.ecr.$REGION.amazonaws.com/$IMAGE_REPOSITORY:$CI_COMMIT_BRANCH-$CI_COMMIT_SHORT_SHA
    - echo "Pushing the image to ECR..."
    - docker push $AWS_ACCOUNT_NUMBER.dkr.ecr.$REGION.amazonaws.com/$IMAGE_REPOSITORY:$CI_COMMIT_BRANCH-latest
    - docker push $AWS_ACCOUNT_NUMBER.dkr.ecr.$REGION.amazonaws.com/$IMAGE_REPOSITORY:$CI_COMMIT_BRANCH-$CI_COMMIT_SHORT_SHA

deploy_to_dev:
  stage: deploy_to_dev
  rules:
    - if: '$CI_COMMIT_BRANCH == "dev"'
  script:
    - echo "Deploying to development environment..."
    - |
      aws ecs update-service \
        --cluster         $CLUSTER_NAME \
        --service         $DEV_SERVICE_NAME \
        --task-definition $DEV_TASK_NAME \
        --force-new-deployment

deploy_to_production:
  stage: deploy_to_production
  when: manual                                  # Require manual confirmation for production
  manual_confirmation: 'Proceed with production deployment?' 
  allow_failure: false                          # Must succeed to continue
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
  script:
    - echo "Deploying to production environment..."
    - |
      aws ecs update-service \
        --cluster         $CLUSTER_NAME \
        --service         $PROD_SERVICE_NAME \
        --task-definition $PROD_TASK_NAME \
        --force-new-deployment

finalize_pipeline:
  stage: finalize_pipeline
  script:
    - echo "CI/CD Pipeline completed successfully!"

Once you merge a change into either the dev or the main branch, the pipeline is triggered and runs the stages above, rebuilding the image and updating the application.
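
For example, a typical change aimed at the dev environment could look like this (the feature branch name is illustrative; merging the merge request into dev is what triggers the deployment):

git checkout -b feature/update-app dev
# ... edit the application code ...
git add .
git commit -m "Update application"
git push origin feature/update-app
# Open a merge request from feature/update-app into dev and merge it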


Python Application and Dockerfile

I will be using a simple Python (Flask) application for this demo, which converts an uploaded image to a PDF.

from flask import Flask, request, send_file, jsonify
from PIL import Image
import os

app = Flask(__name__)
UPLOAD_FOLDER = 'uploads'
OUTPUT_FOLDER = 'output'
os.makedirs(UPLOAD_FOLDER, exist_ok=True)
os.makedirs(OUTPUT_FOLDER, exist_ok=True)

@app.route('/')
def home():
    return '''
    <!doctype html>
    <title>Image to PDF Converter</title>
    <h1>Upload an image to convert to PDF</h1>
    <form method="post" action="/convert" enctype="multipart/form-data">
        <input type="file" name="image" accept="image/*">
        <button type="submit">Convert to PDF</button>
    </form>
    '''

@app.route('/convert', methods=['POST'])
def convert_to_pdf():
    if 'image' not in request.files:
        return "No file uploaded", 400

    file = request.files['image']
    if file.filename == '':
        return "No selected file", 400

    try:
        # Save the uploaded file
        image_path = os.path.join(UPLOAD_FOLDER, file.filename)
        file.save(image_path)

        # Convert to PDF
        image = Image.open(image_path)
        if image.mode != 'RGB':
            image = image.convert('RGB')
        pdf_path = os.path.join(OUTPUT_FOLDER, f"{os.path.splitext(file.filename)[0]}.pdf")
        image.save(pdf_path, "PDF")

        # Send the PDF file as a response
        return send_file(pdf_path, as_attachment=True)
    except Exception as e:
        return f"An error occurred: {e}", 500

# Health check endpoint for ALB
@app.route('/api/health', methods=['GET'])
def health_check():
    return jsonify({"status": "healthy"}), 200

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080, debug=True)
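
The /api/health route above is what the ALB target group health check points at. In the ECS module’s Terraform, that is typically wired up roughly like this (a sketch with assumed resource and output names, not the repo’s exact code):

# Sketch only: names and thresholds are assumptions
resource "aws_lb_target_group" "web_app_dev" {
  name        = "web-app-dev"
  port        = 8080
  protocol    = "HTTP"
  target_type = "ip"                                           # Fargate tasks register by IP
  vpc_id      = data.terraform_remote_state.vpc.outputs.vpc_id # output name is an assumption

  health_check {
    path                = "/api/health"
    matcher             = "200"
    interval            = 30
    healthy_threshold   = 2
    unhealthy_threshold = 3
  }
}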

Dockerfile

# Use an official Python runtime as the base image
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /app

# Copy the application code and requirements into the container
COPY . /app

# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Expose port 8080 for the Flask application
EXPOSE 8080

# Run the Flask application
CMD ["python", "app.py"]
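
The Dockerfile installs dependencies from a requirements.txt that sits next to app.py. It is not shown above, but based on the imports in app.py it needs at least the following (a minimal, unpinned assumption):

flask
pillow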

Enjoy your project!
