GitOps for CloudFront and S3

Front-end deployments in GitOps style

This strategy uses ArgoCD, EKS, and AWS services like CloudFront and S3 to streamline deployments, improve performance, and maintain best practices.

Table of Contents

Building and Syncing to S3
Deploying with Helm Chart and Kubernetes Job

In the world of modern web development, deploying front-end applications efficiently and reliably is a key challenge. As teams adopt GitOps strategies to streamline and automate deployments, certain complexities arise, particularly when integrating with AWS services like CloudFront and S3.

So let’s assume an ideal setup: all our workloads are containerized and run on a Kubernetes (EKS) platform, all security checks, automations, tests, and pipelines are in place, and we have already adopted ArgoCD and supplementary tools for deployments.

Now one common dilemma is deciding how to manage front-end deployments consistently in GitOps style while maintaining the benefits of using CloudFront for caching and performance optimization. Some teams consider moving front-end assets to containers for consistency, but this can introduce unnecessary complexity and deviate from best practices.

When employing a centralized GitOps strategy, it’s crucial to keep the deployment process consistent and manageable. However, front-end applications often require specific considerations:

Caching and Performance: CloudFront provides a robust solution for caching and delivering static assets, ensuring high performance and low latency.
Artifact Management: Synchronizing build artifacts to the correct S3 paths while managing different versions can be challenging.
Deployment Automation: Automating the deployment process while ensuring the correct paths and versions are updated in CloudFront.
Consistency and Reproducibility: Maintaining consistent and reproducible deployments across environments.
Rollbacks: Keeping rollbacks easy and rapid, if possible, of course. 😌


In this article, I will share a solution I implemented to address these challenges. This approach leverages ArgoCD, EKS, AWS CloudFront, and S3, integrating them seamlessly into a GitOps workflow. By using a Kubernetes job with AWS CLI, we can manage CloudFront paths dynamically, ensuring our front-end application is always up-to-date and efficiently delivered.

1. A release branch is merged to main.
2. A new release tag is created from main.
3. GitHub Actions is triggered on the release to test and build the code.
4. Generated artifacts are tagged and synced to S3 under the respective path.
5. A developer creates a pull request to pass the new version to the GitOps repo.
6. The PR is merged, and the values file is updated with the new version.
7. ArgoCD picks up the change after being triggered via webhook or by polling.
8. The values diff triggers creation of a new job by ArgoCD.
9. The Kubernetes Job sends an API call to CloudFront to swap the origin path based on the new version.
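Steps 5 and 6 above amount to a one-line change: bumping the version key in the GitOps repo’s Helm values file. A tiny sketch (the file name is hypothetical):

```shell
# Bump the release version in the GitOps repo's values file; this is the
# diff that the pull request carries and that ArgoCD later reacts to.
cd "$(mktemp -d)"
echo "version: v1.0.9" > values.yaml
NEW_TAG="v1.0.10"
sed -i "s/^version: .*/version: ${NEW_TAG}/" values.yaml
cat values.yaml   # -> version: v1.0.10
```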

Building and Syncing to S3

To deploy our front-end application, we use GitHub Actions to handle the build and deployment process. The workflow triggers on new version tags, checks out the repository, sets up the build environment, and configures AWS credentials. It retrieves secrets, installs dependencies, runs tests, builds the application, and syncs the output to an S3 bucket. We can, of course, run multiple parallel workflows, one per environment, syncing to different S3 buckets under a dedicated path that reflects the release tag, or sync everything to the same S3 bucket under a dedicated path that reflects the target environment plus the release tag (I prefer the first option for total segregation).

For instance, if we’re deploying a version tagged v1.0.0 to the production environment, the path in S3 would be s3://frontend-production-artifacts/production/v1.0.0.
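The destination key is simply assembled from bucket, environment, and release tag; a quick illustration with the article’s example values:

```shell
# Assemble the S3 destination path the workflow syncs artifacts to.
# Values are the article's examples; nothing here calls AWS.
ARTIFACTS_BUCKET="frontend-production-artifacts"
ENVIRONMENT="production"
RELEASE_TAG="v1.0.0"
S3_DEST="s3://${ARTIFACTS_BUCKET}/${ENVIRONMENT}/${RELEASE_TAG}"
echo "$S3_DEST"   # -> s3://frontend-production-artifacts/production/v1.0.0
```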

name: Publish Release Artifact Version
run-name: "Production Release ${{ github.ref_name }} | Publishing Artifact Version | triggered by @${{ github.actor }}"

on:
  push:
    tags:
      - 'v*'

env:
  ARTIFACTS_BUCKET: frontend-production-artifacts
  AWS_REGION: us-east-2
  ENVIRONMENT: production

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Get Secrets
        uses: aws-actions/aws-secretsmanager-get-secrets@v2
        with:
          secret-ids: frontend-secrets-${{ env.ENVIRONMENT }}
          parse-json-secrets: true

      - name: Install dependencies
        run: yarn install

      - name: Run tests
        run: yarn test

      - name: Build
        run: yarn build

      - name: Sync files to Artifacts bucket
        run: aws s3 sync build/ s3://${{ env.ARTIFACTS_BUCKET }}/${{ env.ENVIRONMENT }}/${{ github.ref_name }} --delete

Deploying with Helm Chart and Kubernetes Job

To automate the deployment process further, we can use a Helm chart that defines a Kubernetes job. This job handles updating the CloudFront origin path for the new version of our application using a Docker image with AWS CLI installed.

We have a values file that provides parameters like the application name, version, Docker image, S3 bucket name, and AWS region:

region: us-east-2

name: store-ui
version: v1.0.10
backOffLimit: 4
jobImage: amazon/aws-cli:2.16.1
originS3: frontend-production-artifacts

The Kubernetes job uses these values to dynamically set its configuration. It includes the job name, the container image, and environment variables for the S3 bucket, origin path, and AWS region.

When the job runs, it installs jq for JSON processing, retrieves the CloudFront distribution ID based on the S3 bucket name, fetches the current CloudFront configuration, updates the origin path to the new version, and invalidates the CloudFront cache to ensure the latest version is served to users. Of course, you can always build your own lightweight Docker image with all dependencies preinstalled (aws-cli and jq), or even build your own solution by leveraging the AWS SDK directly.

apiVersion: batch/v1
kind: Job
metadata:
  name: swap-cf-origin-path-{{ .Values.name }}-{{ .Values.version }}
spec:
  template:
    spec:
      serviceAccountName: {{ .Values.name }}
      containers:
        - name: aws-cli
          image: {{ .Values.jobImage }}
          env:
            - name: S3_BUCKET_NAME
              value: {{ .Values.originS3 }}
            - name: ORIGIN_PATH
              value: /{{ .Release.Namespace }}/{{ .Values.version }}
            - name: AWS_REGION
              value: {{ .Values.region }}
          command: ["/bin/sh", "-c"]
          args:
            - |
              set -e
              yum install jq -y

              CF_DIST_ID=$(aws cloudfront list-distributions --query "DistributionList.Items[?contains(Origins.Items[].DomainName, '${S3_BUCKET_NAME}.s3.${AWS_REGION}')].Id | [0]" --output text)

              OUTPUT=$(aws cloudfront get-distribution-config --id $CF_DIST_ID)
              ETAG=$(echo "$OUTPUT" | jq -r '.ETag')
              DIST_CONFIG=$(echo "$OUTPUT" | jq '.DistributionConfig')

              UPDATED_CONFIG=$(echo "$DIST_CONFIG" | jq --arg path "${ORIGIN_PATH}" '.Origins.Items[0].OriginPath = $path')

              aws cloudfront update-distribution --id $CF_DIST_ID --if-match $ETAG --distribution-config "$UPDATED_CONFIG"

              aws cloudfront create-invalidation --distribution-id $CF_DIST_ID --paths "/*"
      restartPolicy: Never
  backoffLimit: {{ .Values.backOffLimit }}

To allow our Kubernetes job to interact with AWS services like CloudFront and S3, we need to grant the necessary permissions to the job’s service account. We can achieve this by using IAM Roles for Service Accounts (IRSA) or Pod Identities. Here’s how you can configure the IRSA option using Terraform. This setup allows the Kubernetes job to securely perform the actions required to update the CloudFront origin path and invalidate the cache.

data "aws_iam_policy_document" "service_account_assume_role" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    effect  = "Allow"

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.oidc_provider_sts.url, "https://", "")}:aud"
      values   = ["sts.amazonaws.com"]
    }

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.oidc_provider_sts.url, "https://", "")}:sub"
      values   = ["system:serviceaccount:${var.namespace}:store-ui"]
    }

    principals {
      identifiers = [aws_iam_openid_connect_provider.oidc_provider_sts.arn]
      type        = "Federated"
    }
  }
}

resource "aws_iam_role" "service_account_role" {
  assume_role_policy = data.aws_iam_policy_document.service_account_assume_role.json
  name               = "store-ui-sa-role-${var.namespace}"
  tags               = local.default_tags

  lifecycle {
    create_before_destroy = false
  }
}

resource "aws_iam_policy" "store_ui_swap_origin_policy" {
  name        = "store-ui-policy-${var.namespace}"
  path        = "/"
  description = "IAM policy for store-ui job service account"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "cloudfront:ListDistributions",
          "cloudfront:GetDistributionConfig",
          "cloudfront:UpdateDistribution",
          "cloudfront:CreateInvalidation"
        ]
        Effect   = "Allow"
        Resource = "*"
      }
    ]
  })
}

# Attach the policy to the service account role
resource "aws_iam_role_policy_attachment" "service_account_role" {
  role       = aws_iam_role.service_account_role.name
  policy_arn = aws_iam_policy.store_ui_swap_origin_policy.arn
}

At this point, all we need to do is push the new version after building and syncing it to S3. The job will handle updating the CloudFront origin path and invalidating the cache, ensuring that users always get the latest version of our front-end application. For an even more polished approach, we could implement an additional Continuous Deployment (CD) solution on top of ArgoCD, such as OctopusDeploy. However, that is a topic for another day and another discussion 😉 Farewell, folks! 😊