Comprehensive Guide: AWS EKS deployment using Github Actions, Docker and ArgoCD for React Applications

Joel Wembo
14 min read · Apr 16, 2024


Comprehensive Guide: AWS EKS deployment using Github Actions and ArgoCD for React Application DevOps Architectural Diagram

Abstract

Deploying a frontend application to Kubernetes using Amazon EKS gives developers a scalable and reliable way to run their applications. By leveraging the benefits of containerization and Kubernetes, developers can ensure that their applications run efficiently and securely.

Amazon EKS is a managed service, meaning that AWS takes care of much of the underlying infrastructure and configuration. This means that you do not need to worry about setting up Kubernetes clusters, configuring and managing nodes, or installing and managing Kubernetes services.

Kubernetes is currently the de facto standard for deploying applications in the cloud. Every major cloud provider offers a dedicated Kubernetes service (e.g., Google Cloud with GKE, AWS with EKS) to deploy applications in a Kubernetes cluster.

Table of Contents

· Abstract
· Table of Contents
· Prerequisites:
· 1. Create AWS Access Keys
· 2. Dockerize the React application
· 2.1. Create the React Application
· 2.2. Add deployment.yaml to your project
· 2.3. Load Balancer External IP (Optional)
· 3. AWS EKS Cluster Provisioning Using AWS CLI, EKS CLI and Github Actions
· 4. Check your CloudFormation Stack
· 5. ArgoCD
· Let's go with Service Type LoadBalancer
· Clean up
· Summary
· References

This article guides you through setting up a comprehensive CI/CD pipeline using GitHub Actions workflows, Docker, ArgoCD, and AWS EKS. It covers provisioning an EKS cluster from your Ubuntu machine and GitHub Actions, adding AWS and Docker Hub credentials, installing Docker, building the React application for production, and performing declarative continuous delivery with ArgoCD on the EKS cluster. By following this technical guide, you'll gain hands-on experience automating the build, test, and deployment processes of your applications.

There are many reasons for choosing Kubernetes for deploying your React application:

  • unified and standardized deployment model across the cloud providers
  • robustness against downtime as several containers are deployed (horizontal scaling)
  • handling peak traffic with auto-scaling
  • zero-downtime deployments, canary deployments, etc.
  • simple A/B testing
Amazon EKS

Prerequisites:

Before we get into the good stuff, we first need to make sure the required tools and services are available on our local machine or dev server:

  1. AWS Account
  2. Github Account
  3. AWS CLI installed and configured.
  4. EKS CLI
  5. Docker installed locally.
  6. NPM
  7. NodeJS
  8. Terraform
  9. Basic Understanding of Github Actions and Kubernetes
  10. A domain name hosted with any domain name provider (e.g., AWS Route 53)
  11. Basic familiarity with YAML and GitHub workflows.
  12. Basic knowledge of HTML or React
  13. Any Browser for testing
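Before proceeding, you can quickly confirm the CLI tools from the list above are installed and on your PATH. This is an optional sanity check; the exact version output will vary by machine:

```shell
# Verify each required CLI responds; any "command not found" means
# the corresponding prerequisite still needs to be installed
aws --version
eksctl version
docker --version
kubectl version --client
node -v    # should report >= 14.0.0 for Create React App
npm -v     # should report >= 5.6
terraform -version
```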

You can follow along with this source code:

1. Create AWS Access Keys

AWS access keys are credentials used to access Amazon Web Services (AWS) programmatically. They consist of an access key ID and a secret access key. These keys are used to authenticate requests made to AWS services via APIs, SDKs, command-line tools, and other means.

Steps to Create Access Keys

  1. Go to the AWS management console, click on your Profile name, and then click on My Security Credentials. …
  2. Go to Access Keys and select Create New Access Key. …
  3. Click on Show Access Key and save/download the access key and secret access key.
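Once you have the keys, you can wire them into the AWS CLI. The values below are placeholders, not real credentials; `sts get-caller-identity` is a cheap way to confirm the keys authenticate correctly:

```shell
# Configure the AWS CLI with the newly created access keys (placeholders)
aws configure set aws_access_key_id     AKIAXXXXXXXXXXXXXXXX
aws configure set aws_secret_access_key "your-secret-access-key"
aws configure set default.region        us-east-1

# Confirm the credentials work: this prints your account ID and ARN
aws sts get-caller-identity
```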

2. Dockerize the React application

Docker is a popular platform for developing, shipping, and running applications.

Docker provides developers and organizations with a flexible, efficient, and scalable platform for building, deploying, and managing applications in various environments.

2.1. Create the React Application

React is a free and open-source front-end JavaScript library for building user interfaces based on components. It is maintained by Meta and a community of individual developers and companies.

Step 1 : Create your React Application:

To start, create a repository in your GitHub account, then generate the project with Create React App. It sets up your development environment so that you can use the latest JavaScript features, provides a nice developer experience, and optimizes your app for production. You'll need Node >= 14.0.0 and npm >= 5.6 on your machine. To create a project, run:

npx create-react-app prodx-reactwebui-react-demo-1
cd prodx-reactwebui-react-demo-1
npm start

To get ahead, you can download the source code here:

Step 2: Add Dockerfile in your root directory

# Use an official Node runtime as a parent image
FROM node:19-alpine as build
# Set the working directory to /app
WORKDIR /app
# Copy the package.json and package-lock.json to the container
COPY package*.json ./
# COPY public ./

# Install dependencies
RUN npm install
# Copy the rest of the application code to the container
COPY . .
# Build the React app
RUN npm run build
# Use an official Nginx runtime as a parent image
FROM nginx:1.21.0-alpine
# Copy the nginx.conf to the container
COPY nginx.conf /etc/nginx/conf.d/default.conf
# Copy the React app build files to the container
COPY --from=build /app/build /usr/share/nginx/html
# Expose port 80 for Nginx
EXPOSE 80
# Start Nginx when the container starts
CMD ["nginx", "-g", "daemon off;"]

Step 3: Add docker-compose.yml to your root directory

version: '3'

services:
  my-react-app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf

Step 4: Add nginx.conf to your root directory

server {
    listen 80;
    server_name a611c18c9b1194c41880bbf59a2630bb-735723108.us-east-1.elb.amazonaws.com;
    root /usr/share/nginx/html;
    index index.html;
    location / {
        try_files $uri $uri/ /index.html;
    }
}

Step 5: Build and run the image

docker build -t joelwembo/prodxcloud:latest .
docker run -p 80:80 --name react joelwembo/prodxcloud:latest
# or simply run docker-compose up for local testing first

Step 6: Push your image to Docker Hub

docker push joelwembo/prodxcloud:latest
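The push assumes you are already authenticated against Docker Hub. If not, log in first; and if you built the image under a different local name, retag it to match your Docker Hub repository (the `<your-dockerhub-username>` placeholder stands in for your own account):

```shell
# Authenticate against Docker Hub (prompts for password or access token)
docker login -u <your-dockerhub-username>

# Retag a locally built image so its name matches the Docker Hub repo,
# then push it
docker tag prodxcloud:latest <your-dockerhub-username>/prodxcloud:latest
docker push <your-dockerhub-username>/prodxcloud:latest
```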

2.2. Add deployment.yaml to your project

A Deployment manages a set of Pods to run an application workload, usually one that doesn’t maintain state.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prodxcloud-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: prodxcloud
  template:
    metadata:
      labels:
        app: prodxcloud
    spec:
      containers:
        - name: prodxcloud
          image: joelwembo/prodxcloud:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
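Once the cluster is available you can apply this manifest and watch the rollout. A short sketch, assuming the manifest is saved as deployment.yaml and your kubeconfig already points at the cluster:

```shell
# Create (or update) the Deployment and block until all replicas are ready
kubectl apply -f deployment.yaml
kubectl rollout status deployment/prodxcloud-deployment

# All three replicas should report STATUS Running
kubectl get pods -l app=prodxcloud
```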

2.3. Load Balancer External IP (Optional)

The load balancer tracks the availability of pods with the Kubernetes Endpoints API. When it receives a request for a specific Kubernetes service, the Kubernetes load balancer sorts, or round-robins, the request among the relevant pods for that service. Since we are using EKS, the load balancer itself will be created by EKS.

apiVersion: v1
kind: Service
metadata:
  name: prodxcloud
spec:
  # externalIPs:
  #   - 18.141.186.159
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 80
      nodePort: 31000
  selector:
    app: prodxcloud
  type: LoadBalancer
status:
  loadBalancer: {}
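After applying this Service, EKS provisions an ELB and publishes its DNS name in the EXTERNAL-IP column, which can take a minute or two. A sketch, assuming the manifest above is saved as service.yaml:

```shell
kubectl apply -f service.yaml

# The EXTERNAL-IP column shows the ELB DNS name once provisioning finishes
kubectl get service prodxcloud

# Or extract just the load balancer hostname:
kubectl get service prodxcloud \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```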

3. AWS EKS Cluster Provisioning Using AWS CLI, EKS CLI and Github Actions

In this step, we are going to automate the provisioning of AWS EKS using GitHub actions.

GitHub Actions is a feature provided by GitHub that allows you to automate various tasks within your software development workflows directly from your GitHub repository. It enables you to build, test, and deploy your code directly within GitHub’s ecosystem.

3.1 Now, let's set up our AWS environment. We will be using Terraform to create our infrastructure. We will be creating the following main resources:

  • Amazon EKS
  • Amazon VPC
  • Load Balancer
  • IAM roles and policies

Project Folder Structure

Current Project Structure

First, let's set up our GitHub Actions and environment settings:

  1. On GitHub.com, navigate to the main page of the repository.
  2. Under your repository name, click Settings. …
  3. In the “Security” section of the sidebar, select Secrets and variables, then click Actions.
  4. Click the Secrets tab.
  5. Click New repository secret.
Actions Secrets and variables
Adding Docker Credentials in Github Actions

3.2 Github Workflows

A workflow is a configurable automated process that will run one or more jobs. Workflows are defined by a YAML file checked in to your repository and will run when triggered by an event in your repository, or they can be triggered manually, or at a defined schedule.

In the .github/workflows directory, create a file with the .yml or .yaml extension. This tutorial will use deploy.yaml as the file name.

The initial part of the workflow sets permissions and environment variables:

name: CI/CD Pipeline for React App to AWS EKS

on:
  push:
    branches: ['master', 'main']
  pull_request:
    branches: ['master', 'main']

permissions:
  contents: write

env:
  DOCKERHUB_USERNAME: ${{ secrets.DOCKER_USERNAME }}
  DOCKERHUB_TOKEN: ${{ secrets.DOCKER_PASSWORD }}
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: "us-east-1"

Second, we need to build the Docker image and publish it to Docker Hub:

  push-docker-image:
    name: Build Docker image and push to repositories
    # run only when code is compiling and tests are passing
    runs-on: ubuntu-latest
    needs: ['build']
    # steps to perform in job
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ env.DOCKERHUB_USERNAME }}
          password: ${{ env.DOCKERHUB_TOKEN }}
      - run: docker build -t joelwembo/prodxcloud:latest --no-cache .
      - run: docker push joelwembo/prodxcloud:latest

Last, we need to deploy and retrieve the load balancer details for our service:

  deploy:
    runs-on: ubuntu-latest
    needs: ['build', 'push-docker-image', 'provision-aws-eks-cluster']
    steps:
      - name: AWS EKS Deployment
        uses: actions/checkout@v3

      - name: Pull the Docker image
        run: docker pull joelwembo/prodxcloud:latest

      - name: Update kubeconfig
        run: aws eks --region us-east-1 update-kubeconfig --name my-demo-cluster
        env:
          AWS_ACCESS_KEY_ID: ${{ env.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ env.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ env.AWS_DEFAULT_REGION }}

      - name: Apply deployment
        run: kubectl apply -f deployment.yaml
        env:
          AWS_ACCESS_KEY_ID: ${{ env.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ env.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ env.AWS_DEFAULT_REGION }}

      - name: Expose service
        run: kubectl expose deployment prodxcloud-deployment --type=LoadBalancer --name=my-service
        env:
          AWS_ACCESS_KEY_ID: ${{ env.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ env.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ env.AWS_DEFAULT_REGION }}

Here is the complete file

# This workflow will do a clean installation of node dependencies, cache/restore them, build the source code and run tests across different versions of node
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-nodejs
name: CI/CD Pipeline for React App to AWS EKS

on:
  push:
    branches: ['master', 'main']
  pull_request:
    branches: ['master', 'main']

permissions:
  contents: write

env:
  DOCKERHUB_USERNAME: ${{ secrets.DOCKER_USERNAME }}
  DOCKERHUB_TOKEN: ${{ secrets.DOCKER_PASSWORD }}
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: "us-east-1"

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18]
        # See supported Node.js release schedule at https://nodejs.org/en/about/releases/
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
      # - run: npm ci
      # - run: npm run build --if-present
      # - run: npm list

  push-docker-image:
    name: Build Docker image and push to repositories
    # run only when code is compiling and tests are passing
    runs-on: ubuntu-latest
    needs: ['build']
    # steps to perform in job
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      # setup Docker build action
      # - name: Set up Docker Buildx
      #   id: buildx
      #   uses: docker/setup-buildx-action@v2

      - name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ env.DOCKERHUB_USERNAME }}
          password: ${{ env.DOCKERHUB_TOKEN }}
      - run: docker build -t joelwembo/prodxcloud:latest --no-cache .
      - run: docker push joelwembo/prodxcloud:latest

  provision-aws-eks-cluster:
    runs-on: ubuntu-latest
    needs: ['build', 'push-docker-image']
    steps:
      - name: AWS EKS Deployment
        uses: actions/checkout@v3

      - name: Create eks cluster
        run: aws eks create-cluster --region us-east-1 --name my-demo-cluster --role-arn < from user_account > --resources-vpc-config < here >
        env:
          AWS_ACCESS_KEY_ID: ${{ env.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ env.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ env.AWS_DEFAULT_REGION }}

  deploy:
    runs-on: ubuntu-latest
    needs: ['build', 'push-docker-image', 'provision-aws-eks-cluster']
    steps:
      - name: AWS EKS Deployment
        uses: actions/checkout@v3

      - name: Pull the Docker image
        run: docker pull joelwembo/prodxcloud:latest

      - name: Update kubeconfig
        run: aws eks --region us-east-1 update-kubeconfig --name my-demo-cluster
        env:
          AWS_ACCESS_KEY_ID: ${{ env.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ env.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ env.AWS_DEFAULT_REGION }}

      - name: Apply deployment
        run: kubectl apply -f deployment.yaml
        env:
          AWS_ACCESS_KEY_ID: ${{ env.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ env.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ env.AWS_DEFAULT_REGION }}

      - name: Expose service
        run: kubectl expose deployment prodxcloud-deployment --type=LoadBalancer --name=my-service
        env:
          AWS_ACCESS_KEY_ID: ${{ env.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ env.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ env.AWS_DEFAULT_REGION }}

      - name: Load Balancer DNS
        run: kubectl get services my-service
        env:
          AWS_ACCESS_KEY_ID: ${{ env.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ env.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ env.AWS_DEFAULT_REGION }}

3.3 Push your changes to your repositories

Here is the React app running on EKS

Comprehensive Guide: AWS EKS deployment using Github Actions, Docker and ArgoCD for React Applications ( Made easy )

4. Check your CloudFormation Stack

Service Fully Created

Check your EKS cluster in AWS management console

Running Pods

5. ArgoCD

From your local Ubuntu or Windows machine, you can check your running pods and services

EKS Cluster is up and ready.

Now let's install Argo CD in the EKS cluster

# This will create a new namespace, argocd, where Argo CD services and application resources will live.
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
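The install manifest creates several Deployments and StatefulSets, so it is worth waiting for them to become ready before continuing. A minimal check, assuming the default component names from the stable install manifest:

```shell
# Block until the Argo CD API server is available (timeout is arbitrary)
kubectl wait --for=condition=available deployment/argocd-server \
  -n argocd --timeout=300s

# All argocd pods should report Running before you try to log in
kubectl get pods -n argocd
```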

Download Argo CD CLI

curl -sSL -o argocd-linux-amd64 https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
sudo install -m 555 argocd-linux-amd64 /usr/local/bin/argocd
rm argocd-linux-amd64

You can also work with the Argo CD CLI from any Ubuntu server at this stage of the tutorial.

To log in to Argo CD, get the initial password using the next command:

argocd admin initial-password -n argocd

Change the Argo CD admin password

kubectl -n argocd patch secret argocd-secret -p '{"stringData": { "admin.password": "$2a$10$rRyBsGSHK6.uc8fntPwVIuLVHgsAhAX7TcdrqW/RADU0uh7CaChLa",  "admin.passwordMtime": "'$(date +%FT%T%Z)'" }}'
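The value patched into admin.password must be a bcrypt hash, not the plaintext password. Recent versions of the argocd CLI can generate one for you (if your CLI version lacks this subcommand, any bcrypt generator works):

```shell
# Produce a bcrypt hash suitable for the admin.password field above
argocd account bcrypt --password 'your-new-password'
```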

You can view the newly created secrets in the AWS EKS control plane:

Because Kubernetes deploys services to arbitrary network addresses inside your cluster, you'll need to forward the relevant ports in order to access them from your local machine. Argo CD sets up a service named argocd-server on port 443 internally. Because port 443 is the default HTTPS port, and you may be running some other HTTP/HTTPS services, it's common practice to forward it to an arbitrarily chosen other port, like 8080. Instead of port forwarding, however, we will expose the Argo CD server through a load balancer.

Let's go with Service Type LoadBalancer.

# Change the argocd-server service type to LoadBalancer.
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
Comprehensive Guide: AWS EKS deployment using Github Actions, Docker and ArgoCD for React Applications ( Made easy )
Argo CD pods
Argo CD Server

Next, let's get the load balancer DNS name

kubectl get svc -n argocd

We can now obtain the External IP of our ArgoCD server

Access Argo CD using username admin and password password (the bcrypt hash patched earlier corresponds to the string "password")

Enter the repository URL, set the path to ./, the cluster URL to https://kubernetes.default.svc, the namespace to default, and click Save.
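The same application can be registered from the CLI instead of the web UI. A sketch with placeholder repository coordinates (substitute your own GitHub user and repo):

```shell
# Register the app with Argo CD, mirroring the values entered in the UI
argocd app create prodxcloud \
  --repo https://github.com/<your-user>/<your-repo>.git \
  --path . \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default

# Trigger an initial sync so Argo CD deploys the manifests
argocd app sync prodxcloud
```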

Clean up

## Delete the service and cluster

kubectl delete -f deployment.yaml
kubectl delete svc my-service
eksctl delete cluster --name my-demo-cluster

Considerations for deploying a React application to AWS EKS using GitHub Actions and ArgoCD:

Automation and Efficiency:

  • By automating the deployment process with GitHub Actions and ArgoCD, you streamline the workflow from code commit to production deployment. This reduces manual errors and speeds up the delivery of new features.

Scalability and Flexibility:

  • AWS EKS provides a scalable and managed Kubernetes service, allowing you to easily scale your React application as demand grows. Kubernetes also offers features such as rolling updates, auto-scaling, and self-healing that add flexibility to how the application runs.

Summary

Setting up a CI/CD pipeline for a React application, containerizing it with Docker, and deploying it to AWS EKS provides a streamlined workflow for development, testing, and deployment. By leveraging GitHub Actions for CI and AWS EKS for deployment, teams can automate the build, test, and deployment processes, leading to faster delivery of features and improvements. With the Docker containerization, the application becomes portable and can be deployed consistently across different environments. Additionally, AWS EKS offers scalability and reliability through Kubernetes orchestration, enabling efficient management of containerized applications. By implementing these best practices, teams can enhance collaboration, accelerate time-to-market, and ensure the stability and scalability of their applications.

You can also find the codes on Github here.

Thank you for Reading !! 🙌🏻, don’t forget to subscribe and give it a CLAP 👏see you in the next article.🤘

About me

I am Joel O'Wembo, AWS certified cloud architect, back-end developer, and AWS Community Builder, based in the Philippines 🇵🇭. I bring a powerful combination of expertise in cloud architecture, DevOps practices, and a deep understanding of high availability (HA) principles. I leverage my knowledge to create robust, scalable cloud applications using open-source tools for efficient enterprise deployments.

For more information about the author ( Joel O. Wembo ) visit:


References
