DevOps Engineering: End-to-End CI/CD Pipeline for React Applications to AWS CloudFront using Ansible, Jenkins, and Terraform

Joel Wembo
Published in Towards AWS
Apr 19, 2024


End-to-End CI/CD Pipeline for React Applications to AWS CloudFront using Ansible, Jenkins, and Terraform: Architecture Diagram by Joel Wembo

Table of Contents

· Table of Contents
· Abstract
· Introduction
· Prerequisites:
· Solution overview
· 1. Create AWS Access Keys
· 2. Set Up Custom Domain
· 3. Create Route53 Hosted Zone
· 4. Create your React / Front-End Application
· 5. Provision an AWS EC2 instance with Ansible
· 6. Jenkins Installation
6.1 Finalize your Jenkins installation
6.2 Install Jenkins plugins
6.3 Add AWS Credentials
· 7. Setting up a CI/CD pipeline using Terraform
7.1. Variables definitions
7.2. provider.tf
7.3. Create an S3 Bucket for static content
7.4. backend.tf
7.4.1 S3 bucket for terraform state management
7.4.2 Terraform Cloud Configuration
7.5. Provision AWS Certificate Manager and Validate
7.6. Configure CloudFront Distribution
7.7. Route53
7.8. Setup Bucket Policy:
· 8. Create a new Jenkins Job
· 9. Check result in your AWS Management Console
· Summary

Abstract

Jenkins, an open-source continuous integration server, is a popular choice for CI/CD pipeline orchestration because it allows you to build, test, and deploy your code continuously and autonomously. When paired with the right tools such as Terraform and Ansible, Jenkins can be used to automate end-to-end (E2E) testing for your web applications.

Introduction

Jenkins can be a very good tool for automated testing and deployment, but it’s not the only option, and there are some factors to consider for a good fit:

Advantages of Jenkins for Testing and Deployment:

  • Open Source and Flexible: Being open-source, Jenkins is free to use and offers a high degree of customization through plugins for various testing frameworks and deployment tools.
  • Mature and Feature-Rich: As a well-established tool, Jenkins has a large community and extensive documentation. It supports a wide range of functionalities for building, testing, and deploying applications.
  • CI/CD Pipeline Orchestration: Jenkins excels at orchestrating the entire CI/CD pipeline. You can define stages for building, testing, and deployment, triggering them automatically based on code changes.

Disadvantages of Jenkins to Consider:

  • Steeper Learning Curve: Setting up and configuring Jenkins, especially with complex pipelines, can have a steeper learning curve compared to some newer tools.
  • Scalability Considerations: While Jenkins can handle many builds, managing a large number of concurrent jobs or complex workflows can require significant resources.
  • Maintenance Burden: Keeping Jenkins updated with plugins and security patches can be an ongoing task for your development team.

Is it the right tool for you?

Here’s a breakdown to help you decide:

  • Good fit: If you have a complex testing and deployment process, a large team, and value the customizability of open-source tools, Jenkins can be a great choice.
  • Consider alternatives: If you have a smaller team, a simpler workflow, or prefer a more user-friendly interface, there are managed CI/CD services offered by major cloud providers (AWS CodePipeline, Azure DevOps) or other tools like GitLab CI/CD or CircleCI that might be a better fit.

Ultimately, the best tool depends on your specific needs and preferences. Consider the factors mentioned above and research alternatives to see what best suits your development workflow.

In this post, I explain how to use the Jenkins open-source automation server to deploy AWS CloudFront, ACM for SSL Certification, S3 bucket for static web hosting, and Route53 for custom domain names with Terraform, creating a functioning CI/CD pipeline. When properly implemented, the CI/CD pipeline is triggered by code changes pushed to your GitHub repo, automatically fed into a new Jenkins Job, then the output is deployed on AWS CloudFront and S3.

Prerequisites:

Before we get into the good stuff, first we need to make sure we have the required services on our local machine or dev server, which are:

  1. AWS Account
  2. GitHub Account
  3. AWS CLI installed and configured.
  4. Docker installed locally.
  5. NPM
  6. NodeJS
  7. Terraform
  8. Basic Understanding of Jenkins
  9. A Domain name Hosted by any domain name provider ( Ex: AWS Route 53 )
  10. Basic familiarity with YAML and GitHub workflows.
  11. A React Project hosted in a GitHub repository
  12. Basic knowledge of HTML or React
  13. Any Browser for testing

You can follow along with this source code:

Solution overview

The functioning pipeline creates a fully managed build service that compiles your React application source code. It then produces build artifacts that Terraform can deploy to your web application's production environment automatically.

Why did we choose Ansible?

Since we wanted to remotely configure and scale our AWS EC2 provisioning progressively without interfering with our Terraform state, we went with Ansible to handle the configuration of all the required parameters, for example the environment name, IP addresses, hostnames of connected systems, etc. Red Hat Ansible Automation Platform can be used to define, deploy, and manage a wide variety of AWS services. Even complex AWS environments can be simplified using Ansible Playbooks. Ansible Automation Platform has nearly 100 modules supporting AWS capabilities, including AMI management.

1. Create AWS Access Keys

AWS access keys are credentials used to access Amazon Web Services (AWS) programmatically. They consist of an access key ID and a secret access key. These keys are used to authenticate requests made to AWS services via APIs, SDKs, command-line tools, and other means.

Steps to Create Access Keys

  1. Go to the AWS management console, click on your Profile name, and then click on My Security Credentials. …
  2. Go to Access Keys and select Create New Access Key. …
  3. Click on Show Access Key and save/download the access key and secret access key.
Create and Sign in your AWS Account
Security Credentials
Create Access Key
Download the AWS Access Key
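
If you prefer not to store the keys in a shared credentials file, they can also be exported as environment variables for the current shell session (a sketch; the values below are placeholders):

# Placeholder credentials; replace with the values you just downloaded
export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_DEFAULT_REGION="us-east-1"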

4. Install and configure the AWS CLI

You can install the AWS CLI using the following commands:


```
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
```

Next, configure your AWS account on your computer using the following command:

aws configure
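
To confirm the credentials work, you can ask AWS which identity the CLI is using (sts get-caller-identity is a standard AWS CLI call; the output below is just a placeholder shape):

# Verify the configured credentials
aws sts get-caller-identity
# {
#     "UserId": "AIDA...",
#     "Account": "123456789012",
#     "Arn": "arn:aws:iam::123456789012:user/your-user"
# }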

2. Set Up Custom Domain

To configure a custom domain with CloudFront, you need to create a CloudFront SSL certificate in AWS Certificate Manager (ACM) and associate it with your CloudFront distribution. Then, you can configure Route 53 or your DNS provider to point your custom domain to the CloudFront distribution.

To set up a custom domain using Route 53 with your CloudFront distribution, you’ll need to follow these steps:

Register a Domain: If you haven’t already, register a domain name through Route 53 or another registrar.

Create a Hosted Zone in Route 53: This is where you’ll manage DNS records for your domain.

Create an Alias Record: Alias records are used to map your domain to your CloudFront distribution.

How to register a domain name in Route 53
Route 53 Domain pricing and validation

3. Create Route53 Hosted Zone

A Hosted Zone, in the context of Amazon Web Services (AWS) Route 53, is a container for records that define how you want to route traffic for a specific domain, such as example.com or subdomain.example.com. Route 53 is a scalable Domain Name System (DNS) web service designed to provide reliable and cost-effective domain name resolution.

Hosted Zone creation
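
If you prefer the CLI over the console, a public hosted zone can also be created with a single command (a sketch; replace example.com with your own domain):

# The caller-reference must be unique per request; a timestamp works
aws route53 create-hosted-zone \
  --name example.com \
  --caller-reference "$(date +%s)"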

4. Create your React / Front-End Application

React is a free and open-source front-end JavaScript library for building user interfaces based on components. It is maintained by Meta and a community of individual developers and companies.

Step 1 : Create your React Application:

To start, Create your repository in GitHub account as follows:

Make sure you upload the changes in your react web UI repository as we will host the infrastructure as code (terraform) in a different repository.

Create React App sets up your development environment so that you can use the latest JavaScript features, provides a nice developer experience, and optimizes your app for production. You'll need Node >= 14.0.0 and npm >= 5.6 on your machine. To create a project, run:

npx create-react-app prodx-reactwebui-react-demo-1
cd prodx-reactwebui-react-demo-1
npm start

To get a head start, you can download the source code here:

Folder Structure

Step 2: Create a Production Build of your application

Let's build the static content to serve from AWS S3 static web hosting:

npm run build

The above command creates a build directory with a production build of your app. Set up your favorite HTTP server so that a visitor to your site is served index.html, and requests to static paths like /static/js/main.<hash>.js are served with the contents of the /static/js/main.<hash>.js file.

React Mono Repo Project Structure

Step 3: Test the application locally

The generated build folder contains the index.html to serve from AWS S3 static web hosting. Run the serve command (install it globally with npm install -g serve if needed) to test the build contents for a production release:

serve -s build
Testing React application locally
React build directory

With any browser of your choice, navigate to http://localhost:3000/

React Application in Dev

We are going to need the build files for the next part, where Terraform will upload those contents to S3 using infrastructure as code.

Serving React Build folder with static contents

5. Provision an AWS EC2 instance with Ansible

Ansible is a popular choice for automating EC2 configuration and deployment due to several compelling reasons:

  1. Minimal Setup: Ansible requires minimal setup and doesn’t need additional agents on managed nodes. This simplicity makes it an attractive option for configuration management and automation.
  2. SSH Connectivity: Ansible uses standard SSH connectivity to execute automation workflows. This reduces management overhead and facilitates integrating instances into various environments. It can automate the deployment and configuration of ephemeral instances, removing them when no longer needed.
Ansible

Provisioning an EC2 instance with Ansible involves several steps:

5.1 Prerequisites:

  1. AWS Account: You’ll need an AWS account with an IAM user configured for programmatic access to EC2. This user should have permissions to launch EC2 instances and related resources.
  2. Ansible Installed: Ensure you have Ansible installed on your local machine.
### Install Ansible on Ubuntu
### Update and upgrade the server; reboot after the upgrade
sudo apt-get update
sudo apt-get upgrade -y
### Add the repository for Ansible
sudo apt-add-repository ppa:ansible/ansible
### Update packages once more
sudo apt-get update
# Install Ansible on Ubuntu
sudo apt-get install ansible -y
### Check the Python installation
python --version
### If Python is not installed, install it by executing the command below
sudo apt-get install python -y

## Install pip and boto3 for AWS SDK interaction
sudo apt install python-pip
pip install boto boto3
Ansible Installation in Ubuntu Developer Machine
Ansible in your Ubuntu local system

If you want to install Ansible on your Windows machine, you can follow this link for more details: https://docs.ansible.com/ansible/latest/os_guide/windows_faq.html#windows-faq-ansible

3. SSH Keypair: Create an SSH keypair for secure access to the EC2 instance. You’ll need the public key for the playbook and the private key for later SSH access.

From your AWS Management Console, in the navigation panel, under Network & Security, choose Key Pairs, then choose Create key pair. For Name, enter a descriptive name for the key pair.

Key pair list

To create a key pair using the AWS CLI, type aws ec2 create-key-pair --key-name <your_key_name>, where <your_key_name> is the name under which the key will be saved in AWS. The output, shown below, is in JSON format.

aws ec2 create-key-pair --key-name <your_key_name>
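
A variant that saves the private key material straight to a local .pem file in one step (the key name prodxcloud-key is just an example):

# Create the key pair and write the private key to disk
aws ec2 create-key-pair \
  --key-name prodxcloud-key \
  --query 'KeyMaterial' \
  --output text > prodxcloud-key.pem

# Restrict permissions so SSH will accept the key
chmod 400 prodxcloud-key.pem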

5.2 Steps:

  1. Project Directory: Create a directory for your Ansible project.
  2. Ansible Vault (Optional): Ansible Vault allows you to securely store sensitive information like AWS credentials. This is highly recommended (see the sketch after this list).
  3. Playbook Creation: Create an Ansible playbook YAML file (e.g., ec2.yml) defining the EC2 instance configuration.
  4. Playbook Content: The playbook uses the amazon.aws.ec2_instance module to specify instance details like AMI ID, instance type, security group, and user data (optional).
  • Use AWS credentials stored in environment variables or Ansible Vault.
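
As a minimal sketch of the Vault workflow mentioned in step 2 (the vault.yml file path is an assumption, not part of the original project):

# Create an encrypted file for sensitive variables such as AWS credentials
ansible-vault create group_vars/all/vault.yml

# Edit the encrypted file later
ansible-vault edit group_vars/all/vault.yml

# Run the playbook, supplying the vault password interactively
ansible-playbook ec2.yml --ask-vault-pass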

5. Generate SSH keys

Generate SSH keys (to SSH into provisioned EC2 instances) with this command:

# 1. This creates a public (.pub) and private key in the ~/.ssh/ directory
ssh-keygen -t rsa -b 4096 -f ~/.ssh/mykey
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase): # Can be left blank

# 2. Ensure the private key is not publicly viewable
chmod 400 ~/.ssh/mykey

6. Running the Playbook: Execute the playbook using ansible-playbook ec2.yml.

# AWS playbook
---
- hosts: localhost
  connection: local
  gather_facts: False

  vars:
    key_name: prodxsecure                  # Key used for SSH
    region: us-east-1
    image: ami-0cd59ecaf368e5ccf           # AWS AMI for the EC2 instance
    id: "prodxcloud-aws-ec2-lab-1"
    instance_type: t2.micro                # Your instance type
    sec_group: "prodxcloud-aws-ec2-lab-1"  # Security group name

  tasks:
    - name: Provisioning EC2 instances
      block:
        - name: Create security group
          amazon.aws.ec2_security_group:
            name: "{{ sec_group }}"
            description: "Sec group for app"
            region: "{{ region }}"
            rules: # allows ssh on port 22
              - proto: tcp
                ports:
                  - 22
                cidr_ip: 0.0.0.0/0
                rule_desc: allow all on ssh port

        - name: Amazon EC2 | Create Key Pair # Create key pair for ssh
          amazon.aws.ec2_key:
            name: "{{ key_name }}"
            region: "{{ region }}"
            key_material: "{{ item }}"
          with_file: mykey.pub

        - name: Start an instance with a public IP address
          amazon.aws.ec2_instance:
            name: "public-compute-instance"
            key_name: "{{ key_name }}"
            # vpc_subnet_id: "{{ vpc_id }}"
            instance_type: "{{ instance_type }}"
            security_group: "{{ sec_group }}"
            region: "{{ region }}"
            network:
              assign_public_ip: true
            image_id: "{{ image }}"
            tags:
              Environment: Testing
      tags: ['never', 'create_ec2']

    - name: Facts
      block:
        - name: Get instances facts
          ec2_instance_info:
            region: "{{ region }}"
          register: result

        - name: Instances ID
          debug:
            msg: "ID: {{ item.instance_id }} - State: {{ item.state.name }} - Public DNS: {{ item.public_dns_name }}"
          loop: "{{ result.instances }}"
      tags: always

This is how you can obtain an AWS AMI ID from your AWS management console:

AWS EC2 Ubuntu AMI ID

To create your VPC and get the subnet ID, see the screenshot and follow the instructions below.

Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.

  1. On the VPC dashboard, choose Create VPC.
  2. For Resources to create, choose VPC and more.
  3. Keep Name tag auto-generation selected to create Name tags for the VPC resources or clear it to provide your own Name tags for the VPC resources.

Alternatively, you can create another Ansible playbook to provision your VPC and subnet first (optional at this stage):

---
- hosts: localhost
  connection: local
  collections:
    - amazon.aws

  vars:
    vpc_cidr: 10.0.0.0/16
    subnet_cidr: 10.0.1.0/24
    availability_zone: us-east-1a

  tasks:
    - name: Create VPC
      amazon.aws.ec2_vpc_net:
        name: "My VPC"
        cidr_block: "{{ vpc_cidr }}"
        state: present
        tags:
          Name: "My VPC"
      register: vpc_output

    - name: Create Subnet
      amazon.aws.ec2_vpc_subnet:
        vpc_id: "{{ vpc_output.vpc.id }}"
        cidr_block: "{{ subnet_cidr }}"
        az: "{{ availability_zone }}"
        map_public: no # Change to 'yes' for a public subnet
        tags:
          Name: "My Private Subnet"

  become: true

Next, use the following command to create the AWS EC2 Ubuntu instance:

ANSIBLE_LOCALHOST_WARNING=False \
ANSIBLE_INVENTORY_UNPARSED_WARNING=False \
ansible-playbook ansible-playbook-ec2.yml --tags create_ec2

This playbook will provision an EC2 instance and then execute the optional installer.sh script on it to install and configure the desired software. Adjust the playbook variables and script content according to your requirements and environment.
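
Once the playbook finishes, you can connect to the new instance over SSH with the key generated earlier (a sketch; the ubuntu user assumes an Ubuntu AMI, and the placeholder DNS name is printed by the facts task above):

# Connect using the private key created in step 5
ssh -i ~/.ssh/mykey ubuntu@<public-dns-name>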

Ansible Playbook
Ansible Playbook Successfully provisioned AWS EC2 with Security Group
This is how to access your AWS EC2 instance
Create AWS Key Pair

6. Jenkins Installation

To make our job easier, we have prepared a file named installer.sh, which installs all the software and packages that we will need, such as Jenkins, Terraform, and the AWS CLI. Later we will automate this process with the Ansible playbook.

# Java Installations
sudo apt-get install openjdk-11-jdk -y
sudo apt-get install zip -y
echo 'JDK installed successfully'


# Jenkins installations
sudo apt update
apt install make
sudo apt-get install debian-keyring debian-archive-keyring --assume-yes
sudo apt-key update
sudo apt-get update
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 40976EAF437D05B5
sudo apt update
sudo apt install openjdk-11-jre-headless --assume-yes
sudo java -version
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/ | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install jenkins --assume-yes
# sudo service jenkins status
echo 'Jenkins installed successfully'

# Install Terraform
sudo apt-get update && sudo apt-get install -y gnupg software-properties-common
wget -O- https://apt.releases.hashicorp.com/gpg | \
gpg --dearmor | \
sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null
gpg --no-default-keyring \
--keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg \
--fingerprint
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
sudo tee /etc/apt/sources.list.d/hashicorp.list

sudo apt update
sudo apt-get install terraform

# Install AWS CLI

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

(Optional) You can also run this installer from your Ansible playbook while provisioning the AWS EC2 instance to automate the whole process:

# (Optional) Add these tasks to your Ansible playbook to install Jenkins
- name: Copy installer.sh to EC2 instance
  copy:
    src: installer.sh
    dest: /tmp/installer.sh
    mode: 0755

- name: Run installer.sh on EC2 instance
  shell: /tmp/installer.sh
  args:
    executable: /bin/bash
  with_items: "{{ ec2.instances }}"
  become: true
  become_user: ec2-user

6.1 Finalize your Jenkins installation

Get your initial Jenkins password as follows:
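
On a standard Debian/Ubuntu package installation, the one-time password lives in a well-known file on the Jenkins server:

# Print the initial admin password, then paste it into the setup wizard
sudo cat /var/lib/jenkins/secrets/initialAdminPassword

Then open http://<your-ec2-public-dns>:8080 (Jenkins listens on port 8080 by default) in a browser to run the setup wizard.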

Getting started with Jenkins
Jenkins Default plugins installation Progress

6.2 Install Jenkins plugins

Installing plugins in Jenkins is a straightforward process through the Jenkins web interface. Here’s how you can do it:

  1. Access Jenkins Dashboard:

  2. Navigate to Plugin Manager:

  • Once you’re logged into Jenkins, click on “Manage Jenkins” from the left-hand sidebar.

  3. Access Plugin Manager:

  • In the “Manage Jenkins” page, you’ll see various options. Click on “Manage Plugins.”

  4. Available Plugins Tab:

  • In the “Manage Plugins” section, you’ll see different tabs. Click on the “Available” tab.

  5. Search for Plugins:

  • You can search for plugins by typing keywords into the search box. Alternatively, you can browse through the list of available plugins.

  6. Select Plugins for Installation:

  • Check the checkbox next to the plugin(s) you want to install.

  7. Installation Process:

  • Once you’ve selected the plugins you want, scroll down to the bottom of the page and click the “Install without restart” button (or “Download now and install after restart” if you prefer to install them later).

  8. Wait for Installation to Complete:

  • Jenkins will start downloading and installing the selected plugins. This process may take some time depending on the number and size of the plugins.

  9. Confirmation:

  • Once the installation is complete, you’ll see a confirmation message indicating that the plugins were successfully installed.

  10. Restart Jenkins (if necessary):

  • If you chose the “Download now and install after restart” option, you’ll need to restart Jenkins for the changes to take effect. You can do this by checking the “Restart Jenkins when no jobs are running” checkbox at the bottom of the page.

  11. Verify Installation:

  • After Jenkins restarts (if necessary), you can verify that the plugins were installed successfully by checking the “Installed” tab in the “Manage Plugins” section.

We are going to need the following plugins:

  • Terraform
  • Docker ( For Testing )
  • NodeJS ( Optional )
  • AWS EC2 ( Later )
  • AWS Credentials

You can now start using the new features provided by the installed plugins in your Jenkins pipelines and configurations.

6.3 Add AWS Credentials

To add GitHub, Docker, or AWS credentials to Jenkins: click on “Jenkins” to access global credentials, or a specific domain to limit the scope. Choose “Add Credentials” to create a new set of credentials. Select the appropriate credential type, such as “Username with password” or “SSH Username with private key,” depending on your authentication method.

AWS credentials added to Jenkins

7. Setting up a CI/CD pipeline using Terraform

Terraform workflow

Setting up a CI/CD pipeline using Terraform to deploy a React-based single-page application to Amazon S3 and CloudFront with a custom domain name involves several steps. Here’s a high-level overview of the process:

  1. Create an S3 Bucket: Set up an S3 bucket to host your static website assets.
  2. Configure CloudFront Distribution: Create a CloudFront distribution to serve your content globally with low latency.
  3. Set Up Custom Domain: Configure a custom domain name for your application.
  4. CI/CD Pipeline Configuration: Use a CI/CD tool like Jenkins, GitLab CI/CD, or GitHub Actions to automate the deployment process using Terraform.
  5. Terraform Configuration: Write Terraform scripts to define and provision the infrastructure resources required for deployment.

Here’s a detailed guide on how to accomplish each step:

7.1. Variables definitions

# S3 bucket name
variable "bucket-name" {
  default = "socialcloudsync.com"
}

# Domain name that you have registered
variable "domain-name" {
  default = "socialcloudsync.com" // Modify as per your domain name
}
}

7.2. provider.tf

A Terraform provider is a plugin responsible for understanding API interactions with a particular infrastructure service. Providers can manage resources, execute operations, and handle authentication and communication with the underlying infrastructure.

# provider
provider "aws" {
  region = "us-east-1"
  alias  = "use_default_region"
}

7.3. Create an S3 Bucket for static content

We create the S3 bucket with force_destroy enabled so that destroying the stack won't throw the error 'Bucket is not empty'.

# Create the S3 bucket with force_destroy so destroying it won't throw the error 'Bucket is not empty'
resource "aws_s3_bucket" "s3-bucket" {
  bucket        = var.bucket-name
  force_destroy = true
  lifecycle {
    prevent_destroy = false
  }
}

# Use a null resource to push all the files at once instead of sending them one by one
resource "null_resource" "upload-to-S3" {
  provisioner "local-exec" {
    command = "aws s3 sync ${path.module}/react_app s3://${aws_s3_bucket.s3-bucket.id}"
  }
}

# Disable S3 Block Public Access so the bucket policy below can take effect
resource "aws_s3_bucket_public_access_block" "website_bucket_access" {
  bucket                  = aws_s3_bucket.s3-bucket.id
  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}

# This IAM policy document allows CloudFront to access objects in the S3 bucket
data "aws_iam_policy_document" "website_bucket" {
  statement {
    actions   = ["s3:GetObject"]
    effect    = "Allow"
    resources = ["${aws_s3_bucket.s3-bucket.arn}/*"]

    principals {
      type        = "*"
      identifiers = ["*"]
    }
    # condition {
    #   test     = "StringEquals"
    #   variable = "aws:SourceArn"
    #   values   = [aws_cloudfront_distribution.cdn_static_website.arn]
    # }
  }
}

# Create the S3 bucket policy and apply it to the bucket
resource "aws_s3_bucket_policy" "website_bucket_policy" {
  bucket = aws_s3_bucket.s3-bucket.id
  policy = data.aws_iam_policy_document.website_bucket.json
}

Our React application build in the react_app folder will be uploaded by the aws s3 sync command; the folder contains index.html and the other React build output files and folders.

7.4. backend.tf

7.4.1 S3 bucket for terraform state management

First verify that the bucket where you are going to save the Terraform state has already been created.

terraform {
  backend "s3" {
    bucket  = "website-app-route53"
    region  = "us-east-1"
    key     = "state/terraform.tfstate"
    encrypt = true
  }
}
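
If that state bucket doesn't exist yet, one way to create it from the CLI (the bucket name matches the backend block above; versioning is optional but useful for state recovery):

# Create the bucket that will hold the Terraform state
aws s3api create-bucket --bucket website-app-route53 --region us-east-1

# (Optional) Enable versioning so previous state files can be recovered
aws s3api put-bucket-versioning \
  --bucket website-app-route53 \
  --versioning-configuration Status=Enabled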

Alternatively, you can keep your state and runs in the same place using Terraform Cloud.

7.4.2 Terraform Cloud Configuration

terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "prodxcloud"

    workspaces {
      name = "prodxcloud"
    }
  }
}

HashiCorp provides GitHub Actions that integrate with the Terraform Cloud API. These actions let you create your own custom CI/CD workflows to meet the needs of your organization.

Step 1: Create your project and workspace in Terraform Cloud

Create a project in Terraform Cloud

Step 2: Define a variable set to allow Terraform Cloud to manage state

Step 3: Change the default execution mode to Remote

Step 4: Create API tokens for Github actions to interact with Terraform Cloud

Terraform cloud API tokens
API Token for your project
Generated Token
Organization token ( Optional )

7.5. Provision AWS Certificate Manager and Validate

The Domain Name System (DNS) is a directory service for resources that are connected to a network. Your DNS provider maintains a database containing records that define your domain. When you choose DNS validation instead of email validation, ACM provides you with one or more CNAME records that must be added to this database. These records contain a unique key-value pair that serves as proof that you control the domain.
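
Once the validation CNAME exists in your hosted zone (the Route53 resources in section 7.7 create it for us), you can check that it resolves with dig; the record name below is a hypothetical placeholder, the real one appears in the ACM console:

# Verify the ACM validation record resolves (record name is a placeholder)
dig +short _abc123def456.socialcloudsync.com CNAME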

# ACM certificate resource with the domain name and DNS validation method, supporting subject alternative names
resource "aws_acm_certificate" "cert" {
  provider                  = aws.use_default_region
  domain_name               = var.domain-name
  validation_method         = "DNS" # DNS validation is our preference over EMAIL
  subject_alternative_names = [var.domain-name]
  # wait_for_validation = false
  lifecycle {
    create_before_destroy = true
  }
}

# ACM certificate validation resource using the certificate ARN and a list of validation record FQDNs
resource "aws_acm_certificate_validation" "cert" {
  provider                = aws.use_default_region
  certificate_arn         = aws_acm_certificate.cert.arn
  validation_record_fqdns = [for record in aws_route53_record.cert_validation : record.fqdn]
}

7.6. Configure CloudFront Distribution

CloudFront distributions provide an efficient way of delivering content to end users all over the world by using a global network of edge locations. An edge location is a geographical site where CloudFront caches copies of commonly downloaded objects such as web pages, images, media files, etc.

# CloudFront distribution with S3 origin, HTTPS redirect, IPv6 enabled, no cache, and ACM SSL certificate.
resource "aws_cloudfront_distribution" "cdn_static_website" {
  enabled             = true
  is_ipv6_enabled     = true
  default_root_object = "index.html"

  origin {
    domain_name = aws_s3_bucket.s3-bucket.bucket_regional_domain_name
    origin_id   = "my-s3-origin"
    # origin_access_control_id = aws_cloudfront_origin_access_control.default.id
  }

  default_cache_behavior {
    min_ttl                = 0
    default_ttl            = 0
    max_ttl                = 0
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD", "OPTIONS"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "my-s3-origin"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      locations        = []
      restriction_type = "none"
    }
  }

  viewer_certificate {
    # acm_certificate_arn = ""
    acm_certificate_arn      = aws_acm_certificate.cert.arn
    ssl_support_method       = "sni-only"
    minimum_protocol_version = "TLSv1.2_2021"
  }
}

# CloudFront origin access control for S3 origin type with always signing using the sigv4 protocol
resource "aws_cloudfront_origin_access_control" "default" {
  name                              = "cloudfront OAC"
  description                       = "description OAC"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}

# Output the CloudFront distribution URL using the domain name of the cdn_static_website resource.
output "cloudfront_url" {
  value = aws_cloudfront_distribution.cdn_static_website.domain_name
}

7.7. Route53

# AWS Route53 zone data source with the domain name and private zone set to false
data "aws_route53_zone" "zone" {
  provider     = aws.use_default_region
  name         = var.domain-name
  private_zone = false
}

# AWS Route53 record resource for certificate validation with a dynamic for_each loop and properties for name, records, type, zone_id, and ttl.
resource "aws_route53_record" "cert_validation" {
  provider = aws.use_default_region
  for_each = {
    for dvo in aws_acm_certificate.cert.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  type            = each.value.type
  zone_id         = data.aws_route53_zone.zone.zone_id
  ttl             = 60
}

# AWS Route53 alias record of type "A" pointing the domain at the CloudFront distribution, using the distribution's domain name and hosted zone ID. Target health evaluation is disabled.
resource "aws_route53_record" "www" {
  zone_id = data.aws_route53_zone.zone.id
  name    = var.domain-name
  type    = "A"

  alias {
    name                   = aws_cloudfront_distribution.cdn_static_website.domain_name
    zone_id                = aws_cloudfront_distribution.cdn_static_website.hosted_zone_id
    evaluate_target_health = false
  }
}

7.8. Setup Bucket Policy:

A bucket policy is a resource-based policy option. It allows you to grant more granular access to Object Storage resources.

CloudFront distribution from an S3 bucket

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::socialcloudsync.com/*",
      "Condition": {
        "StringEquals": {
          "aws:SourceArn": "arn:aws:cloudfront::xxx:distribution/E2SZ49FXEKL75W"
        }
      }
    }
  ]
}
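
Terraform already attaches this policy through the aws_s3_bucket_policy resource, but the equivalent manual step, assuming the JSON above is saved as policy.json, would be:

# Attach the bucket policy by hand with the AWS CLI
aws s3api put-bucket-policy \
  --bucket socialcloudsync.com \
  --policy file://policy.json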

8. Create a new Jenkins Job

Jenkins jobs are a given set of tasks that run sequentially as defined by the user. Any automation implemented in Jenkins is a Jenkins job. These jobs are a significant part of Jenkins’s build process, and we can create and build Jenkins jobs to test our application or project. In this tutorial we explore several ways to run Jenkins builds, and we will establish a webhook connection so our Jenkins server on AWS EC2 listens to every push event from developers.

Below is a step-by-step process to create a job in Jenkins.

  1. Login to Jenkins. …
  2. Create a New Item. …
  3. Enter Item details. …
  4. Enter Project details. …
  5. Enter the repository URL. …
  6. Tweak the settings. …
  7. Save the project. …
  8. Build Source code.

Next, we are going to add a new webhook in GitHub to establish the connection between GitHub and our Jenkins server running on AWS EC2, so it listens to every push event from developers.

In your GitHub repository, go to “Settings” > “Webhooks” > “Add webhook.” Enter the Jenkins webhook URL (usually in the format http://jenkins-server/github-webhook/) and select the events that should trigger the webhook (e.g., push events).

Setting up webhook: Step 1

Setting up webhook: Step 2

Here is a template Jenkinsfile that you can customize as we progress:

pipeline {
    agent any

    options {
        buildDiscarder(logRotator(numToKeepStr: '3'))
    }

    environment {
        DOMAIN_NAME = ""
        PUBLIC_S3_BUCKET = ""
        DOCKERHUB_CREDENTIALS = credentials('globaldockerhub')
        appName = "server"
        registry = ""
        registryCredential = ""
        projectPath = ""
        AWS_ACCOUNT = credentials('AWS_ACCOUNT')
        AWS_ACCESS_KEY_ID = credentials('AWS_ACCESS_KEY_ID')
        AWS_SECRET_ACCESS_KEY = credentials('AWS_SECRET_ACCESS_KEY')
        AWS_REGION = credentials('AWS_REGION')
        AWS_EC2_INSTANCE = credentials('AWS_EC2_INSTANCE') // e.g. 34.238.119.22
        AWS_SSH_KEY = credentials('AWS_SSH_KEY')
    }

    stages {
        stage('Checkout') {
            steps {
                git branch: 'main', credentialsId: 'github-credentials', url: 'https://github.com/joelwembo/prodx-reactwebui-react-demo-1.git'
            }
        }

        stage('Install Dependencies') {
            steps {
                sh 'npm install'
            }
        }

        stage('Run Mocha Tests') {
            steps {
                sh 'npm run test' // Assuming your tests are run with 'npm test'
            }
        }

        stage('Build React App') {
            steps {
                sh 'npm run build' // Build the React application only if tests pass
            }
        }

        stage('Provision AWS Infrastructure (Terraform)') {
            steps {
                sh 'terraform init' // Initialize Terraform
                sh 'terraform plan' // Validate Terraform configuration
                input message: 'Are you sure you want to deploy to AWS?' // Manual confirmation gate
                sh 'terraform apply -auto-approve' // Apply Terraform configuration after confirmation
            }
        }

        stage('Deploy to CloudFront') {
            steps {
                // Upload the built React app to the S3 bucket (replace with your bucket name)
                sh 'aws s3 cp build/ s3://your-bucket-name/ --recursive --profile default'
                // Invalidate the CloudFront cache so the latest content is served (replace with your distribution ID)
                sh 'aws cloudfront create-invalidation --distribution-id your-distribution-id --paths "/*" --profile default'
            }
        }
    }

    post {
        always {
            cleanWs() // Clean workspace after pipeline execution
        }
        success {
            echo 'Pipeline completed successfully' // Optional: send a notification (e.g., email)
        }
        failure {
            echo 'Pipeline failed' // Optional: send a notification (e.g., email)
        }
    }
}

9. Check the result in your AWS Management Console

S3 Bucket Static files
ACM SSL Certificate
Route53 Hosted Zone Record Type A Created
Website Under CloudFront
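
The same checks can be scripted with the AWS CLI instead of clicking through the console (a sketch; replace <zone-id> with your hosted zone ID):

# Static files landed in the bucket
aws s3 ls s3://socialcloudsync.com/

# CloudFront distribution exists and is deployed
aws cloudfront list-distributions \
  --query "DistributionList.Items[].{Id:Id,Domain:DomainName,Status:Status}"

# The ACM certificate was issued (CloudFront certificates live in us-east-1)
aws acm list-certificates --region us-east-1

# The A record points at the distribution
aws route53 list-resource-record-sets --hosted-zone-id <zone-id> \
  --query "ResourceRecordSets[?Type=='A']"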

Summary

This approach leverages Jenkins and Terraform to establish a CI/CD pipeline for deploying React applications to AWS infrastructure.

Key Players:

  • Jenkins: Acts as the CI/CD orchestration tool, automating the build, test, and deployment process triggered by code changes.
  • Terraform: Manages and provisions the AWS infrastructure needed for deployment, including CloudFront for content delivery, ACM for SSL certificates, S3 for static content hosting, and Route53 for domain name management.

Benefits:

  • Faster Deployments: Automating the pipeline reduces manual work and deployment time.
  • Improved Reliability: Consistent and automated deployments minimize human error.
  • Scalability: The pipeline can handle frequent deployments with ease.
  • Infrastructure as Code: Terraform ensures repeatable and manageable infrastructure provisioning.

Pipeline Stages:

  1. Code Commit: Upon committing changes to a version control system (e.g., Git), Jenkins is triggered.
  2. Build and Test: Jenkins retrieves the code, builds the React application, and executes automated tests.
  3. Infrastructure Provisioning: If tests pass, Terraform scripts are executed to create or update the AWS resources (CloudFront, ACM, S3, Route53) based on the desired configuration.
  4. Deployment: The built React application is uploaded to the S3 bucket, and CloudFront is configured to serve content from S3 with an SSL certificate from ACM. Route53 ensures the domain name points to the CloudFront distribution.

In conclusion, combining Jenkins and Terraform offers a powerful solution for deploying React applications to AWS. Jenkins automates the CI/CD pipeline, triggering builds, tests, and deployments based on code changes. Terraform manages the infrastructure provisioning on AWS, ensuring consistent and repeatable creation of CloudFront, ACM, S3, and Route53 resources. This combined approach streamlines the development workflow, leading to faster deployments, improved reliability, and easier infrastructure management. If you’re looking for an automated and scalable solution for deploying React applications to AWS, this approach is worth considering.

You can also find the codes on GitHub here.

Thank you for reading!! 🙌🏻 Don’t forget to subscribe and give it a CLAP 👏. See you in the next article. 🤘

About me

I am Joel O’Wembo, an AWS certified cloud architect, back-end developer, and AWS Community Builder based in the Philippines 🇵🇭. I bring a powerful combination of expertise in cloud architecture, DevOps practices, and a deep understanding of high availability (HA) principles. I leverage my knowledge to create robust, scalable cloud applications using open-source tools for efficient enterprise deployments.

For more information about the author ( Joel O. Wembo ) visit:

Links:

