A step-by-step guide for AWS EC2 provisioning using Terraform: Deploying React with NGINX to EC2 using GitHub Actions (end-to-end CI/CD pipeline) — Part 10

Joel Wembo
12 min read · Jun 23, 2024


A step-by-step guide to provisioning an AWS EC2 instance using Terraform, deploying a React application with NGINX, and automating the process using GitHub Actions for an end-to-end CI/CD pipeline (EC2, GitHub Actions, SonarCloud, NGINX, WordPress, and Terraform).


Table of contents

Step 1: EC2 instance and Key Pair (PEM file)
Step 2: Check the NGINX installation
Step 3: Add a file named deploy.yaml in .github/workflows
· Conclusion
· About me
· Discussion

Prerequisites

  1. AWS Account: Ensure you have an AWS account with access keys.
  2. Terraform: Installed on your local machine.
  3. GitHub Account: Repository for the React app and Terraform scripts.
  4. React Application: A basic React application created using create-react-app.

To enhance readability, this handbook is divided into chapters and split into parts. The first part, “A step-by-step guide for AWS EC2 provisioning using Terraform: HA, ALB, VPC, and Route53 — Part 1”, the second part, “A step-by-step guide for AWS EC2 provisioning using Terraform: HA, CloudFront, WAF, and SSL Certificate — Part 2”, and “A step-by-step guide for AWS EC2 provisioning using Terraform: Cloud Cost Optimization, AWS EC2 Spot Instances — Part 3” were covered in separate articles to keep the reading time manageable and the content focused. The next chapter, “A step-by-step guide for AWS EC2 provisioning using Terraform: VPC peering, VPN, Site-to-site Connection, tunnels (multi-Cloud) — Part 12”, will be published in a few days, with much more to come!


At this stage, you should already have provisioned your EC2 instance using Terraform or AWS CDK; if not, see Part 1.
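The snippet below mentions S3 state management, but the backend itself is configured in a separate block. A minimal sketch of such a backend, with placeholder bucket and key names (not values from the original repo):

# Hypothetical S3 backend for the Terraform state; replace the bucket,
# key, and region with your own values before running terraform init.
terraform {
  backend "s3" {
    bucket = "prodxcloud-terraform-state" # placeholder bucket name
    key    = "ec2/terraform.tfstate"
    region = "us-east-1"
    # dynamodb_table = "terraform-locks"  # optional: state locking
  }
}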

# Step 4
# Terraform provisions an AWS EC2 instance (state managed in S3)

# Fetch the latest Ubuntu AMI
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical's AWS account ID

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

# Fetch the Amazon Linux 2 AMI
data "aws_ami" "amazon-linux-2" {
  most_recent = true

  filter {
    name   = "owner-alias"
    values = ["amazon"]
  }

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm*"]
  }
}

# AWS EC2 Instance A ( Ubuntu 20.04 LTS )
resource "aws_instance" "prodxcloud-lab-1" {
  # ami                       = var.instance_ami # AMI ID from variables.tf
  ami                         = data.aws_ami.ubuntu.id
  instance_type               = var.instance_type
  # subnet_id                 = var.instance_subnet_id # custom subnet ID via variables.tf
  subnet_id                   = element(aws_subnet.prodxcloud_public_subnets.*.id, 1) # dynamic via vpc.tf
  associate_public_ip_address = var.publicip
  key_name                    = var.instance_keyName
  monitoring                  = true # Enable detailed monitoring

  # Remote provisioner: upload the bash script to the instance
  provisioner "file" {
    source      = "scripts/user_data.sh"
    destination = "/tmp/user_data.sh"

    # SSH connection used by Terraform
    connection {
      type        = "ssh"
      user        = "ubuntu"
      host        = self.public_ip
      private_key = file("${path.module}/prodxcloud-ec2-keypair-1.pem")
    }
  }

  # Remote provisioner: run the uploaded script
  provisioner "remote-exec" {
    # Inline alternative to the script file:
    # inline = [
    #   "sudo apt-get update",
    #   "sudo apt-get install -y nginx",
    #   "sudo systemctl start nginx",
    #   "sudo systemctl enable nginx",
    #   "sudo chmod -R 777 /var/www/html",
    #   "sudo echo \"User Data Installed by Terraform $(hostname -f)\" >> /var/www/html/index.html"
    # ]
    inline = [
      "chmod +x /tmp/user_data.sh",
      "/tmp/user_data.sh",
    ]

    # SSH connection used by Terraform
    connection {
      type        = "ssh"
      user        = "ubuntu"
      host        = self.public_ip
      private_key = file("${path.module}/prodxcloud-ec2-keypair-1.pem")
    }
  }

  # Attach the security group
  vpc_security_group_ids = [
    aws_security_group.prodxcloud-SG.id
  ]

  root_block_device {
    delete_on_termination = false
    volume_size           = 50
    volume_type           = "gp2"
  }

  tags = {
    Name        = "prodxcloud-lab-1"
    Environment = "DEV"
    OS          = "UBUNTU"
    Managed     = "PRODXCLOUD"
  }

  depends_on = [aws_security_group.prodxcloud-SG, aws_vpc.prodxcloud-vpc, aws_subnet.prodxcloud_public_subnets]
}


# AWS EC2 Instance B ( Amazon Linux 2 )
resource "aws_instance" "prodxcloud-lab-2" {
  # ami                       = var.instance_ami # AMI ID from variables.tf
  ami                         = data.aws_ami.amazon-linux-2.id
  instance_type               = var.instance_type
  # subnet_id                 = var.instance_subnet_id # custom subnet ID via variables.tf
  subnet_id                   = element(aws_subnet.prodxcloud_public_subnets.*.id, 2) # dynamic via vpc.tf
  associate_public_ip_address = var.publicip
  key_name                    = var.instance_keyName
  # monitoring                = true # Enable detailed monitoring

  # Remote provisioner: upload the bash script to the instance
  # provisioner "file" {
  #   source      = "scripts/nodejs-installer-amazon-lunix.sh"
  #   destination = "/tmp/nodejs-installer-amazon-lunix.sh"
  #
  #   # SSH connection used by Terraform (Amazon Linux 2 logs in as ec2-user)
  #   connection {
  #     type        = "ssh"
  #     user        = "ec2-user"
  #     host        = self.public_ip
  #     private_key = file("${path.module}/prodxcloud-ec2-keypair-1.pem")
  #   }
  # }

  # Remote provisioner: run the uploaded script
  # provisioner "remote-exec" {
  #   inline = [
  #     "chmod +x /tmp/nodejs-installer-amazon-lunix.sh",
  #     "/tmp/nodejs-installer-amazon-lunix.sh",
  #   ]
  #
  #   # SSH connection used by Terraform
  #   connection {
  #     type        = "ssh"
  #     user        = "ec2-user"
  #     host        = self.public_ip
  #     private_key = file("${path.module}/prodxcloud-ec2-keypair-1.pem")
  #   }
  # }

  # Attach the security group
  vpc_security_group_ids = [
    aws_security_group.prodxcloud-SG.id
  ]

  root_block_device {
    delete_on_termination = false
    volume_size           = 30
    volume_type           = "gp2"
  }

  tags = {
    Name        = "prodxcloud-lab-2"
    Environment = "PROD"
    OS          = "AMAZON"
    Managed     = "PRODXCLOUD"
  }

  lifecycle {
    create_before_destroy = true
    ignore_changes = [
      ami,
      instance_type,
    ]
  }

  depends_on = [aws_security_group.prodxcloud-SG, aws_subnet.prodxcloud_public_subnets]
}

We are going to use Amazon Linux 2 for this demo.

Step 1: EC2 instance and Key Pair (PEM file)

Here is the Terraform configuration that creates the EC2 instance and installs the NGINX server. Note: you must have your key pair file and the instance AMI ID.
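If you have not created the key pair yet, one way to generate it from Terraform itself is sketched below, assuming the hashicorp/tls and hashicorp/local providers (resource names here are illustrative, not from the original repo):

# Hypothetical: generate a key pair and write the PEM file that the
# provisioner connections below read via file().
resource "tls_private_key" "lab" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "lab" {
  key_name   = var.instance_keyName
  public_key = tls_private_key.lab.public_key_openssh
}

resource "local_file" "pem" {
  content         = tls_private_key.lab.private_key_pem
  filename        = "${path.module}/prodxcloud-ec2-keypair-1.pem"
  file_permission = "0400"
}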


# AWS EC2 Instance B ( Amazon Linux 2 )
resource "aws_instance" "prodxcloud-lab-2" {
  # ami                       = var.instance_ami # AMI ID from variables.tf
  ami                         = "ami-08a0d1e16fc3f61ea"
  instance_type               = var.instance_type
  # subnet_id                 = var.instance_subnet_id # custom subnet ID via variables.tf
  subnet_id                   = element(aws_subnet.prodxcloud_public_subnets.*.id, 2) # dynamic via vpc.tf
  associate_public_ip_address = var.publicip
  key_name                    = var.instance_keyName
  # monitoring                = true # Enable detailed monitoring

  # Remote provisioner: upload the bash script to the instance
  provisioner "file" {
    source      = "scripts/nodejs-installer-amazon-lunix.sh"
    destination = "/tmp/nodejs-installer-amazon-lunix.sh"

    # SSH connection used by Terraform (Amazon Linux 2 logs in as ec2-user, not ubuntu)
    connection {
      type        = "ssh"
      user        = "ec2-user"
      host        = self.public_ip
      private_key = file("${path.module}/prodxcloud-ec2-keypair-1.pem")
    }
  }

  # Remote provisioner: run the uploaded script
  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/nodejs-installer-amazon-lunix.sh",
      "/tmp/nodejs-installer-amazon-lunix.sh",
    ]

    # SSH connection used by Terraform
    connection {
      type        = "ssh"
      user        = "ec2-user"
      host        = self.public_ip
      private_key = file("${path.module}/prodxcloud-ec2-keypair-1.pem")
    }
  }

  # Attach the security group
  vpc_security_group_ids = [
    aws_security_group.prodxcloud-SG.id
  ]

  root_block_device {
    delete_on_termination = false
    volume_size           = 50
    volume_type           = "gp2"
  }

  tags = {
    Name        = "prodxcloud-lab-2"
    Environment = "DEV"
    OS          = "AMAZON"
    Managed     = "PRODXCLOUD"
  }

  depends_on = [aws_security_group.prodxcloud-SG, aws_vpc.prodxcloud-vpc, aws_subnet.prodxcloud_public_subnets]
}

terraform init
terraform plan
terraform apply

Next, update the repository secrets for your GitHub Actions workflow so it picks up the new IP address, SSH user, and private key.
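One convenient way to surface the IP for the REMOTE_HOST secret is a Terraform output; a minimal sketch, reusing the resource name from the snippets above:

# Sketch: expose the instance's public IP so it can be copied into the
# REMOTE_HOST secret in GitHub Actions after terraform apply.
output "instance_public_ip" {
  description = "Public IP of the NGINX EC2 instance"
  value       = aws_instance.prodxcloud-lab-2.public_ip
}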

Note: properly configure your security group's inbound rules, allowing SSH (port 22) and HTTP (port 80).
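A minimal sketch of such a security group, assuming the resource and VPC names used elsewhere in this series (tighten the SSH CIDR in production):

# Sketch: inbound SSH (22) and HTTP (80), all outbound traffic allowed.
resource "aws_security_group" "prodxcloud-SG" {
  name   = "prodxcloud-SG"
  vpc_id = aws_vpc.prodxcloud-vpc.id

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # restrict to your own IP in production
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}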

Step 2: Check the NGINX installation

We already installed NGINX on the EC2 instance in Step 1, using Terraform's file and remote-exec provisioners. To verify (or repeat) the installation manually, run:

# Install the Extra Packages for Enterprise Linux (EPEL) repository
sudo amazon-linux-extras install epel -y

# Install Nginx
sudo yum install -y nginx

# Start Nginx service
sudo systemctl start nginx

# Enable Nginx to start on boot
sudo systemctl enable nginx

# Print Nginx status
sudo systemctl status nginx
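As an alternative to the file/remote-exec provisioners used above, the same commands can run once at boot through user_data; a sketch reusing the attribute values from the earlier snippets:

# Sketch: bake the NGINX install into instance boot via user_data,
# removing the need for SSH provisioners and a PEM file on the runner.
resource "aws_instance" "prodxcloud-lab-2" {
  ami           = data.aws_ami.amazon-linux-2.id
  instance_type = var.instance_type
  subnet_id     = element(aws_subnet.prodxcloud_public_subnets.*.id, 2)

  user_data = <<-EOF
    #!/bin/bash
    amazon-linux-extras install epel -y
    yum install -y nginx
    systemctl start nginx
    systemctl enable nginx
  EOF
}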

Step 3: Add a file named deploy.yaml in .github/workflows

Create a workflow file named deploy.yaml in the .github/workflows directory for your pipeline steps:

name: prodxcloud.io Build and Deploy

on:
  push:
    branches:
      - prod
      - main

env:
  ENVIRONMENT: ${{ github.ref_name }}
  REMOTE_HOST: ${{ secrets.REMOTE_HOST }}
  REMOTE_USER: ${{ secrets.REMOTE_USER }}
  PRIVATE_KEY: ${{ secrets.PRIVATE_KEY }}
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: "us-east-1"

jobs:
  build:
    name: build
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [20.x]
    environment: ${{ github.ref_name }}

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'

      - run: npm install --force

      - name: Build the React application
        run: npm run build

      - name: Prepare EC2 prod_contents Folder
        run: |
          echo "${PRIVATE_KEY}" > PK && chmod 600 PK
          ssh -o StrictHostKeyChecking=no -i "./PK" ${REMOTE_USER}@${REMOTE_HOST} '
            rm -rf prod_contents && mkdir -vp prod_contents
          '

      # Note: create-react-app outputs to ./build while Vite outputs to ./dist;
      # adjust the source path below to match your build tool.
      - name: SCP ( COPY ) to EC2 prod_contents
        run: |
          scp -o StrictHostKeyChecking=no -i "./PK" -r ./dist ${REMOTE_USER}@${REMOTE_HOST}:~/prod_contents

  deploy:
    name: Copy build directory and release to production
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Use Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20.x # the deploy job has no matrix, so the version is pinned here
          cache: 'npm'

      - name: Copy prod_contents to nginx public directory
        run: |
          echo "${PRIVATE_KEY}" > PK && chmod 600 PK
          ssh -o StrictHostKeyChecking=no -i "./PK" ${REMOTE_USER}@${REMOTE_HOST} '
            cd prod_contents/dist && sudo cp -rf * /usr/share/nginx/html/
          '

You’ve successfully set up an end-to-end CI/CD pipeline using Terraform to provision an AWS EC2 instance, deploy a React application with NGINX, and automate the process with GitHub Actions. This setup ensures that every push to the main branch results in a deployment to the EC2 instance, providing a seamless and automated workflow.

You can also automate your Terraform infrastructure provisioning with GitHub Actions using a different workflow file:

name: "Enterprise Terraform Pipeline Provision EC2 Route 53 & CloudFront"

on:
push:
branches: ['master', 'main']
pull_request:
branches: ['master', 'main']

env:
# verbosity setting for Terraform log
TF_LOG: INFO
# Credentials for deployment to AWS
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_REGION: "us-east-1"
# S3 bucket for the Terraform state
# BUCKET_TF_STATE: ${{ secrets.BUCKET_TF_STATE}}
TF_CLOUD_ORGANIZATION: "prodxcloud"
TF_API_TOKEN: ${{ secrets.TF_API_TOKEN}}
TF_WORKSPACE: "prodxcloud"
CONFIG_DIRECTORY: "./terraform/aws/terraform-aws-ec2-github-actions-tfcloud/terraform"
# CONFIG_DIRECTORY: "./"


jobs:
terraform:
name: "Terraform Pipeline Provision EC2 with Terraform Cloud"
runs-on: ubuntu-latest
defaults:
run:
shell: bash
# We keep Terraform files in the terraform directory.
working-directory: ./terraform/aws/terraform-aws-ec2-github-actions-tfcloud/terraform

steps:
- name: Checkout the repository to the runner
uses: actions/checkout@v2

- name: Set up AWS credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ env.AWS_ACCESS_KEY }}
aws-secret-access-key: ${{ env.AWS_SECRET_KEY }}
aws-region: ${{ env.AWS_REGION }}

- name: Setup Terraform with specified version on the runner
uses: hashicorp/setup-terraform@v3
with:
terraform_version: 1.7.5

- name: Terraform init
id: init
run: terraform init -lock=false

- name: Terraform format
id: fmt
run: terraform fmt

- name: Terraform Apply
run: terraform destroy -auto-approve -input=false -lock=false
env:
aws-access-key-id: ${{ env.AWS_ACCESS_KEY }}
aws-secret-access-key: ${{ env.AWS_SECRET_KEY }}
aws-region: ${{ env.AWS_REGION }}

Enterprise Terraform Pipeline Provision EC2 Route 53 & CloudFront by Joel Wembo

*** Quickstart guide: How to deploy a full-stack cloud-native app with Secure HTTPS to CloudFront, API Gateway, and Route53 with an external Custom Domain registrar ***

Update: Once you are done with this tutorial, you might want to check out the follow-up in the next part, A step-by-step guide for AWS EC2 provisioning using Terraform: HA, CloudFront, WAF, and SSL Certificate — Part 2.

Conclusion

This guide has walked you through the process of setting up an AWS environment using Terraform, focusing on high availability, load balancing, and secure networking. By leveraging Terraform’s Infrastructure as Code capabilities, you can automate and streamline your infrastructure management, ensuring consistency, reducing errors, and saving time.

By integrating these strategies into your AWS environment, you can enhance cost-efficiency, ensure reliability, and maintain business continuity. Terraform, combined with these AWS best practices, empowers you to build a scalable, resilient, and optimized cloud infrastructure that meets the demands of your business.


Thank you for reading!! 🙌🏻 Don't forget to subscribe and give it a CLAP 👏. If you found this article useful, contact me or feel free to sponsor me to produce more public content. See you in the next article. 🤘

About me

I am Joel Wembo, an AWS certified Cloud Solutions Architect, back-end developer, and AWS Community Builder based in the Philippines 🇵🇭, currently working at prodxcloud as a DevOps & Cloud Architect. I bring a powerful combination of expertise in cloud architecture, DevOps practices, and a deep understanding of high availability (HA) principles. I leverage this knowledge to create robust, scalable cloud applications using open-source tools for efficient enterprise deployments.

I’m looking to collaborate on AWS CDK, AWS SAM, DevOps CI/CD, Serverless Framework, CloudFormation, Terraform, Kubernetes, TypeScript, GitHub Actions, PostgreSQL, and Django.

For more information about the author (Joel O. Wembo), see the links at the end of this article.


Discussion

Does the guide detail alternative infrastructure provisioning options besides using an EC2 instance for Jenkins? Could a managed Jenkins service be used instead?

Yes, a managed service could have been used as well. In fact, we only needed EC2 provisioning for the development environment; the Jenkins server can be hosted anywhere, such as on a Microsoft Azure VM.

How does the guide handle managing Terraform state for infrastructure definitions? Are there best practices for state management in a CI/CD pipeline?

Yes. In this guide we provided two options for handling the Terraform state: an AWS S3 bucket, or Terraform Cloud.
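For the Terraform Cloud option, the configuration carries a cloud block instead of an S3 backend; a sketch using the organization and workspace names from the workflow above:

# Sketch: Terraform Cloud remote state, matching the TF_CLOUD_ORGANIZATION
# and TF_WORKSPACE values used in the GitHub Actions workflow.
terraform {
  cloud {
    organization = "prodxcloud"

    workspaces {
      name = "prodxcloud"
    }
  }
}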

Does the guide cover advanced deployment strategies like blue/green deployments or canary deployments for rolling out new versions of the Django application?

Yes, you can check Part 2 and its multi-stage implementation.


Joel Wembo

I am a Cloud Solutions Architect at prodxcloud. Expert in AWS, AWS CDK, EKS, Serverless Computing and Terraform. https://www.linkedin.com/in/joelotepawembo