TASK-2

Nisha shukla
6 min read · Oct 13, 2020

Perform Task 1 using the EFS service instead of the EBS service on AWS.

Create/launch an application using Terraform:

1. Create a security group that allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use an existing/provided key and the security group created in step 1.

4. Launch one volume using the EFS service, attach it to your VPC, and mount that volume at /var/www/html.

5. The developer has uploaded the code into a GitHub repo; the repo also has some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public-readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

STEP 1: Configure AWS on the command line.

  • First open VS Code, then configure AWS from its terminal using the access key and secret key downloaded when the IAM user was created (see my Task 1 article for details). Also provide the region name ap-south-1 and the default output format json.
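For reference, a sample configuration session looks like this (substitute your own keys; the profile name Task2 matches the provider block below):

aws configure --profile Task2
AWS Access Key ID [None]: <your access key>
AWS Secret Access Key [None]: <your secret key>
Default region name [None]: ap-south-1
Default output format [None]: json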

STEP 2: CREATE THE TERRAFORM CODE

  • Create a new file named taskcloud2.tf containing the provider block; .tf is the Terraform file extension.

provider "aws" {
  region  = "ap-south-1"
  profile = "Task2"
}

  • Next, create the security group for the instance so clients can reach it from other devices. By default AWS acts as a firewall and blocks connections from outside the host, so we open the TCP ports we need: port 22 for SSH and port 80 for HTTP.

# -- Creating Security Group

resource "aws_security_group" "sg" {
  name        = "task2-sg"
  description = "Allow TLS inbound traffic"
  vpc_id      = "vpc-0e8fa522ca40ce0ac"

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "task2-sg"
  }
}
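Note: Step 4 reuses this same security group on the EFS mount target, and the mount only succeeds if NFS traffic is allowed through. A sketch of the extra ingress block to add inside the resource above (2049 is the standard NFS port; the open CIDR simply mirrors the rules above):

ingress {
  description = "NFS"
  from_port   = 2049
  to_port     = 2049
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}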

STEP 3

Launch an instance with the created key pair and security group. To connect to the instance we specify the path of the private key and the public IP of the instance, then install httpd and git so we can deploy a webpage.

# -- Creating EC2 instance

resource "aws_instance" "web_server" {
  ami           = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"

  root_block_device {
    volume_type           = "gp2"
    delete_on_termination = true
  }

  key_name        = "mytask2key"
  security_groups = [aws_security_group.sg.name]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/Dell/Downloads/mytask2key.pem")
    host        = self.public_ip # use self: a resource cannot reference its own address
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "task2_os"
  }
}
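This assumes the key pair mytask2key already exists in AWS and its .pem file is saved locally. If you would rather have Terraform create the key pair as well, a minimal sketch using the tls provider (the resource names here are my own, illustrative choices):

resource "tls_private_key" "task2_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "task2_key" {
  key_name   = "mytask2key"
  public_key = tls_private_key.task2_key.public_key_openssh
}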

STEP 4

Now we create the EFS file system. EFS needs a VPC to attach to, and since we haven't created one we use the default VPC. Once the file system is created, we create a mount target, mount the EFS volume at the /var/www/html directory, and clone all the required data from GitHub into it.

# -- Creating EFS volume

resource "aws_efs_file_system" "efs" {
  creation_token   = "efs"
  performance_mode = "generalPurpose"
  throughput_mode  = "bursting"
  encrypted        = true

  tags = {
    Name = "Efs"
  }
}
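Once applied, you can confirm the file system exists from the CLI (an optional sanity check using the profile from Step 1):

aws efs describe-file-systems --profile Task2 --region ap-south-1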

# -- Mounting the EFS volume

resource "aws_efs_mount_target" "efs-mount" {
  depends_on = [
    aws_instance.web_server,
    aws_security_group.sg,
    aws_efs_file_system.efs,
  ]

  file_system_id  = aws_efs_file_system.efs.id
  subnet_id       = aws_instance.web_server.subnet_id
  security_groups = [aws_security_group.sg.id]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/Dell/Downloads/mytask2key.pem")
    host        = aws_instance.web_server.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      # the EFS mount helper is needed to mount by file-system ID with type 'efs'
      "sudo yum install -y amazon-efs-utils",
      "sudo mount -t efs ${aws_efs_file_system.efs.id}:/ /var/www/html",
      # 'sudo echo >> file' fails because the redirection runs unprivileged; tee -a works
      "echo '${aws_efs_file_system.efs.id}:/ /var/www/html efs defaults,_netdev 0 0' | sudo tee -a /etc/fstab",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/kmnishashukla/task2.git /var/www/html/",
    ]
  }
}
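To confirm that EFS (and not the root EBS volume) is now backing the web root, you can SSH in and check the filesystem type (a sample check, assuming the key and instance above):

ssh -i mytask2key.pem ec2-user@<instance-public-ip>
df -hT /var/www/html    # the type column should show an NFS/EFS filesystem, not xfs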

STEP 5

Now I will create an S3 bucket in the same region and upload my image to it.

# -- Creating S3 bucket

resource "aws_s3_bucket" "mybucket" {
  bucket = "nisha6600"
  acl    = "public-read"

  tags = {
    Name = "nisha6600"
  }
}

# -- Uploading files to the S3 bucket

resource "aws_s3_bucket_object" "file_upload" {
  depends_on = [
    aws_s3_bucket.mybucket,
  ]

  bucket = "nisha6600"
  key    = "nature.jpg"
  source = "C:/Users/Dell/Downloads/nature.jpg"
  acl    = "public-read"
}
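After apply, a quick way to verify the upload (using the same CLI profile configured in Step 1):

aws s3 ls s3://nisha6600 --profile Task2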

STEP 6

In this step we create the CloudFront distribution, which serves the data in the S3 bucket through the nearest edge location whenever a client hits the site.

# -- Creating CloudFront distribution

resource "aws_cloudfront_distribution" "s3_distribution" {
  depends_on = [
    aws_efs_mount_target.efs-mount,
    aws_s3_bucket_object.file_upload,
  ]

  origin {
    domain_name = "${aws_s3_bucket.mybucket.bucket}.s3.amazonaws.com"
    origin_id   = "ak"
  }

  enabled             = true
  is_ipv6_enabled     = true
  default_root_object = "index.html"

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  default_cache_behavior {
    allowed_methods = ["HEAD", "GET"]
    cached_methods  = ["HEAD", "GET"]

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    default_ttl            = 3600
    max_ttl                = 86400
    min_ttl                = 0
    target_origin_id       = "ak"
    viewer_protocol_policy = "allow-all"
  }

  price_class = "PriceClass_All"

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
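It is also handy to print the distribution's domain name after apply; a small optional output block (the output name is my own choice):

output "cloudfront_domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}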

STEP 7

Connect to the instance and write an <img> tag pointing at the CloudFront URL of the S3 image into /var/www/html/index.html; at the end the page opens automatically in the Google Chrome browser.

# -- Updating the CloudFront URL in the web page

resource "null_resource" "nullremote3" {
  depends_on = [
    aws_cloudfront_distribution.s3_distribution,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/Dell/Downloads/mytask2key.pem")
    host        = aws_instance.web_server.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo su <<END",
      "echo \"<img src='http://${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.file_upload.key}' height='6300' width='1200'>\" >> /var/www/html/index.html",
      "END",
    ]
  }
}

# -- Starting Chrome for the output

resource "null_resource" "nulllocal1" {
  depends_on = [
    null_resource.nullremote3,
  ]

  provisioner "local-exec" {
    command = "start chrome ${aws_instance.web_server.public_ip}"
  }
}

HENCE THE TASK HAS BEEN DONE SUCCESSFULLY!
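To build the whole setup, run the standard Terraform workflow from the folder containing taskcloud2.tf:

terraform init
terraform apply -auto-approve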

And to tear everything down when you are finished:

terraform destroy -auto-approve

THANKS FOR READING MY ARTICLE!!!
