Managing a Complex Environment Using Terraform and Ansible's Dynamic Inventory and Roles on AWS

Hello, automation technicians! Back with another article. In this one, I configure the Apache webserver on two EC2 instances, one RedHat and one Ubuntu. Both instances are launched on AWS using Terraform (a great provisioning tool), and the webserver is then configured on them using Ansible (a great configuration management tool), with roles and a dynamic inventory.

Why Terraform and Ansible in particular? Take a look at the graph below for some context.

Ansible, Terraform, puppet, Salt, Chef Comparison

Ansible has taken the lead role in configuration management in the automation world, while Terraform is on the rise for provisioning infrastructure. The graph is a little outdated, but it still reflects today's reality.

In this article, all the code and concepts are explained in detail. So sit back with a cup of coffee and read it bit by bit.

So, let's start building this automation. First, we have to install Terraform from the link below.

Now you need to add it to the PATH environment variable. After setting the environment variable, check the Terraform version with either of these commands:

terraform version
terraform -version

You can download the latest version from the link below.

Next, we need to make a working directory and create a file with the .tf extension (.tf is Terraform's file extension).
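For example (the directory and file names here are just placeholders):

mkdir terraform_ws   # any directory name works
cd terraform_ws
touch main.tf        # Terraform reads every .tf file in the working directory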

provider "aws" {
region = "ap-south-1"
profile = "IAM_User_Name"
}

The Terraform code starts with the provider block. The profile points to a named AWS CLI profile for the IAM user you created in your AWS account, and you also set the region in the provider block.

provider → aws (or the name of your cloud provider)
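If that profile does not exist yet, the AWS CLI can create it (a standard CLI command, shown here as a pointer):

aws configure --profile IAM_User_Name
# prompts for the access key ID, secret access key, default region and output format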

variable "cidr_subnet1" {
description = "CIDR block for the subnet"
default = "192.168.1.0/24"
}
variable "cidr_subnet2" {
description = "CIDR block for the subnet"
default = "192.168.2.0/24"
}
variable "availability_zone" {
description = "availability zone to create subnet"
default = "ap-south-1"
}

The code above declares variables for the different subnets and the availability zone: cidr_subnet1, cidr_subnet2 and availability_zone. Each subnet is given its own IP range. Note that the availability_zone default must be an actual zone from your region (e.g. ap-south-1a), since ap-south-1 on its own is a region name.
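The resource blocks below also reference var.cidr_vpc and var.environment_tag, so these must be declared as well. A minimal sketch, with defaults that are my own assumptions (the VPC range only has to contain both subnets):

variable "cidr_vpc" {
  description = "CIDR block for the VPC"
  default     = "192.168.0.0/16"   # assumed; covers both subnets above
}
variable "environment_tag" {
  description = "Environment tag applied to all resources"
  default     = "Production"       # assumed value
}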

Creating VPC

Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you’ve defined. This virtual network closely resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

resource "aws_vpc" "vpc" {
cidr_block = "${var.cidr_vpc}"
enable_dns_support = true
enable_dns_hostnames = true
tags ={
Environment = "${var.environment_tag}"
Name= "TerraformVpc"
}
}

The code above creates the VPC. The aws_vpc block accepts the cidr_block, and the two DNS flags give DNS support and hostnames to EC2 instances after launch. You can also give the VPC a tag name; here I have used "TerraformVpc".

vpc
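Optionally (this block is not in the original code), an output can print the new VPC's ID after apply:

output "vpc_id" {
  value = aws_vpc.vpc.id
}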

Creating Subnets

A subnetwork, or subnet, is a logical subdivision of an IP network; dividing a network into two or more networks is called subnetting. AWS offers two kinds of subnets: public subnets, which let the internet reach the machines, and private subnets, which are hidden from the internet.

resource "aws_subnet" "subnet_public1_Lab1" {
vpc_id = "${aws_vpc.vpc.id}"
cidr_block = "${var.cidr_subnet1}"
map_public_ip_on_launch = "true"
availability_zone = "ap-south-1a"
tags ={
Environment = "${var.environment_tag}"
Name= "TerraformPublicSubnetLab1"
}
}
resource "aws_subnet" "subnet_public1_Lab2" {
vpc_id = "${aws_vpc.vpc.id}"
cidr_block = "${var.cidr_subnet2}"
map_public_ip_on_launch = "true"
availability_zone = "ap-south-1b"
tags ={
Environment = "${var.environment_tag}"
Name= "TerraformPublicSubnetLab2"
}
}

The code above creates the subnets. The subnet block accepts vpc_id, cidr_block, map_public_ip_on_launch (assign a public IP to instances at launch) and availability_zone. You can add tags so the subnets are easy to recognise after creation.

subnets

Creating Security Group

A security group acts as a virtual firewall for your EC2 instances, controlling incoming and outgoing traffic. If you don't specify a security group, Amazon EC2 uses the default security group. You can add rules to each security group that allow traffic to or from its associated instances.

resource "aws_security_group" "TerraformSG" {
name = "TerraformSG"
vpc_id = "${aws_vpc.vpc.id}"
ingress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags ={
Environment = "${var.environment_tag}"
Name= "TerraformSG"
}
}

The block above creates a security group. It accepts a name, the vpc_id and ingress (inbound) rules. Here I have allowed all traffic: protocol = "-1" means every protocol, from_port = 0 and to_port = 0 cover all ports, and 0.0.0.0/0 matches every source IP, so effectively the firewall is wide open. Normally you would list only the IP ranges and ports you want to allow inbound.

The egress block is the outbound rule. I have again used 0.0.0.0/0, meaning the instances can reach anything outbound. Finally, you can name the security group via the tags.

Security Group
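Opening everything is fine for a quick lab, but in practice you would usually allow only the ports this setup needs: SSH (22) for Ansible and HTTP (80) for the webserver. A tighter ingress sketch, if you want it:

ingress {
  from_port   = 22            # SSH, used by Ansible
  to_port     = 22
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}
ingress {
  from_port   = 80            # HTTP, served by Apache
  to_port     = 80
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}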

Creating InternetGateway

An internet gateway serves two purposes: to provide a target in your VPC route tables for internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses.

resource "aws_internet_gateway" "gw" {
vpc_id = "${aws_vpc.vpc.id}"
tags = {
Name = "Terraform_IG"
}
}

The code above creates the internet gateway. You need to specify the VPC in which to create it, and you can give it a name using the tags block.

Internet Gateway

Creating Route Table

A route table contains a set of rules, called routes, that are used to determine where network traffic from your subnet or gateway is directed.

resource "aws_route_table" "r" {
vpc_id = "${aws_vpc.vpc.id}"
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.gw.id
}
tags = {
Name = "TerraformRoteTable"
}
}

You need a route table for the internet gateway created above. Here I route the full IP range, 0.0.0.0/0, so my EC2 instances can reach the internet. The vpc_id allocates the route table to the right VPC, and you can name the route table via the tags block.

Route Table

Route Table Association To Subnets

We need to attach the route table that targets the internet gateway to the respective subnets inside the VPC.

resource "aws_route_table_association" "public" {
subnet_id = "${aws_subnet.subnet_public1_Lab1.id}"
route_table_id = "${aws_route_table.r.id}"
}

You need to specify which subnets you want exposed to the public world. A subnet associated with the internet gateway's route table becomes a public subnet; a subnet you don't associate stays private. An instance launched in a private subnet cannot be reached from outside, since it has no public IP and no path through the internet gateway.

You also need to specify the route table in the association block. If you omit it, the subnet falls back to the VPC's main route table, so to take your EC2 instances public you must reference the route table created above. Which IP range your instances can reach is up to you; I used 0.0.0.0/0, meaning the instances can access anything.
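One caveat: the association above only covers the Lab1 subnet, while the Ubuntu instance (below) is launched in the Lab2 subnet, which needs the same treatment to stay public. A second association fixes that (the resource name public2 is my own choice):

resource "aws_route_table_association" "public2" {
  subnet_id      = "${aws_subnet.subnet_public1_Lab2.id}"
  route_table_id = "${aws_route_table.r.id}"
}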

Creating Ec2 Instances

An EC2 instance is simply a virtual server in Amazon Web Services terminology. EC2 stands for Elastic Compute Cloud, a web service through which an AWS subscriber can request and provision compute servers in the AWS cloud. AWS provides multiple instance types for the respective business needs of the user.

resource "aws_instance" "testInstance1" {
ami = "ami-052c08d70def0ac62"
instance_type = "t2.micro"
subnet_id = "${aws_subnet.subnet_public1_Lab1.id}"
vpc_security_group_ids = ["${aws_security_group.TerraformSG.id}"]
key_name = "ansiblekey"
tags ={
Environment = "${var.environment_tag}"
Name= "Redhat"
}
}
resource "aws_instance" "testInstance2" {
ami = "ami-0a4a70bd98c6d6441"
instance_type = "t2.micro"
subnet_id = "${aws_subnet.subnet_public1_Lab2.id}"
vpc_security_group_ids = ["${aws_security_group.TerraformSG.id}"]
key_name = "ansiblekey"
tags ={
Environment = "${var.environment_tag}"
Name= "Ubuntu"
}
}

The code above creates the two instances. It accepts the following parameters:

ami → AMI ID of the image
instance_type → type of instance
subnet_id → the subnet to launch the instance in
vpc_security_group_ids → security group IDs
key_name → key pair name for the instance
Name → name tag of the instance
Ec2 Instances
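Optionally (these blocks are not in the original code), outputs can print the public IPs that you will later need for Ansible's ipaddress.yml:

output "redhat_public_ip" {
  value = aws_instance.testInstance1.public_ip
}
output "ubuntu_public_ip" {
  value = aws_instance.testInstance2.public_ip
}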

Now we need to run the Terraform code to produce the output above.

Initializing terraform code

The terraform init command is used to initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times.

terraform init
terraform initializing

Running Terraform Apply

The terraform apply command is used to apply the changes required to reach the desired state of the configuration, or the pre-determined set of actions generated by a terraform plan execution plan.

terraform apply
terraform apply

You need to answer yes or no to confirm the changes in AWS.

yes/no

As you can see, 9 resources have been added to AWS; the screenshot above shows the changes. You can also destroy everything you created with the following command:

terraform destroy

What we have created looks like the architecture below.

VPC Architecture

Now that the instances have launched successfully, we can use Ansible to configure the Apache webserver on them.

Apache Webserver Configuration Using Ansible

Ansible is a software tool that provides simple but powerful automation for cross-platform computer support. It is primarily intended for IT professionals, who use it for application deployment, updates on workstations and servers, cloud provisioning, configuration management, intra-service orchestration, and nearly anything a systems administrator does on a weekly or daily basis. Ansible doesn’t depend on agent software and has no additional security infrastructure, so it’s easy to deploy.

How Ansible works

In Ansible, there are two categories of computers: the control node and managed nodes. The control node is a computer that runs Ansible. There must be at least one control node, although a backup control node may also exist. A managed node is any device being managed by the control node.

Ansible works by connecting to nodes (clients, servers, or whatever you’re configuring) on a network, and then sending a small program called an Ansible module to that node. Ansible executes these modules over SSH and removes them when finished. The only requirement for this interaction is that your Ansible control node has login access to the managed nodes. SSH Keys are the most common way to provide access, but other forms of authentication are also supported.

Ansible playbooks

While modules provide the means of accomplishing a task, the way you use them is through an Ansible playbook. A playbook is a configuration file written in YAML that provides instructions for what needs to be done in order to bring a managed node into the desired state. Playbooks are meant to be simple, human-readable, and self-documenting. They are also idempotent, meaning that a playbook can be run on a system at any time without having a negative effect upon it. If a playbook is run on a system that’s already properly configured and in its desired state, then that system should still be properly configured after a playbook runs.

Modules in Ansible

Modules (also referred to as “task plugins” or “library plugins”) are discrete units of code that can be used from the command line or in a playbook task. Ansible executes each module, usually on the remote managed node, and collects return values.

Variables in Ansible:

Ansible uses variables to manage differences between systems. With Ansible, you can execute tasks and playbooks on multiple different systems with a single command. You can define these variables in your playbooks, in your inventory, in re-usable files or roles, or at the command line.

Ansible installation is covered in the slides below.

Ansible Installation
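In short, on a RedHat-style control node the installation boils down to something like this (assuming python3 and pip3 are available):

yum install python3 -y    # Python runtime for Ansible and the inventory script
pip3 install ansible      # install Ansible itself
ansible --version         # verify the installation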

Dynamic Inventory for AWS

Dynamic inventory is an Ansible plugin that calls the AWS API to fetch instance information at run time. It gives you the EC2 instance details dynamically so you can manage the AWS infrastructure.

Create a directory, then add that directory to Ansible's configuration file:

mkdir  Dynamic_Inventory_Data_Base

You can give any name to the directory.

Ansible Configuration File

As you can see in Ansible's configuration file, the inventory entry in the [defaults] section points at the directory that will hold the data of the instances launched on AWS.

Ansible manages Linux hosts over SSH, and on a first connection SSH asks you to confirm the host key with yes/no. To disable that prompt, set host_key_checking=false.

To suppress some warnings printed for commands, you can also set command_warnings=false.

To log in to the newly launched OS we need to provide its key. Note: the key must be a .pem file; the .ppk format will not work. You also need to restrict the key's permissions to read-only.

Permissions in Linux
0 → no permissions
1 → execute
2 → write
3 → write and execute
4 → read
5 → read and execute
6 → read and write
7 → read, write and execute
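Each digit is the sum of 4 (read) + 2 (write) + 1 (execute) and applies to the owner, group and others in turn. For example (file names are placeholders):

chmod 644 notes.txt   # owner: read+write (4+2); group and others: read (4)
chmod 755 run.sh      # owner: read+write+execute (7); group and others: read+execute (4+1)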

So here I am giving the key read-only permission for its owner:

chmod 400 keyname.pem

The code for the Ansible configuration file is below.

[defaults]
# directory holding the dynamic inventory scripts
inventory = /path/to/Dynamic_Inventory_Data_Base
# skip the SSH yes/no host key prompt
host_key_checking = false
# key-based login, no password prompt
ask_pass = false
# the .pem key used to log in to the instances
private_key_file = /path/key.pem

[privilege_escalation]
# run tasks as root via sudo on the managed nodes
become = true
become_method = sudo
become_user = root
become_ask_pass = false

As I said earlier, create a directory, e.g. Dynamic_Inventory_Data_Base, and install the dynamic inventory scripts inside it. First, install the wget software:

yum install wget

Next, download the two dynamic inventory files, ec2.py and ec2.ini. They are interdependent: ec2.ini stores the settings for your AWS account, while ec2.py is the script that runs and collects the information about the instances launched on AWS.

This 👇 command downloads the ec2.py dynamic inventory script:

wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.py

This 👇 command downloads the ec2.ini dynamic inventory configuration:

wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.ini


You need to make the two files executable (strictly only ec2.py needs it, but it does no harm):

chmod +x ec2.py
chmod +x ec2.ini

Making ec2.py and ec2.ini executable

Now you need to edit ec2.py. Its first line is the shebang that selects the Python interpreter; if you hit a Python module error like the one below, update the shebang to python3.

python module error

#!/usr/bin/python3

ec2.py

Now we need to set the region and the AWS IAM account's access_key and secret_access_key inside the ec2.ini file.

updating region
updating access_key and secret_access_key

You can also export the region, access key and secret key on the command line (these are the standard boto environment variable names):

export AWS_REGION='ap-south-1'
export AWS_ACCESS_KEY_ID=XXXX
export AWS_SECRET_ACCESS_KEY=XXXX

Everything is set now, so we can list the hosts we have on AWS:

ansible  all  --list-hosts
2 hosts available
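You can also query the inventory script directly; its --list option dumps every discovered instance grouped by region, tags, security groups and so on:

./ec2.py --list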

Creating Roles From Ansible Galaxy

Roles let you automatically load related vars_files, tasks, handlers, and other Ansible artifacts based on a known file structure. Once you group your content in roles, you can easily reuse them and share them with other users.

ansible-galaxy init redhat_webserver_role
ansible-galaxy init ubuntu_webserver_role

roles created successfully

Now we need to write the tasks, handlers and variables inside the respective sections of each role; the skeleton is shown below.

contents of the role
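For reference, ansible-galaxy init lays each role out in the standard skeleton:

redhat_webserver_role/
├── defaults/    # lowest-precedence default variables
├── files/       # static files to copy to managed nodes
├── handlers/    # handlers, triggered via notify
├── meta/        # role metadata and dependencies
├── tasks/       # the main list of tasks
├── templates/   # Jinja2 templates (my.html lives here)
├── tests/       # test inventory and playbook
└── vars/        # higher-precedence role variables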

Creating Tasks of redhat_webserver_role

---
# tasks file for redhat_webserver_role
- name: "Install dependent webserver software"
  package:
    name: "{{ package_name }}"
    state: present

- name: "Copying The Webpages"
  template:
    dest: "{{ doc_root }}/index.html"
    src: my.html
  notify: RestartService

- name: "Starting Httpd Service"
  service:
    name: "{{ service_name }}"
    state: started
In the task file above, I used the package module to install the Apache webserver, then the template module to copy the website content: the page lives in my.html and is copied into the document root, {{ doc_root }} (/var/www/html on RedHat). Finally, I used the service module to start the httpd service.
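The my.html template itself can be any page. A minimal placeholder matching the message shown in the screenshots later (the actual file in the repo may differ):

<!-- templates/my.html : a minimal placeholder page -->
<html>
  <body>
    <h1>Profile Page From AWS Ec2 Instance</h1>
  </body>
</html>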

Managing services using Handlers

Handlers are just like normal tasks in an Ansible playbook, but they run only when a task that carries a notify directive reports that it changed something. They are useful for secondary actions that might be required after running a task, such as restarting a service.

handlers
---
# handlers file for redhat_webserver_role
- name: "RestartService"
  service:
    name: "{{ service_name }}"
    state: restarted

As you can see, the handler puts the service (service_name, matching the service task above) into the restarted state. It is triggered from the tasks file via the notify keyword, so whenever the content of the webpage changes, the handler runs and restarts the webserver.

Similarly, you need to write the playbook files for ubuntu_webserver_role. The tasks and handlers are the same, since I am configuring the webserver on both systems, the RedHat instance as well as the Ubuntu one.

Main Playbook Code

The main playbook is the playbook that ties everything together: it pulls in the respective roles, variable files, hosts and remote users.

main_playbook.yml

- hosts: "{{ redhat_ip }}"
  remote_user: ec2-user
  vars_files:
    - ipaddress.yml
    - "{{ ansible_facts['distribution'] }}-{{ ansible_facts['distribution_major_version'] }}.yml"
  roles:
    - role: "/root/dynamic_vars/webserver_roles/redhat_webserver_role"

- hosts: "{{ ubuntu_ip }}"
  remote_user: ubuntu
  vars_files:
    - ipaddress.yml
    - "{{ ansible_facts['distribution'] }}-{{ ansible_facts['distribution_major_version'] }}.yml"
  roles:
    - role: "/root/dynamic_vars/webserver_roles/ubuntu_webserver_role"

As you can see in the main playbook above, I imported the respective roles created earlier. Make sure the paths in the roles section point to where your roles actually live.

The vars_files section also pulls in ipaddress.yml, which contains the IP addresses of the respective instances.

ipaddress
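A minimal ipaddress.yml simply maps the two host variables to the instances' public IPs (the values below are placeholders):

# ipaddress.yml
redhat_ip: "x.x.x.x"   # public IP of the RedHat instance
ubuntu_ip: "y.y.y.y"   # public IP of the Ubuntu instance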

The two plays target different hosts with their respective users. Since I launched the EC2 instances from RedHat and Ubuntu images, each needs its own remote_user: for the RedHat instance the user is ec2-user, whereas the user for Ubuntu is ubuntu.

You may have noticed the facts referenced in the vars_files section above. They resolve as follows.

When the redhat_webserver_role runs:
ansible_facts['distribution'] = 'RedHat'
ansible_facts['distribution_major_version'] = '8'
Resulting vars file name: RedHat-8.yml

When the ubuntu_webserver_role runs:
ansible_facts['distribution'] = 'Ubuntu'
ansible_facts['distribution_major_version'] = '20'
Resulting vars file name: Ubuntu-20.yml

RedHat-8.yml and Ubuntu-20.yml then act as vars_files and are imported automatically. Because this happens dynamically, it scales to a complex environment: however many operating systems you have, Ansible picks the vars file matching each one.

dynamic vars
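These OS-specific vars files carry the values the roles consume (package_name, service_name, doc_root). A sketch with the usual Apache names, which may differ from the repo's exact values:

# RedHat-8.yml
package_name: httpd
service_name: httpd
doc_root: /var/www/html

# Ubuntu-20.yml
package_name: apache2
service_name: apache2
doc_root: /var/www/html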

Running The Main Playbook

ansible-playbook main_playbook.yml

The output will look like this when you run the main playbook:

Running main_playbook.yml

Now, using the public IPs, we can see the webpages served by the respective instances.

RedHat Ec2 Instance’s website deployed

As you can see, the webpage on the RedHat EC2 instance has been deployed successfully; note the message "Profile Page From AWS Ec2 Instance".

Ubuntu Ec2 Instance’s website deployed

As you can see, the webpage on the Ubuntu instance has been deployed successfully as well; note the message "Profile Page From AWS Ubuntu Instance".

Code Link On Github Below

https://github.com/amit17133129/Managing-Complex-Environment-Using-Terraform-And-Ansible-On-AWS

Note: if the GitHub link above does not work, copy the URL below and paste it into your browser, including the full stop at the end.

https://github.com/amit17133129/Managing-Complex-Environment-Using-Terraform-And-Ansible-On-AWS.

Hope you found this article interesting. Make sure to give it a clap 👏👏.
