Part 3 of 3: HumanGov: Ansible is the Answer! | Ansible | Terraform | Python | Git | AWS CodeCommit | AWS Cloud9 | AWS IAM | AWS EC2 | AWS DynamoDB | AWS S3


1 of 29. Open AWS Cloud9

2 of 29. Modify modules/

Add local-exec provisioners to EC2 resource

You will add three provisioners:
The first waits briefly, then adds the instance's SSH host key to ~/.ssh/known_hosts.
The second appends an entry with the host's connection details and application variables to the Ansible inventory file.
The third runs at destroy time and deletes the inventory entry matching the ID of the instance being destroyed.

provisioner "local-exec" {
  command = "sleep 30; ssh-keyscan ${self.private_ip} >> ~/.ssh/known_hosts"
}

provisioner "local-exec" {
  command = "echo ${var.state_name} id=${self.id} ansible_host=${self.private_ip} ansible_user=ubuntu us_state=${var.state_name} aws_region=${var.region} aws_s3_bucket=${aws_s3_bucket.state_s3.bucket} aws_dynamodb_table=${aws_dynamodb_table.state_dynamodb.name} >> /etc/ansible/hosts"
}

provisioner "local-exec" {
  command = "sed -i '/${self.id}/d' /etc/ansible/hosts"
  when    = destroy
}
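The destroy-time cleanup can be sanity-checked offline before wiring it into Terraform. A minimal sketch using a throwaway file and hypothetical instance IDs:

```shell
# Build a fake inventory with two hypothetical entries
printf 'california id=i-0abc123 ansible_host=10.0.1.25\ntexas id=i-0def456 ansible_host=10.0.2.31\n' > /tmp/hosts-demo

# Same pattern as the destroy provisioner: delete the line matching the instance id
sed -i '/i-0abc123/d' /tmp/hosts-demo

cat /tmp/hosts-demo   # only the texas entry remains
```

This mirrors how the provisioner prunes /etc/ansible/hosts, so destroyed instances disappear from the Ansible inventory automatically.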

The file should now look like this

resource "aws_security_group" "state_ec2_sg" {
  name        = "humangov-${var.state_name}-ec2-sg"
  description = "Allow traffic on ports 80 and 5000, permit Cloud9"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 5000
    to_port     = 5000
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port       = 0
    to_port         = 0
    protocol        = "-1"
    security_groups = ["sg-05b2e6f0305ae4271"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "humangov-${var.state_name}"
  }
}

resource "aws_instance" "state_ec2" {
  ami                    = "ami-007855ac798b5175e"
  instance_type          = "t2.micro"
  key_name               = "humangov-ec2-key"
  vpc_security_group_ids = [aws_security_group.state_ec2_sg.id]
  iam_instance_profile   = aws_iam_instance_profile.s3_dynamodb_full_access_instance_profile.name

  provisioner "local-exec" {
    command = "sleep 30; ssh-keyscan ${self.private_ip} >> ~/.ssh/known_hosts"
  }

  provisioner "local-exec" {
    command = "echo ${var.state_name} id=${self.id} ansible_host=${self.private_ip} ansible_user=ubuntu us_state=${var.state_name} aws_region=${var.region} aws_s3_bucket=${aws_s3_bucket.state_s3.bucket} aws_dynamodb_table=${aws_dynamodb_table.state_dynamodb.name} >> /etc/ansible/hosts"
  }

  provisioner "local-exec" {
    command = "sed -i '/${self.id}/d' /etc/ansible/hosts"
    when    = destroy
  }

  tags = {
    Name = "humangov-${var.state_name}"
  }
}

resource "aws_dynamodb_table" "state_dynamodb" {
  name         = "humangov-${var.state_name}-dynamodb"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"

  attribute {
    name = "id"
    type = "S"
  }

  tags = {
    Name = "humangov-${var.state_name}"
  }
}

resource "random_string" "bucket_suffix" {
  length  = 7
  special = false
  upper   = false
}

resource "aws_s3_bucket" "state_s3" {
  bucket = "humangov-${var.state_name}-s3-${random_string.bucket_suffix.result}"

  tags = {
    Name = "humangov-${var.state_name}"
  }
}

resource "aws_s3_bucket_ownership_controls" "state_s3" {
  bucket = aws_s3_bucket.state_s3.id

  rule {
    object_ownership = "BucketOwnerPreferred"
  }
}

resource "aws_s3_bucket_acl" "state_s3" {
  depends_on = [aws_s3_bucket_ownership_controls.state_s3]

  bucket = aws_s3_bucket.state_s3.id
  acl    = "private"
}

resource "aws_iam_role" "s3_dynamodb_full_access_role" {
  name = "humangov-${var.state_name}-s3_dynamodb_full_access_role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF

  tags = {
    Name = "humangov-${var.state_name}"
  }
}

resource "aws_iam_role_policy_attachment" "s3_full_access_role_policy_attachment" {
  role       = aws_iam_role.s3_dynamodb_full_access_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}

resource "aws_iam_role_policy_attachment" "dynamodb_full_access_role_policy_attachment" {
  role       = aws_iam_role.s3_dynamodb_full_access_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess"
}

resource "aws_iam_instance_profile" "s3_dynamodb_full_access_instance_profile" {
  name = "humangov-${var.state_name}-s3_dynamodb_full_access_instance_profile"
  role = aws_iam_role.s3_dynamodb_full_access_role.name

  tags = {
    Name = "humangov-${var.state_name}"
  }
}

3 of 29. Modify Terraform module variables file modules/aws_humangov_infrastructure/

Add the variable region

variable "state_name" {
  description = "The name of the US State"
}

variable "region" {
  default = "us-east-1"
}

4 of 29. Create an empty Ansible inventory file at /etc/ansible/hosts

Make sure to set ownership of the hosts file and the ansible directory. The file will initially be empty.

sudo mkdir /etc/ansible
sudo touch /etc/ansible/hosts
sudo chown ec2-user:ec2-user /etc/ansible/hosts
sudo chown -R ec2-user:ec2-user /etc/ansible
cat /etc/ansible/hosts

5 of 29. Provision the infrastructure

The /etc/ansible/hosts file should be populated now.

cd ~/environment/human-gov-infrastructure/terraform
terraform plan
terraform apply
cat /etc/ansible/hosts
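For reference, after the apply each EC2 instance appends one entry to the inventory. With hypothetical instance IDs, addresses, and bucket suffix, a line looks like:

```
california id=i-0123456789abcdef0 ansible_host=10.0.1.25 ansible_user=ubuntu us_state=california aws_region=us-east-1 aws_s3_bucket=humangov-california-s3-ab12cd3 aws_dynamodb_table=humangov-california-dynamodb
```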

6 of 29. Commit changes to the local Git repository

git status
git add .
git status
git commit -m "Added variable and provisioners to Terraform module aws_humangov_infrastructure/"

7 of 29. In the following steps, we will create the Ansible role "humangov_webapp" with the structure below

humangov_webapp/
├── defaults
│   └── main.yml
├── handlers
│   └── main.yml
├── tasks
│   └── main.yml
├── templates
│   ├── humangov.service.j2
│   └── nginx.conf.j2
└── vars
    └── main.yml

8 of 29. Set up the directory structure for the Ansible role "humangov_webapp"

Note: Ansible Galaxy was used to set up this structure in the prior article.

cd ~/environment/human-gov-infrastructure
mkdir ansible
cd ansible
mkdir -p roles/humangov_webapp/tasks
mkdir -p roles/humangov_webapp/handlers
mkdir -p roles/humangov_webapp/templates
mkdir -p roles/humangov_webapp/defaults
mkdir -p roles/humangov_webapp/vars
mkdir -p roles/humangov_webapp/files
touch roles/humangov_webapp/tasks/main.yml
touch roles/humangov_webapp/handlers/main.yml
touch roles/humangov_webapp/templates/nginx.conf.j2
touch roles/humangov_webapp/templates/humangov.service.j2
touch roles/humangov_webapp/defaults/main.yml
touch roles/humangov_webapp/vars/main.yml
touch deploy-humangov.yml

9 of 29. Create the Ansible config file (ansible.cfg)

The file will be placed in the "ansible" folder. This disables deprecation warnings. Note: You probably don't want to do this on real infrastructure.

[defaults]
deprecation_warnings = False

10 of 29. Use the Ansible ping module against the created instance(s)

ansible all -m ping -e "ansible_ssh_private_key_file=/home/ec2-user/environment/humangov-ec2-key.pem"
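A successful ping returns output along these lines for each inventory entry (host alias and ordering will vary):

```
california | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```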

11 of 29. Modify defaults/main.yml

These are defaults for the role variables

---
username: ubuntu
project_name: humangov
project_path: "/home/{{ username }}/{{ project_name }}"
source_application_path: /home/ec2-user/environment/human-gov-application/src

12 of 29. Modify handlers/main.yml

These handlers will be triggered by tasks

---
- name: Restart Nginx
  systemd:
    name: nginx
    state: restarted
  become: yes

- name: Restart humangov
  systemd:
    name: humangov
    state: restarted
  become: yes

13 of 29. Modify tasks/main.yml

Tasks are defined here.

---
- name: Update and upgrade apt packages
  apt:
    upgrade: dist
    update_cache: yes
  become: yes

- name: Install required packages
  apt:
    name:
      - nginx
      - python3-pip
      - python3-dev
      - build-essential
      - libssl-dev
      - libffi-dev
      - python3-setuptools
      - python3-venv
      - unzip
    state: present
  become: yes

- name: Ensure UFW allows Nginx HTTP traffic
  ufw:
    rule: allow
    name: 'Nginx HTTP'
  become: yes

- name: Create project directory
  file:
    path: "{{ project_path }}"
    state: directory
    owner: "{{ username }}"
    group: "{{ username }}"
    mode: '0755'
  become: yes

- name: Create Python virtual environment
  command:
    cmd: python3 -m venv {{ project_path }}/humangovenv
    creates: "{{ project_path }}/humangovenv"

- name: Copy the application zip file to the destination
  copy:
    src: "{{ source_application_path }}/"
    dest: "{{ project_path }}"
    owner: "{{ username }}"
    group: "{{ username }}"
    mode: '0644'
  become: yes

- name: Unzip the application zip file
  unarchive:
    src: "{{ project_path }}/"
    dest: "{{ project_path }}"
    remote_src: yes
  notify: Restart humangov
  become: yes

- name: Install Python packages from requirements.txt into the virtual environment
  pip:
    requirements: "{{ project_path }}/requirements.txt"
    virtualenv: "{{ project_path }}/humangovenv"

- name: Create systemd service file for Gunicorn
  template:
    src: humangov.service.j2
    dest: /etc/systemd/system/{{ project_name }}.service
  notify: Restart humangov
  become: yes

- name: Enable and start Gunicorn service
  systemd:
    name: "{{ project_name }}"
    enabled: yes
    state: started
  become: yes

- name: Remove the default nginx configuration file
  file:
    path: /etc/nginx/sites-enabled/default
    state: absent
  become: yes

- name: Change permissions of the user's home directory
  file:
    path: "/home/{{ username }}"
    mode: '0755'
  become: yes

- name: Configure Nginx to proxy requests
  template:
    src: nginx.conf.j2
    dest: /etc/nginx/sites-available/{{ project_name }}
  become: yes

- name: Enable Nginx configuration
  file:
    src: /etc/nginx/sites-available/{{ project_name }}
    dest: /etc/nginx/sites-enabled/{{ project_name }}
    state: link
  notify: Restart Nginx
  become: yes

14 of 29. Modify templates/humangov.service.j2

Jinja2 template for Gunicorn systemd service

Many of the environment variables are coming from /etc/ansible/hosts

[Unit]
Description=Gunicorn instance to serve {{ project_name }}

[Service]
User={{ username }}
Group=www-data
WorkingDirectory={{ project_path }}
Environment="US_STATE={{ us_state }}"
Environment="PATH={{ project_path }}/humangovenv/bin"
Environment="AWS_REGION={{ aws_region }}"
Environment="AWS_DYNAMODB_TABLE={{ aws_dynamodb_table }}"
Environment="AWS_BUCKET={{ aws_s3_bucket }}"
ExecStart={{ project_path }}/humangovenv/bin/gunicorn --workers 1 --bind unix:{{ project_path }}/{{ project_name }}.sock -m 007 {{ project_name }}:app

[Install]
WantedBy=multi-user.target

15 of 29. Modify templates/nginx.conf.j2

Jinja2 template for Nginx configuration

server {
    listen 80;
    server_name humangov www.humangov;

    location / {
        include proxy_params;
        proxy_pass http://unix:{{ project_path }}/{{ project_name }}.sock;
    }
}

16 of 29. Modify deploy-humangov.yml

Add the "humangov_webapp" role to the playbook

- hosts: all
  roles:
    - humangov_webapp

17 of 29. Run the "deploy-humangov.yml" Ansible Playbook

ansible-playbook deploy-humangov.yml -e "ansible_ssh_private_key_file=/home/ec2-user/environment/humangov-ec2-key.pem"

18 of 29. Test the HumanGov App

Connect to the public DNS name. To avoid checking the EC2 console, you can run a quick AWS CLI query to pull that information.

aws ec2 describe-instances \
  --query 'Reservations[*].Instances[*].{Instance:InstanceId,Name:Tags[?Key==`Name`]|[0].Value,PublicDNS:PublicDnsName,PublicIP:PublicIpAddress,PrivateIP:PrivateIpAddress,State:State.Name}' \
  --output table

19 of 29. Add more states

Modify /home/ec2-user/environment/human-gov-infrastructure/terraform/

variable "states" {
  description = "The list of state names"
  default     = ["california", "texas", "missouri"]
}
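For context, the root module set up in the prior article consumes this list by instantiating the infrastructure module once per state. A sketch, assuming the module block is named aws_humangov_infrastructure (the name is taken from the directory structure, not shown in this post):

```hcl
# Sketch only: module block name assumed from the series
module "aws_humangov_infrastructure" {
  source     = "./modules/aws_humangov_infrastructure"
  for_each   = toset(var.states)
  state_name = each.value
}
```

With `for_each`, adding a state to the list and re-applying provisions one more complete stack without touching the existing ones.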

20 of 29. Provision the new states infrastructure using Terraform

This should add the additional states. The /etc/ansible/hosts file should be updated with the additional states.

cd ~/environment/human-gov-infrastructure/terraform
terraform plan
terraform apply
cat /etc/ansible/hosts

21 of 29. Re-run the playbook "deploy-humangov.yml"

cd ~/environment/human-gov-infrastructure/ansible
ansible all -m ping -e "ansible_ssh_private_key_file=/home/ec2-user/environment/humangov-ec2-key.pem"
ansible-playbook deploy-humangov.yml -e "ansible_ssh_private_key_file=/home/ec2-user/environment/humangov-ec2-key.pem"

22 of 29. Check the websites

Instead of querying the AWS CLI for the DNS names, recall that we recorded this information in the Terraform outputs, so let's check there.

cd ~/environment/human-gov-infrastructure/terraform
terraform output
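If your root module doesn't yet expose these values, an output block along these lines would record them. The names here are illustrative assumptions, not taken from the original code, and presume the module exposes the instance's public DNS:

```hcl
# Hypothetical output: map each state to its EC2 public DNS name
output "state_public_dns" {
  value = {
    for state, m in module.aws_humangov_infrastructure :
    state => m.ec2_public_dns
  }
}
```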

23 of 29. Test the site, add employee to one state, and check another

I'll add an employee to California; this employee won't be visible in the other states, demonstrating that each state uses a separate database.

24 of 29. Commit changes

cd ~/environment/human-gov-infrastructure
git status
git add .
git status
git commit -m "Ansible configuration 1st commit plus file changed and added states Missouri and Texas"

25 of 29. Re-enable temporary credentials on Cloud9

If you recall, we turned off the temporary credentials in a prior article so we could create IAM roles on AWS (which the temporary credentials cannot do). Now that all the infrastructure has been created, we can revert. This also restores seamless interoperability with AWS CodeCommit, enabling us to push changes to the cloud repository.

Settings > AWS Settings > Credentials > Turn ON the option "AWS managed temporary credentials"

If you check the CodeCommit repository for the infrastructure after pushing, you should see your latest commits.

git push -u origin

26 of 29. In addition, push the application source to the AWS Code Commit remote repository

Note: here I will pull with rebase and then push, because the local source is not up to date with changes in the remote repository.

cd ~/environment/human-gov-application
git pull --rebase
git push -u origin

27 of 29. A few evidence screenshots

Screenshot: Show all the human-gov-infrastructure commits
Screenshot: Show all the human-gov-application commits
Screenshot: Show at least three different states with EC2 instances running (including the IP address)

28 of 29. Create a new access key

We are preparing to destroy the infrastructure, but the Cloud9 temporary credentials will not be able to remove the IAM roles we created earlier via Terraform. While we previously used "aws configure", there is also a supported "export" option that sets environment variables for the access key and secret (without disabling the temporary credentials). Note: we'll have to create a new access key to do this (unless you saved the old one somewhere).

'cloud9-user' > Security credentials > Create access key
1. Access key best practices & alternatives: select "Command Line Interface (CLI)" and check "I understand the above recommendation and want to proceed to create an access key." [Next]
2. Set description tag - optional. [Create access key]
3. Retrieve access keys

sudo chown -R ec2-user:ec2-user /etc/ansible

29 of 29. Implement the environment variables and destroy the infrastructure

We can export the key and secret created in the previous step, and then destroy the infrastructure (including the IAM roles). Substitute your own access key ID and secret below.

Note: the implementation currently has a bug: it cannot remove an S3 bucket that is not empty. I worked around this by going back to the California S3 bucket and deleting the object in it. So, if you get an error about a bucket not being empty, empty it first (for example, `aws s3 rm s3://<bucket-name> --recursive`). You may also see a permission-denied complaint from the destroy provisioner acting on /etc/ansible; the chown command in the previous step addresses that.

export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
cd ~/environment/human-gov-infrastructure/terraform
terraform destroy
terraform show


AWS Cloud9 Documentation

AWS CodeCommit tutorial for AWS Cloud9

Amazon Elastic Compute Cloud Documentation

Security groups

Amazon DynamoDB Documentation

Temporary Credentials

Documentation | Terraform | HashiCorp Developer

Provisioners | Terraform | HashiCorp Developer

Define input variables | Terraform | HashiCorp Developer

Command: destroy | Terraform | HashiCorp Developer

Ansible Documentation

How to build your inventory -- Ansible Documentation

Roles -- Ansible Documentation

ping module -- Try to connect to host, verify a usable python and return pong on success -- Ansible Documentation

Ansible playbooks -- Ansible Documentation

Handlers: running operations on change -- Ansible Documentation

Git - Reference

Jinja - Jinja Documentation (3.1.x)

Template Designer Documentation - Jinja Documentation (3.1.x)

Python 3.12.1 Documentation

EnvironmentVariables - Community Help Wiki

UncomplicatedFirewall - Ubuntu Wiki

How To Serve Flask Applications with Gunicorn and Nginx on Ubuntu 22.04

How To Install Nginx on Ubuntu 22.04

