Category: DEVOPS


AWS/Terraform/Ansible/OpenShift – Provision a Lightsail instance and further configure it using Infrastructure as Code

In this article we will provision a Lightsail host with docker/docker-compose on it using Terraform, and then install and initialize OpenShift Origin on it using Ansible.

OpenShift is Red Hat’s containerization platform, which utilizes Kubernetes. Origin (what we will be working with here) is its open-source implementation.

We will use ‘myweb’ as an example in this article, using the same base path of ‘dev’ that was previously created, the container-admin group and using ~/.local/bin for the binaries.

Please ensure you have gone through the preceding Terraform and Ansible articles (and those related to them), as well as ‘Get started with Lightsail for free’.

–>
Edit the IAM Policy “AllowLightsail”:
AWS UI Console -> Services -> Security, Identity, & Compliance -> IAM -> Policies -> AllowLightsail -> Edit Policy -> JSON ->

Append lightsail:OpenInstancePublicPorts and lightsail:CloseInstancePublicPorts after lightsail:ReleaseStaticIp.

It will look like this:

                  "lightsail:ReleaseStaticIp",
                  "lightsail:OpenInstancePublicPorts",
                  "lightsail:CloseInstancePublicPorts"

Review Policy -> Save Changes

Go in to the dev directory/link located within your home directory:

$ cd ~/dev

Upgrade the AWS CLI on your host:

$ pip3 install awscli --upgrade --user && chmod 754 ~/.local/bin/aws

Grab the latest version of Terraform:

$ wget https://releases.hashicorp.com/terraform/0.12.20/terraform_0.12.20_linux_amd64.zip

Unzip it to ~/.local/bin and set permissions accordingly on it:

$ unzip terraform_0.12.20_linux_amd64.zip -d ~/.local/bin && chmod 754 ~/.local/bin/terraform

Change to the myweb directory inside terraform:

$ cd terraform/myweb

Upgrade our code to 0.12.20 (type yes and hit enter when prompted):

$ terraform 0.12upgrade

Change our instance from a micro to a medium so it will have sufficient resources to run OpenShift Origin and related:

$ sed -i s:micro_2_0:medium_2_0: lightsail.tf

Output the Public IP of the provisioned host (along with connection parameters and variables) into a file which we will feed into an Ansible playbook run:

$ cat << 'EOF' >> output.tf
>
> resource "local_file" "hosts" {
>   content              = "[vps]\n${aws_lightsail_static_ip.myweb.ip_address} ansible_connection=ssh ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/myweb instance=${aws_lightsail_instance.myweb.name}"
>   filename             = "${path.module}/../../ansible/hosts"
>   directory_permission = "0754"
>   file_permission      = "0664"
> }
> EOF
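
For reference, the rendered ../../ansible/hosts file will look roughly like this (the IP below is a placeholder for your allocated static IP):

[vps]
203.0.113.10 ansible_connection=ssh ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/myweb instance=site_myweb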

Amend an item in the user_data script:

$ sed -i 's:sudo apt-key add -:apt-key add -:' scripts/install.sh

Initialize the directory/refresh module(s):

$ terraform init

Run a dry-run to see what will occur:

$ terraform plan

Provision:

$ terraform apply -auto-approve

Create a work folder for an Ansible playbook:

$ cd ../../ansible
$ mkdir -p openshift/scripts && cd openshift

Create an Ansible playbook which will install/initialize OpenShift Origin on our provisioned host:

$ cat << 'EOF' > openshift.yml 
> # Install, initialize OpenShift Origin and create a destroy routine for it
> ---
> - hosts: vps
>   connection: local
>
>   vars:
>     openshift_directory: /home/ubuntu/.local/etc/openshift
>
>   tasks:
>     - name: Discover Services
>       service_facts:
>
>     - name: Check if openshift directory exists
>       stat:
>         path: "{{ openshift_directory }}"
>       register: openshift_dir
>
>     - name: Open Firewall Ports
>       delegate_to: localhost
>       command: bash -c './scripts/firewall.sh open {{ hostvars[groups['vps'][0]].instance }}'
>       when:
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>  
>     - name: Copy and Run install
>       environment:
>         PATH: "{{ ansible_env.PATH}}:{{ openshift_directory }}/../../bin"
>       args:
>         executable: /bin/bash
>       script: "./scripts/install.sh {{ ansible_ssh_host }}"
>       when:
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>
>     - debug: msg="Please install docker to proceed."
>       when: "'docker' not in services"
>
>     - debug: msg="Install script has already been completed.  Run this playbook with the destroy tag, then run once again normally to re-intialize openshift."
>       when: openshift_dir.stat.exists == True
>
>     - name: Destroy
>       become: yes
>       environment:
>         PATH: "{{ ansible_env.PATH}}:{{ openshift_directory }}/../../bin"
>       args:
>         executable: /bin/bash
>       shell: "cd {{ openshift_directory }} && oc cluster down && cd ../ && rm -r {{ openshift_directory }}/../../../.kube {{ openshift_directory }}"
>       tags: [ 'never', 'destroy' ]
>
>     - name: Close Firewall Ports
>       delegate_to: localhost
>       command: bash -c './scripts/firewall.sh close {{ hostvars[groups['vps'][0]].instance }}'
>       tags: [ 'never', 'destroy' ]
> EOF
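
You can list the tasks the playbook defines before running it (the Destroy and Close Firewall Ports tasks will only run when the destroy tag is passed):

$ ansible-playbook -i ../hosts openshift.yml --list-tasks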

Create a shell script which will pull the latest release of the client tools from GitHub, place the needed binaries in ~/.local/bin, set the insecure registry on Docker and initialize the cluster:

$ cat << 'EOF' > scripts/install.sh
> #!/bin/bash
> [[ -z $* ]] && { echo "Please specify a Public IP or Host/Domain name." && exit 1; }
> # Fetch and Install
> file_url="$(curl -sL https://github.com/openshift/origin/releases/latest | grep "download.*client.*linux-64" | cut -f2 -d\" | sed 's/^/https:\/\/github.com/')"
> [[ -z $file_url ]] && { echo "The URL could not be obtained.  Please try again shortly." && exit 1; }
> file_name="$(echo $file_url | cut -f9 -d/)"
> if [[ ! -f $file_name ]]; then
>         curl -sL $file_url --output $file_name
>         folder_name="$(tar ztf $file_name 2>/dev/null | head -1 | sed s:/.*::)"
>         [[ -z $folder_name ]] && { echo "The archive could not be read.  Please try again." && rm -f $file_name && exit 1; }
>         tar zxf $file_name
>         mv $folder_name/oc $folder_name/kubectl $HOME/.local/bin && rm -r $folder_name
>         chmod 754 $HOME/.local/bin/oc $HOME/.local/bin/kubectl
> fi
> # Docker insecure
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 2 ]] && redirect=">"
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 1 ]] && redirect=">>"
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 0 ]] || { sudo bash -c "cat << 'EOF' $redirect /etc/docker/daemon.json
> {
>         \"insecure-registries\" : [ \"172.30.0.0/16\" ]
> }
> EOF" && sudo systemctl restart docker; }
> # OpenShift Origin up
> [[ ! -d $HOME/.local/etc/openshift ]] && { mkdir -p $HOME/.local/etc/openshift && cd $HOME/.local/etc/openshift; } || { cd $HOME/.local/etc/openshift && oc cluster down; }
> oc cluster up --public-hostname=$1
>
> exit 0
> EOF
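
Optionally, check the script for syntax errors without executing it:

$ bash -n scripts/install.sh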

The Lightsail firewall functionality is currently being implemented in Terraform and is not yet available in Ansible. In the interim, we will create a shell script that opens and closes the ports needed by OpenShift Origin (using the AWS CLI). This script is run locally by the playbook during the create and destroy routines:

$ cat << 'EOF' > scripts/firewall.sh && chmod 754 scripts/firewall.sh
> #!/bin/bash
> #
> openshift_ports="53/UDP 443/TCP 1936/TCP 4001/TCP 7001/TCP 8053/UDP 8443/TCP 10250_10259/TCP"
> #
> [[ -z $* || $(echo $* | xargs -n1 | wc -l) -ne 2 || ! ($* =~ $(echo '\<open\>') || $* =~ $(echo '\<close\>')) ]] && { echo "Please pass in the desired action [ open, close ] and instance [ site_myweb ]." && exit 2; }
> #
> instance="$(echo $* | xargs -n1 | sed '/\<open\>/d; /\<close\>/d')"
> [[ -z $instance ]] && { echo "Please double-check the passed in instance." && exit 1; }
> action="$(echo $* | xargs -n1 | grep -v $instance)"
> #
> for port in $openshift_ports; do
>         aws lightsail $action-instance-public-ports --instance-name $instance --port-info fromPort=$(echo $port | cut -f1 -d_ | cut -f1 -d/),protocol=$(echo $port | cut -f2 -d/),toPort=$(echo $port | cut -f2 -d_ | cut -f1 -d/)
> done
> #
>
> exit 0
> EOF
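
The playbook invokes this script itself, but it can also be run by hand if needed, for example against the site_myweb instance from the preceding Terraform article:

$ ./scripts/firewall.sh open site_myweb
$ ./scripts/firewall.sh close site_myweb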

Run the Ansible playbook after a few minutes (accept the host key by typing yes and hitting enter when prompted):

$ ansible-playbook -i ../hosts openshift.yml

After a short while, log on to the instance:

$ ssh -i ~/.ssh/myweb ubuntu@<The value of static_public_ip that was reported.  One can also use 'terraform output static_public_ip' to print it again.>

To get an overview of the current project with any identified issues:

$ oc status --suggest

Log on as Admin via CMD Line and switch to the default project:

$ oc login -u system:admin -n default

Logout of the session:

$ oc logout

Please see the Command-Line Walkthrough.

Logout from the host:

$ logout

Log on as Admin via Web Browser (replace <PUBLIC_IP>):

https://<PUBLIC_IP>:8443/console (You will get a Certificate/Site warning due to a mismatch).

Please see the Web Console Walkthrough.

To shut down the OpenShift Origin cluster, destroy the working folder and start anew (you can re-run the playbook normally to reinitialize):

$ ansible-playbook -i ../hosts openshift.yml --tags "destroy"

Tear down what was created by first performing a dry-run to see what will occur:

$ cd ../../terraform/myweb && terraform plan -destroy 

Tear down the instance:

$ terraform destroy -auto-approve

<–

References:
how-to-install-openshift-origin-on-ubuntu-18-04

Source:
ansible_openshift

AWS/Ansible – Provision a Lightsail instance using Infrastructure as Code

Note: This article has been duplicated from the previous article which uses Terraform and has been modified for Ansible.

AWS has introduced Lightsail to compete with Digital Ocean, Linode, etc. for an inexpensive VPS (Virtual Private Server) offering.

In this article we will use Ansible (Infrastructure as Code) to swiftly bring up an AWS Lightsail instance in us-east-1 on a dynamic IP and install docker/docker-compose on it.

We will use ‘myweb’ as an example in this article, using the same base path of ‘dev’ that was previously created, the container-admin group and using ~/.local/bin|lib for the binaries/libraries.

Please use ‘Get started with Lightsail for free’ prior to commencing with this article.

–>
Go in to the dev directory/link located within your home directory:

$ cd ~/dev

Upgrade the AWS CLI on your host:

$ pip3 install awscli --upgrade --user && chmod 754 ~/.local/bin/aws

Install/Upgrade Ansible:

$ pip3 install ansible --upgrade --user && chmod 754 ~/.local/bin/ansible ~/.local/bin/ansible-playbook

Install/Upgrade Boto3:

$ pip3 install boto3 --upgrade --user

Create a work folder and change in to it:

$ mkdir -p ansible/myweb/scripts && cd ansible/myweb

Add an IAM Policy to the container-admin group so it will have access to the Lightsail API:
AWS UI Console -> Services -> Security, Identity, & Compliance -> IAM -> Policies -> Create Policy -> JSON (replace <AWS ACCOUNT ID> in the Resource arn with your Account’s ID (shown under the top right drop-down (of your name) within the My Account page next to the Account Id: under Account Settings)):

{
     "Version": "2012-10-17",
     "Statement": [
         {
             "Effect": "Allow",
             "Action": [
                 "lightsail:GetRelationalDatabaseEvents",
                 "lightsail:GetActiveNames",
                 "lightsail:GetOperations",
                 "lightsail:GetBlueprints",
                 "lightsail:GetRelationalDatabaseMasterUserPassword",
                 "lightsail:ExportSnapshot",
                 "lightsail:UnpeerVpc",
                 "lightsail:GetRelationalDatabaseLogEvents",
                 "lightsail:GetRelationalDatabaseBlueprints",
                 "lightsail:GetRelationalDatabaseBundles",
                 "lightsail:CopySnapshot",
                 "lightsail:GetRelationalDatabaseMetricData",
                 "lightsail:PeerVpc",
                 "lightsail:IsVpcPeered",
                 "lightsail:UpdateRelationalDatabaseParameters",
                 "lightsail:GetRegions",
                 "lightsail:GetOperation",
                 "lightsail:GetDisks",
                 "lightsail:GetRelationalDatabaseParameters",
                 "lightsail:GetBundles",
                 "lightsail:GetRelationalDatabaseLogStreams",
                 "lightsail:CreateKeyPair",
                 "lightsail:ImportKeyPair",
                 "lightsail:DeleteKeyPair",
                 "lightsail:GetInstance",
                 "lightsail:CreateInstances",
                 "lightsail:DeleteInstance",
                 "lightsail:GetDomains",
                 "lightsail:GetDomain",
                 "lightsail:CreateDomain",
                 "lightsail:DeleteDomain",
                 "lightsail:GetStaticIp",
                 "lightsail:AllocateStaticIp",
                 "lightsail:AttachStaticIp",
                 "lightsail:DetachStaticIp",
                 "lightsail:ReleaseStaticIp"
             ],
             "Resource": "*"         
         },
         {
             "Effect": "Allow",
             "Action": "lightsail:",
             "Resource": [
                 "arn:aws:lightsail::<AWS ACCOUNT ID>:StaticIp/*",
                 "arn:aws:lightsail::<AWS ACCOUNT ID>:ExportSnapshotRecord/*",
                 "arn:aws:lightsail::<AWS ACCOUNT ID>:Instance/*",
                 "arn:aws:lightsail::<AWS ACCOUNT ID>:CloudFormationStackRecord/*",
                 "arn:aws:lightsail::<AWS ACCOUNT ID>:RelationalDatabaseSnapshot/*",
                 "arn:aws:lightsail::<AWS ACCOUNT ID>:RelationalDatabase/*",
                 "arn:aws:lightsail::<AWS ACCOUNT ID>:InstanceSnapshot/*",
                 "arn:aws:lightsail::<AWS ACCOUNT ID>:Domain/*",
                 "arn:aws:lightsail::<AWS ACCOUNT ID>:LoadBalancer/*",
                 "arn:aws:lightsail::<AWS ACCOUNT ID>:KeyPair/*",
                 "arn:aws:lightsail::<AWS ACCOUNT ID>:Disk/*"
             ]
         }
     ]
 }

Review Policy ->

Name: AllowLightsail
Description: Allow access to Lightsail.

Create Policy.

Groups -> container-admin -> Attach Policy -> Search for AllowLightsail -> Attach Policy.

Generate an SSH Key Pair (no password) and restrict permissions on it:

$ ssh-keygen -q -t rsa -b 2048 -N '' -f ~/.ssh/myweb && chmod 400 ~/.ssh/myweb

Import the public key to Lightsail:

$ aws lightsail import-key-pair --key-pair-name myweb --public-key-base64 file://~/.ssh/myweb.pub
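
Optionally, confirm that the key pair was imported:

$ aws lightsail get-key-pairs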

Create a hosts file and specify localhost:

$ cat << 'EOF' > hosts
> [local]
> localhost
> EOF
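
Optionally, confirm that Ansible parses the inventory as expected:

$ ansible-inventory -i hosts --list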

Create a micro instance based off of Ubuntu 18_04 and reference an external file for user_data (a run-once script executed on Virtual Machine boot):

$ cat << 'EOF' > lightsail.yml
> # Create a new AWS Lightsail instance, register the instance details and add a way to destroy it
> ---
> - hosts: local
>   connection: local
>
>   tasks:
>     - lightsail:
>         state: present
>         name: myweb.com
>         region: us-east-1
>         zone: us-east-1a
>         blueprint_id: ubuntu_18_04
>         bundle_id: micro_2_0
>         key_pair_name: myweb
>         user_data: "{{ lookup('file', './scripts/install.sh') }}"
>         wait_timeout: 500
>       register: myweb
>
>     - debug: msg="Public IP is {{ myweb.instance.public_ip_address }} for {{ myweb.instance.name }}"
>       when: myweb.instance.public_ip_address is defined
>
>     - debug: msg="Run this playbook for {{ myweb.instance.name }} shortly to list the Public IP."
>       when: myweb.instance.public_ip_address is not defined
>
>     - lightsail:
>         state: absent
>         name: myweb.com
>         region: us-east-1
>       tags: [ 'never', 'destroy' ]
> EOF
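
A quick syntax check of the playbook can be run at any point:

$ ansible-playbook -i hosts lightsail.yml --syntax-check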

Create the shell script for user_data:

$ cat << 'EOF' > scripts/install.sh
> #!/bin/bash
>
> MY_HOME="/home/ubuntu"
> export DEBIAN_FRONTEND=noninteractive
>
> # Install prereqs
> apt update
> apt install -y python3-pip apt-transport-https ca-certificates curl software-properties-common
> # Install docker
> curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
> add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
> apt update
> apt install -y docker-ce
> # Install docker-compose
> su ubuntu -c "mkdir -p $MY_HOME/.local/bin"
> su ubuntu -c "pip3 install docker-compose --upgrade --user && chmod 754 $MY_HOME/.local/bin/docker-compose"
> usermod -aG docker ubuntu
> # Add PATH
> printf "\nexport PATH=\$PATH:$MY_HOME/.local/bin\n" >> $MY_HOME/.bashrc
>
> exit 0
> EOF

Run the playbook:

$ ansible-playbook -i hosts lightsail.yml

Log on to the instance (up to ~30 seconds may be needed for the attachment of the dynamic IP to the instance):

$ ssh -i ~/.ssh/myweb ubuntu@<The value of public_ip_address that was reported.  One can also re-run the playbook to print it again.>

Type yes and hit enter to accept.

On the host (a short while is needed for the run-once script to complete):

$ docker --version
$ docker-compose --version
$ logout

Tear down the instance:

$ ansible-playbook -i hosts lightsail.yml --tags "destroy"

<–

References:

Source:
ansible_myweb

AWS/Terraform – Provision a Lightsail instance using Infrastructure as Code

AWS has introduced Lightsail to compete with Digital Ocean, Linode, etc. for an inexpensive VPS (Virtual Private Server) offering.

In this article we will use Terraform (Infrastructure as Code) to swiftly bring up an AWS Lightsail instance in us-east-1 on a static IP, add a DNS Zone for the site in mention and install docker/docker-compose on it.

We will use ‘myweb’ as an example in this article, using the same base path of ‘dev’ that was previously created, the container-admin group and using ~/.local/bin for the binary.

Please use ‘Get started with Lightsail for free’ prior to commencing with this article.

–>
Go in to the dev directory/link located within your home directory:

$ cd ~/dev

Upgrade the AWS CLI on your host:

$ pip3 install awscli --upgrade --user && chmod 754 ~/.local/bin/aws

Grab Terraform:

$ wget https://releases.hashicorp.com/terraform/0.12.9/terraform_0.12.9_linux_amd64.zip

Install Unzip if you do not have it installed:

$ sudo apt update && sudo apt -y install unzip

Unzip it to ~/.local/bin and set permissions accordingly on it:

$ unzip terraform_0.12.9_linux_amd64.zip -d ~/.local/bin && chmod 754 ~/.local/bin/terraform

Create a work folder and change in to it:

$ mkdir -p terraform/myweb/scripts && cd terraform/myweb

Add an IAM Policy to the container-admin group so it will have access to the Lightsail API:
AWS UI Console -> Services -> Security, Identity, & Compliance -> IAM -> Policies -> Create Policy -> JSON (replace <AWS ACCOUNT ID> in the Resource arn with your Account’s ID (shown under the top right drop-down (of your name) within the My Account page next to the Account Id: under Account Settings)):

{
     "Version": "2012-10-17",
     "Statement": [
         {
             "Effect": "Allow",
             "Action": [
                 "lightsail:GetRelationalDatabaseEvents",
                 "lightsail:GetActiveNames",
                 "lightsail:GetOperations",
                 "lightsail:GetBlueprints",
                 "lightsail:GetRelationalDatabaseMasterUserPassword",
                 "lightsail:ExportSnapshot",
                 "lightsail:UnpeerVpc",
                 "lightsail:GetRelationalDatabaseLogEvents",
                 "lightsail:GetRelationalDatabaseBlueprints",
                 "lightsail:GetRelationalDatabaseBundles",
                 "lightsail:CopySnapshot",
                 "lightsail:GetRelationalDatabaseMetricData",
                 "lightsail:PeerVpc",
                 "lightsail:IsVpcPeered",
                 "lightsail:UpdateRelationalDatabaseParameters",
                 "lightsail:GetRegions",
                 "lightsail:GetOperation",
                 "lightsail:GetDisks",
                 "lightsail:GetRelationalDatabaseParameters",
                 "lightsail:GetBundles",
                 "lightsail:GetRelationalDatabaseLogStreams",
                 "lightsail:CreateKeyPair",
                 "lightsail:ImportKeyPair",
                 "lightsail:DeleteKeyPair",
                 "lightsail:GetInstance",
                 "lightsail:CreateInstances",
                 "lightsail:DeleteInstance",
                 "lightsail:GetDomains",
                 "lightsail:GetDomain",
                 "lightsail:CreateDomain",
                 "lightsail:DeleteDomain",
                 "lightsail:GetStaticIp",
                 "lightsail:AllocateStaticIp",
                 "lightsail:AttachStaticIp",
                 "lightsail:DetachStaticIp",
                 "lightsail:ReleaseStaticIp"
             ],
             "Resource": "*"         
         },
         {
             "Effect": "Allow",
             "Action": "lightsail:",
             "Resource": [
                 "arn:aws:lightsail::<AWS ACCOUNT ID>:StaticIp/*",
                 "arn:aws:lightsail::<AWS ACCOUNT ID>:ExportSnapshotRecord/*",
                 "arn:aws:lightsail::<AWS ACCOUNT ID>:Instance/*",
                 "arn:aws:lightsail::<AWS ACCOUNT ID>:CloudFormationStackRecord/*",
                 "arn:aws:lightsail::<AWS ACCOUNT ID>:RelationalDatabaseSnapshot/*",
                 "arn:aws:lightsail::<AWS ACCOUNT ID>:RelationalDatabase/*",
                 "arn:aws:lightsail::<AWS ACCOUNT ID>:InstanceSnapshot/*",
                 "arn:aws:lightsail::<AWS ACCOUNT ID>:Domain/*",
                 "arn:aws:lightsail::<AWS ACCOUNT ID>:LoadBalancer/*",
                 "arn:aws:lightsail::<AWS ACCOUNT ID>:KeyPair/*",
                 "arn:aws:lightsail::<AWS ACCOUNT ID>:Disk/*"
             ]
         }
     ]
 }

Review Policy ->

Name: AllowLightsail
Description: Allow access to Lightsail.

Create Policy.

Groups -> container-admin -> Attach Policy -> Search for AllowLightsail -> Attach Policy.

Generate an SSH Key Pair (no password) and restrict permissions on it:

$ ssh-keygen -q -t rsa -b 2048 -N '' -f ~/.ssh/myweb && chmod 400 ~/.ssh/myweb

Import the public key to Lightsail:

$ aws lightsail import-key-pair --key-pair-name myweb --public-key-base64 file://~/.ssh/myweb.pub

Set the provider version to greater than or equal to 2.0, interpolate the region and use the AWS CLI credentials file:

$ cat << 'EOF' > provider.tf
> provider "aws" {
>   version                 = ">= 2.0"
>
>   region                  = "${var.region}"
>   shared_credentials_file = "~/.aws/credentials"
>   profile                 = "default"
> }
> EOF

Set the default region as a variable:

$ cat << 'EOF' > vars.tf
> variable "region" {
>   default = "us-east-1"
> }
> EOF

Create a Lightsail DNS Zone of myweb.com, allocate a static IP for it, create a micro instance based off of Ubuntu 18_04, reference an external file for user_data (a run-once script executed on Virtual Machine boot), tag it and attach the allocated static IP to it:

$ cat << 'EOF' > lightsail.tf
> resource "aws_lightsail_domain" "myweb" {
>   domain_name = "myweb.com"
> }
>
> resource "aws_lightsail_static_ip" "myweb" {
>   name = "static-ip_myweb"
> }
>
> resource "aws_lightsail_instance" "myweb" {
>   name                    = "site_myweb"
>   availability_zone       = "${var.region}a"
>   blueprint_id            = "ubuntu_18_04"
>   bundle_id               = "micro_2_0"
>   key_pair_name           = "myweb"
>   user_data               = "${data.template_file.init_script.rendered}"
>
>   tags = {
>         Site = "myweb.com"
>     }
> }
>
> resource "aws_lightsail_static_ip_attachment" "myweb" {
>   static_ip_name = "${aws_lightsail_static_ip.myweb.name}"
>   instance_name  = "${aws_lightsail_instance.myweb.name}"
> }
> EOF

Output our allocated and attached static IP after creation:

$ cat << 'EOF' > output.tf
> output "static_public_ip" {
>   value = "${aws_lightsail_static_ip.myweb.ip_address}"
> }
> EOF

Create a template file to reference a run-once on boot user_data script:

$ cat << 'EOF' > install.tf
> data "template_file" "init_script" {
>   template = "${file("scripts/install.sh")}"
> }
> EOF

Create the shell script for user_data:

$ cat << 'EOF' > scripts/install.sh
> #!/bin/bash
>
> MY_HOME="/home/ubuntu"
> export DEBIAN_FRONTEND=noninteractive
>
> # Install prereqs
> apt update
> apt install -y python3-pip apt-transport-https ca-certificates curl software-properties-common
> # Install docker
> curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
> add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
> apt update
> apt install -y docker-ce
> # Install docker-compose
> su ubuntu -c "mkdir -p $MY_HOME/.local/bin"
> su ubuntu -c "pip3 install docker-compose --upgrade --user && chmod 754 $MY_HOME/.local/bin/docker-compose"
> usermod -aG docker ubuntu
> # Add PATH
> printf "\nexport PATH=\$PATH:$MY_HOME/.local/bin\n" >> $MY_HOME/.bashrc
>
> exit 0
> EOF

Initialize the directory:

$ terraform init
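
Optionally, validate the configuration and check its formatting (the directory must be initialized first):

$ terraform validate
$ terraform fmt -check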

Run a dry-run to see what will occur:

$ terraform plan

Provision:

$ terraform apply -auto-approve

Log on to the instance (up to ~30 seconds may be needed for the attachment of the static IP to the instance):

$ ssh -i ~/.ssh/myweb ubuntu@<The value of static_public_ip that was reported.  One can also use 'terraform output static_public_ip' to print it again.>

Type yes and hit enter to accept.

On the host (a short while is needed for the run-once script to complete):

$ docker --version
$ docker-compose --version
$ logout

Tear down what was created by first performing a dry-run to see what will occur:

$ terraform plan -destroy 

Tear down the instance:

$ terraform destroy -auto-approve

<–

References:

Source:
terraform_aws_myweb

Docker – Launch a Containerized Web Server and a PHP Processor

In this article we will use docker-compose to spin up two containers. One will run a web server (Nginx) and one will run php-fpm (Hypertext Preprocessor FastCGI Process Manager). All requests for .php will be forwarded to the php-fpm service, which runs in the separate container, for processing, and the result will be served by Nginx. We will use images based off of Alpine, a minimal Linux environment with a small footprint.

Please go through aws-azure-development-environment-setup-for-containerization prior to commencing with this article.

–>
Change directory in to the dev/docker path located within your home directory:

$ cd ~/dev/docker

Create a directory called myweb and go in to it:

$ mkdir myweb && cd myweb

Create five directories called nginx, nginx/conf, php-fpm, src and src/web:

$ mkdir -p nginx/conf php-fpm src/web

Create a docker-compose.yml file like so:

$ cat << 'EOF' > docker-compose.yml
> version: '3'
>
> services:
>     nginx:
>         image: myweb-nginx:v1
>         build: ./nginx
>         volumes:
>             - ./src/web:/var/www/html
>         ports:
>             - 80:80
>         depends_on:
>             - php-fpm
>
>     php-fpm:
>         image: myweb-php-fpm:v1
>         build: ./php-fpm
>         volumes:
>             - ./src/web:/var/www/html
> EOF

This sets the compose file version to 3 and defines two services, nginx and php-fpm. Each service is built from what is inside its folder and has src/web overlaid into it (so you can work on the content without rebuilding the container), and the nginx service maps port 80 (HTTP) on the host to port 80 inside the container. Nginx requires php-fpm, so you denote that with depends_on; that service will start first. The ‘image:’ key names and tags the resulting docker images.
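
At any point you can have docker-compose validate the file and print the resolved configuration:

$ docker-compose config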

Create a Dockerfile inside nginx:

$ cat << 'EOF' > nginx/Dockerfile
> FROM nginx:stable-alpine
> COPY ./conf/default.conf /etc/nginx/conf.d
> EOF

Create a default.conf inside nginx/conf:

$ cat << 'EOF' > nginx/conf/default.conf
> server {
>    index index.php;
>    server_name "";
>    error_log /var/log/nginx/error.log;
>    access_log /var/log/nginx/access.log;
>    root /var/www/html;
>
>    location ~\.php$ {
>        try_files $uri =404;
>        fastcgi_split_path_info ^(.+\.php)(/.+)$;
>        fastcgi_pass php-fpm:9000;
>        fastcgi_index index.php;
>        include fastcgi_params;
>        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
>        fastcgi_param PATH_INFO $fastcgi_path_info;
>    }
> }
> EOF

This will pull the stable Nginx docker image based off of Alpine and copy our default.conf into the container. The default.conf sets up the web server to forward .php requests over to our container called ‘php-fpm’ on the default port of 9000, and sets the default index page to index.php.

Create a Dockerfile inside php-fpm:

$ cat << 'EOF' > php-fpm/Dockerfile
> FROM php:7.3-fpm-alpine
> EOF

This will pull the php-fpm 7.3 docker image based off of Alpine.

Create an index.php in src/web:

$ cat << 'EOF' > src/web/index.php
> <html>
>  <head>
>   <title>Hello World - PHP Test</title>
>  </head>
>  <body>
>   <?php echo '<p>Hello World</p>'; ?>
>  </body>
> </html>
> EOF

Now build both images, launch both services and send them to the background (detached mode):

$ docker-compose up -d

Now use curl to make a request:

$ curl http://localhost

You should see:

<html>
 <head>
  <title>Hello World - PHP Test</title>
 </head>
 <body>
  <p>Hello World</p>
 </body>
</html>

You can see the containers running by typing:

$ docker ps

You can list the containers:

$ docker container ls

You can stop the containers:

$ docker-compose stop

You can remove the containers:

$ docker-compose down

You can list the images:

$ docker images
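
You can follow the logs of both services while they are running:

$ docker-compose logs -f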

<–

References:

Source:
docker_myweb

Azure – Configure the CLI in the Development Environment for Containerization

In this article we will set up the Azure CLI to interact with Azure AKS (Azure Kubernetes Service) and Azure ACR (Azure Container Registry).

–>
UI Console -> Azure Active Directory
App registrations -> New registration -> * Name: container-admin -> Supported account types: Accounts in this organizational directory only -> Register

container-admin -> Certificates & secrets -> New client secret -> Description: azure-cli ; Expires: In 1 year -> Add -> Take note of the <Client secret value>

Search for (top left): Subscriptions -> Click your subscription -> Access control (IAM) -> Add -> Add role assignment -> Role: Azure Kubernetes Service Cluster Admin Role -> Assign access to: Azure AD user, group, or service principal -> Select: (search for) container-admin -> Save

Azure Active Directory -> App registrations -> container-admin -> Overview -> Take note of the <Application (client) ID> and <Directory (tenant) ID>

$ [[ ! -d ~/.azure ]] && mkdir ~/.azure

(Replace <Application (client) ID>, <Client secret value> and <Directory (tenant) ID> below):

$ cat << "EOF" > ~/.azure/credentials
> [container-admin]
> application_id=<Application (client) ID>
> client_secret=<Client secret value>
> directory_id=<Directory (tenant) ID>
> EOF
$ chmod o-rw,g-w ~/.azure/credentials

Add a function to your startup file to parse the above file for the pertinent login information as a servicePrincipal and run az login in a sub-shell when you execute it (ensure you get pertinent output regarding the Subscription):

$ cat << 'EOF' >> ~/.bashrc
>
> function az-login-sp() {
>         (export $(grep -v '^\[' $HOME/.azure/credentials | xargs) && az login --service-principal -u $application_id -p $client_secret --tenant $directory_id)
> }
> EOF
$ . ~/.bashrc
$ az-login-sp

Ensure you get appropriate output (the value will be []):

$ az aks list

Ensure you get appropriate output (the value will be []):

$ az acr list

<–

References:

AWS – Configure the CLI in the Development Environment for Containerization

In this article we will set up the AWS CLI to interact with AWS EKS (Elastic Kubernetes Service) and AWS ECR (Elastic Container Registry).

–>
UI Console -> Find Services -> IAM (Manage User Access and Encryption Keys)

Users -> Add User -> aws-cli -> Access type* -> Select Programmatic Access -> Next: Permissions -> Set Permissions: Add user to group -> Create group -> Group name: container-admin -> Create group-> Next: Tags -> Key: Name ; Value: Container Admin -> Next: Review -> Create user -> Download .csv or take note of the Access key ID and Secret access key (click Show to uncover) -> Close

Policies -> Create policy -> JSON (tab) -> Copy and paste the below in to the provided box (replace <AWS ACCOUNT ID> in the Resource arn with your Account’s ID (shown under the top right drop-down (of your name) within the My Account page next to the Account Id: under Account Settings)):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:CreateInstanceProfile",
                "iam:DeleteInstanceProfile",
                "iam:GetRole",
                "iam:GetInstanceProfile",
                "iam:RemoveRoleFromInstanceProfile",
                "iam:CreateRole",
                "iam:DeleteRole",
                "iam:AttachRolePolicy",
                "iam:PutRolePolicy",
                "iam:ListInstanceProfiles",
                "iam:AddRoleToInstanceProfile",
                "iam:ListInstanceProfilesForRole",
                "iam:PassRole",
                "iam:DetachRolePolicy",
                "iam:DeleteRolePolicy",
                "iam:GetRolePolicy",
                "iam:DeleteServiceLinkedRole",
                "iam:CreateServiceLinkedRole"
            ],
            "Resource": [
                "arn:aws:iam::<AWS ACCOUNT ID>:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling",
                "arn:aws:iam::<AWS ACCOUNT ID>:instance-profile/eksctl-*",
                "arn:aws:iam::<AWS ACCOUNT ID>:role/eksctl-*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "cloudformation:*",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "eks:*"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeScalingActivities",
                "autoscaling:CreateLaunchConfiguration",
                "autoscaling:DeleteLaunchConfiguration",
                "autoscaling:UpdateAutoScalingGroup",
                "autoscaling:DeleteAutoScalingGroup",
                "autoscaling:CreateAutoScalingGroup"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "ec2:DeleteInternetGateway",
            "Resource": "arn:aws:ec2:*:*:internet-gateway/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:DeleteSubnet",
                "ec2:DeleteTags",
                "ec2:CreateNatGateway",
                "ec2:CreateVpc",
                "ec2:AttachInternetGateway",
                "ec2:DescribeVpcAttribute",
                "ec2:DeleteRouteTable",
                "ec2:AssociateRouteTable",
                "ec2:DescribeInternetGateways",
                "ec2:CreateRoute",
                "ec2:CreateInternetGateway",
                "ec2:RevokeSecurityGroupEgress",
                "ec2:CreateSecurityGroup",
                "ec2:ModifyVpcAttribute",
                "ec2:DeleteInternetGateway",
                "ec2:DescribeRouteTables",
                "ec2:ReleaseAddress",
                "ec2:AuthorizeSecurityGroupEgress",
                "ec2:DescribeTags",
                "ec2:CreateTags",
                "ec2:DeleteRoute",
                "ec2:CreateRouteTable",
                "ec2:DetachInternetGateway",
                "ec2:DescribeNatGateways",
                "ec2:DisassociateRouteTable",
                "ec2:AllocateAddress",
                "ec2:DescribeSecurityGroups",
                "ec2:RevokeSecurityGroupIngress",
                "ec2:DeleteSecurityGroup",
                "ec2:DeleteNatGateway",
                "ec2:DeleteVpc",
                "ec2:CreateSubnet",
                "ec2:DescribeSubnets",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeImages",
                "ec2:describeAddresses",
                "ec2:DescribeVpcs",
                "ec2:CreateLaunchTemplate",
                "ec2:DescribeLaunchTemplates",
                "ec2:RunInstances",
                "ec2:DeleteLaunchTemplate",
                "ec2:DescribeLaunchTemplateVersions",
                "ec2:DescribeImageAttribute",
                "ec2:DescribeKeyPairs",
                "ec2:ImportKeyPair"
            ],
            "Resource": "*"
        }
    ]
} 

Review policy -> Name*: AllowEKS -> Description: Allows access to EKS and related. -> Create policy

Create policy -> JSON (tab) -> Copy and paste the below in to the provided box:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Resource": "*",
            "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "ecr:BatchCheckLayerAvailability",
                "ecr:PutImage",
                "ecr:InitiateLayerUpload",
                "ecr:UploadLayerPart",
                "ecr:CompleteLayerUpload",
                "ecr:DescribeRepositories",
                "ecr:GetRepositoryPolicy",
                "ecr:ListImages",
                "ecr:DescribeImages",
                "ecr:CreateRepository",
                "ecr:DeleteRepository",
                "ecr:BatchDeleteImage",
                "ecr:SetRepositoryPolicy",
                "ecr:DeleteRepositoryPolicy"
            ]
        }
    ]
}

Review policy -> Name*: AllowECR -> Description: Allows access to ECR. -> Create policy

Groups -> Click on container-admin -> Permissions (tab)
Attach Policy -> Search for AllowEKS in Filter: -> Select -> Attach Policy
Attach Policy -> Search for AllowECR in Filter: -> Select -> Attach Policy

$ mkdir ~/.aws

(Replace <AWS ACCESS KEY>/<AWS SECRET KEY> with the values given after the username creation and <AWS DEFAULT REGION> with your default region that you would like to execute in (i.e. us-east-1, us-west-2, etc.), where applicable):

$ cat << EOF > ~/.aws/credentials
> [default]
> aws_access_key_id=<AWS ACCESS KEY>
> aws_secret_access_key=<AWS SECRET KEY>
> EOF
$ chmod o-rw,g-w ~/.aws/credentials
$ cat << EOF > ~/.aws/config
> [default]
> region=<AWS DEFAULT REGION>
> EOF
$ chmod og-w ~/.aws/config

Ensure you get appropriate output (the value will be [] for “clusters”:):

$ aws eks list-clusters

Ensure you get appropriate output (the value will be [] for “repositories”:):

$ aws ecr describe-repositories

<–

References:

AWS/Azure – Development Environment Setup for Containerization

In this article we will set up our development environment to utilize containerization technology in future series/demos.

Technologies that we will work with are Kubernetes (AWS EKS (Elastic Kubernetes Service)/Azure AKS (Azure Kubernetes Service)) and Docker (AWS ECR (Elastic Container Registry)/Azure ACR (Azure Container Registry)).

Kubernetes allows for management of containers to provide scalability, high availability and fault tolerance, among other things. Containers are isolated packages/images that contain all that is needed to run an application. They are lightweight, portable and ensure a consistent runtime.

–>
The below was written using Ubuntu 18.04.2-LTS Desktop (minimal) and Windows Subsystem for Linux (WSL2 and WSL1; Ubuntu) on Windows 10 Insider Preview (18950.1000 (Fast Ring); 18362.267 (May 2019 Update)). It should be applicable to Debian and to Ubuntu Docker (the lsb-release package has been left in to accommodate this), and will be applicable to any Fast Ring Windows 10 build >= 18917.

Note: Any build lower than 18917 should use Docker Desktop and use the .exe commands in WSL via named pipes.

If you are using Windows 10 Home or a non-Windows 10 based build, then please use Vagrant (non-Windows 10) to spin up a Virtual Machine, or Docker Toolbox (Windows 10 Home in conjunction with WSL).
<–

–>
Windows 10:
Start -> Control Panel -> Programs and Features -> Turn Windows features on or off -> Select Windows Subsystem for Linux -> ok -> Restart now.

Open a Browser -> WSL Store -> Open Microsoft Store -> Ubuntu -> Get -> Launch or Start -> Search for programs and files -> Ubuntu (with the colored circular icon and hit enter) -> Installing, this may take a few minutes… -> Enter new UNIX username: -> Enter new UNIX password: -> Retype new UNIX password: -> Right click the command window (on the toolbar) -> Options -> Quick Edit Mode -> ok
<–

–>
Not for Windows 10 builds < 18917

$ logout

Start -> Search for powershell -> right click -> Run as administrator and select Yes to the Elevated UAC prompt)
PS > Enable-WindowsOptionalFeature -Online -FeatureName VirtualMachinePlatform

Select Reboot (default) when prompted.

Convert Ubuntu to WSL2:
Start -> Run: powershell

To see what Distros are installed, state and version:

> wsl -l -v 

Set WSL2 on the distribution:

> wsl --set-version Ubuntu 2 

Note: This will take a while.

After it is finished, run the list command from two steps above (wsl -l -v) to see that it is now Version 2.

Start -> Search for programs and files -> Ubuntu (with the colored circular icon and hit enter)
<–

Install some prerequisites:
Note: software-properties-common isn’t needed if not installing docker (Windows 10 (WSL) builds < 18917).

$ sudo apt update
$ sudo apt install -y python3-pip apt-transport-https ca-certificates curl gnupg2 lsb-release software-properties-common

–>
Restart services during package upgrades without asking? -> Tab to <Yes> and hit Enter.
<–

Install AWS CLI:

$ pip3 install awscli --upgrade --user && chmod 754 ~/.local/bin/aws

Install EKSCTL (weaveworks):

$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C ~/.local/bin && chmod 754 ~/.local/bin/eksctl

Install KUBECTL:

$ curl -o ~/.local/bin/kubectl --silent -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod 754 ~/.local/bin/kubectl

Install Docker:

–>
Not for Windows 10 (WSL) < 18917.

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ sudo apt update
$ sudo apt install -y docker-ce

Show the status of docker (it should be enabled and running; if not, then perform an ‘enable’ and ‘start’ (without the ‘| grep …’)):

–>
Not for Windows 10 (WSL).

$ systemctl status docker | grep -i "loaded\|active"

<–

–>
For Windows 10 (WSL (>=18917)).

Start and show the status of docker:

$ sudo service docker start
$ sudo service docker status

It should come up in to a running state.

Note: This will stay running as a background task when WSL is closed; however, when the host is rebooted or the user session is logged out of, it will need to be launched again upon opening WSL. This section will be updated at a later time on how to have this start when the host boots.
<–

Install Docker Compose:

–>
Not for Windows 10 (WSL) < 18917.

$ pip3 install docker-compose --upgrade --user && chmod 754 ~/.local/bin/docker-compose

Allow your user the right to execute docker without sudo:

$ sudo usermod -aG docker $USER
$ logout

Log back in to a shell.

Create a work folder, inside your home directory:

$ mkdir -p dev/docker

<–

–>
For Windows 10 builds < 18917

Start -> Control Panel -> Programs and Features -> Turn Windows features on or off -> Select Hyper-V and Containers -> ok -> Restart now.

Install Docker Desktop -> Leave Defaults -> Log out and back in -> System Tray should show it Launching to a Run state.

Click System Tray -> Settings -> Shared Drives -> Select C -> Apply -> Type in your Windows user’s password to confirm -> close out of settings

Launch the WSL shell.

Create a work folder inside the Windows mount, symbolically link it within your home directory inside WSL, and alias the bare docker/docker-compose commands to the .exe versions (adding them to .bashrc for permanence):

$ mkdir -p /mnt/c/dev/docker
$ ln -s /mnt/c/dev dev
$ cat << EOF >> ~/.bashrc
>
> alias docker=docker.exe
> alias docker-compose=docker-compose.exe
> EOF
$ alias docker=docker.exe
$ alias docker-compose=docker-compose.exe

<–

Install Azure CLI:

$ curl -sL https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/microsoft.asc.gpg 1>/dev/null
$ echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/azure-cli.list
$ sudo apt update
$ sudo apt install -y azure-cli

Add .local/bin, located in your HOME Directory, to your PATH for permanence:

$ cat << 'EOF' >> ~/.bashrc
>
> export PATH=$PATH:~/.local/bin
> EOF
$ export PATH=$PATH:~/.local/bin 

Ensure version output is shown:

$ aws --version
$ eksctl version
$ kubectl version 2>/dev/null
$ docker --version
$ docker-compose --version
$ az --version

References:

Source:
home_pershoot

Overview – DEVOPS?!

DEVOPS?!

While not a new concept, labelling it as such is recent, and the methodologies have been adjusted/updated as well.

Perhaps you have a background such as mine, coming from the old world where one performed these responsibilities using custom shell scripts, proprietary Content Management/Versioning Systems, raw shared mounts to promote/copy to stages, deploying templated systems on to server farms by hand, etc. These days, there are many tools and systems available to streamline these processes.

Gone are the days of rigid compartmentalization; what has risen in their place are Developers performing Operations and Operations performing Development. This creates a cohesive unit that gets things prim and proper at an accelerated rate before landing in Production. Attaining minimalism is a goal to strive for.

In this section I take you through an up-skill and assimilation into ‘modern’ Development Operations as it pertains to the Cloud platform/framework.