ChatGPT/Ansible – Send queries and receive responses using Infrastructure as Code

ChatGPT is an AI-powered chatbot designed to provide natural language generations and follow-up questions to enable users to have natural, free-flowing conversations. It is powered by OpenAI’s GPT-3 AI language model, and its goal is to enable people to have natural conversations with AI-driven chatbots.

The above was written using ChatGPT.

In this article we will use Ansible (Infrastructure as Code) to query ChatGPT and receive responses. We will use Elchico2007’s collection and OpenAI’s Python module to accomplish this.

We will use the same base path of ‘dev’ that was previously created, and use ~/.local/bin for certain binaries.
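
If ~/.local/bin is not already on your PATH, you can append it for the current shell (a minimal check; adjust to your own shell setup):

$ echo "$PATH" | tr ':' '\n' | grep -qx "$HOME/.local/bin" || export PATH="$PATH:$HOME/.local/bin"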

Please Sign up to OpenAI’s ChatGPT here.

–>
Go into the dev directory/link located within your home directory:

$ cd ~/dev

Install/Upgrade Ansible:

$ pip3 install ansible --upgrade --user && chmod 754 ~/.local/bin/ansible ~/.local/bin/ansible-playbook ~/.local/bin/ansible-galaxy

Install/Upgrade OpenAI’s module:

$ pip3 install openai --upgrade --user

Install/Upgrade JMESPath (so we may use json_query to parse output):

$ pip3 install jmespath --upgrade --user

Create an Ansible work folder and change into the base path:

$ mkdir -p ansible/chatgpt/inventory && cd ansible/chatgpt

Create an Ansible configuration in this space, which points the collection install at a local ./collections path:

$ cat << 'EOF' > ansible.cfg
> [defaults]
> collections_paths = ./collections
> EOF

Install Elchico2007’s ChatGPT collection:

$ ansible-galaxy collection install elchico2007.chatgpt
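
Optionally, confirm the collection is visible (it should be installed under ./collections per the configuration above):

$ ansible-galaxy collection list 2>/dev/null | grep -i chatgpt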

Create an Ansible inventory, which adds a local group, lists your local host and specifies the connection to be local:

$ cat << 'EOF' > inventory/static-hostname
> [local]
> localhost ansible_connection=local
> EOF

Create an Ansible playbook, which will query ChatGPT and print the response from it:

$ cat << 'EOF' > chatgpt.yml
> # Query ChatGPT and receive responses
> ---
> - hosts: local
> 
>   tasks:
>     - name: Query ChatGPT
>       elchico2007.chatgpt.gpt3:
>         api_key: "{{ lookup('env', 'CHATGPT_API_KEY', default='') }}"
>         model: "{{ lang_model | d('text-davinci-003', true) }}"
>         input: "{{ chatgpt_query | d('What is ChatGPT?', true) }}"
>         instruction: "{{ perform_action | d('', true) }}"
>       register: chatgpt
>
>     - name: Output ChatGPT's response
>       debug:
>         msg: "{{ chatgpt.output | json_query('choices[].text') }}"
>       when: chatgpt
> EOF
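
The json_query filter in the playbook extracts the text of each choice from the module's output. Assuming the module returns a standard OpenAI completions payload, the shape being parsed can be illustrated locally with jq (if installed); this is only an illustration, not part of the playbook run:

$ echo '{"choices":[{"text":"Hello from ChatGPT"}]}' | jq -r '.choices[].text'
Hello from ChatGPT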

‘Create a new secret key’ here and take note of it.

Run the default query (replace <API key> with the API key you received from ‘Create a new secret key’):

$ CHATGPT_API_KEY=<API key> ansible-playbook -i inventory/ chatgpt.yml

It should return similarly:

TASK [Output ChatGPT's response] ***************************************************************************************
ok: [localhost] => {
    "msg": [
        "ChatGPT is an AI-powered chatbot designed to provide natural language generations and follow-up questions to enable users to have natural, free-flowing conversations. It is powered by OpenAI's GPT-3 AI language model, and its goal is to enable people to have natural conversations with AI-driven chatbots."
    ]
}

Ask it a question (replace <API key> with the API key you received from ‘Create a new secret key’):

$ CHATGPT_API_KEY=<API key> ansible-playbook -i inventory/ chatgpt.yml -e 'chatgpt_query="What is Droid Basement?"'

It should return similarly:

TASK [Output ChatGPT's response] ***************************************************************************************
ok: [localhost] => {
    "msg": [
        "Droid Basement is an Android enthusiast blog founded in 2012. It provides users with the latest news, reviews and information on Android devices, applications, and accessories. The blog includes tutorials and guides, development resources, and other Android-related content."
    ]
}

Prompt it to perform a correction for you (replace <API key> with the API key you received from ‘Create a new secret key’):

$ CHATGPT_API_KEY=<API key> ansible-playbook -i inventory/ chatgpt.yml -e 'lang_model=text-davinci-edit-001 chatgpt_query="I lick teeching." perform_action="Fix my grammar"'

It should return similarly:

TASK [Output ChatGPT's response] ***************************************************************************************
ok: [localhost] => {
    "msg": [
        "I like teaching."
    ]
}

You can set your API key in an environment variable so it (CHATGPT_API_KEY) does not need to be specified when executing ‘ansible-playbook’ (replace <API key> with the API key you received from ‘Create a new secret key’):

$ export CHATGPT_API_KEY="<API key>"

To unset the environment variable:

$ unset CHATGPT_API_KEY

<–

Source:

elchico2007.chatgpt

ChatGPT/Terraform – Send queries and receive responses using Infrastructure as Code

ChatGPT is an AI-powered chatbot developed by OpenAI. It uses natural language processing technology to generate intelligent, personalized responses to user queries in real-time. It combines the power of a neural network with the natural conversational techniques used by real people.

The above was written using ChatGPT.

In this article we will use Terraform (Infrastructure as Code) to query ChatGPT and receive responses. We will use Develeap’s provider to accomplish this.

We will use the same base path of ‘dev’ that was previously created and use ~/.local/bin for certain binaries.

Please Sign up to OpenAI’s ChatGPT here.

–>
Go into the dev directory/link located within your home directory:

$ cd ~/dev

Grab/Update to the latest version of Terraform:

$ wget https://releases.hashicorp.com/terraform/1.4.2/terraform_1.4.2_linux_amd64.zip

Install Unzip if you do not have it installed:

$ sudo apt update && sudo apt -y install unzip

Unzip it to ~/.local/bin and set permissions accordingly on it (type y and hit enter to replace if upgrading, at the prompt):

$ unzip terraform_1.4.2_linux_amd64.zip -d ~/.local/bin && chmod 754 ~/.local/bin/terraform
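
Optionally, confirm the binary being picked up is the freshly unzipped one:

$ terraform version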

Create a Terraform work folder and change into the base path:

$ mkdir -p terraform/chatgpt && cd terraform/chatgpt

Pin the Terraform version to greater than or equal to 1.4:

$ cat << 'EOF' > versions.tf
> terraform {
>   required_version = ">= 1.4.0"
> }
> EOF

Set query as a variable and assign it a default value:

$ cat << 'EOF' > vars.tf
> variable "query" {
>   default = "What is ChatGPT?"
> }
> EOF

Add the ChatGPT provider from Develeap:

$ cat << 'EOF' > provider.tf
> terraform {
>   required_providers {
>     chatgpt = {
>       version = "0.0.1"
>       source  = "develeap/chatgpt"
>     }
>   }
> }
>
> provider "chatgpt" {
>   # CHATGPT_API_KEY="<API key>" terraform apply -auto-approve
> }
> EOF

Add the ChatGPT resource:

$ cat << 'EOF' > chatgpt.tf
> resource "chatgpt_prompt" "query" {
>   max_tokens = 256
>   query      = var.query
> }
> EOF

Output the response to our query:

$ cat << 'EOF' > output.tf
> output "query_result" {
>   value = chatgpt_prompt.query.result
> }
> EOF

‘Create a new secret key’ here and take note of it.

Initialize the Terraform directory:

$ terraform init
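
Optionally, validate the configuration before applying:

$ terraform validate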

Run the default query (replace <API key> with the API key you received from ‘Create a new secret key’):

$ CHATGPT_API_KEY="<API key>" terraform apply -auto-approve

It should return:

Outputs:

query_result = "ChatGPT is an AI-powered chatbot developed by OpenAI. It uses natural language processing technology to generate intelligent, personalized responses to user queries in real-time. It combines the power of a neural network with the natural conversational techniques used by real people."

Ask it a question (replace <API key> with the API key you received from ‘Create a new secret key’):

$ CHATGPT_API_KEY="<API key>" terraform apply -var "query=What is Droid Basement?" -auto-approve

It should return:

Outputs:

query_result = "Droid Basement is a website dedicated to providing Android users with tutorials on rooting, ROMs and other custom development tasks. The site also offers popular downloads, forums, and articles related to Android development."

You can set your API key in an environment variable so it (CHATGPT_API_KEY) does not need to be specified when executing ‘terraform’ (replace <API key> with the API key you received from ‘Create a new secret key’):

$ export CHATGPT_API_KEY="<API key>"

To unset the environment variable:

$ unset CHATGPT_API_KEY

<–

Source:

terraform-provider-chatgpt

Google Cloud/Terraform – Provision a Compute Engine instance using Infrastructure as Code

Note: Some of this is a duplicate of the AWS EC2 article, modified for Google Compute Engine and a recent Terraform 0.14.x version.

Google Compute Engine is the compute service in Google Cloud. It is flexible, adaptable, scalable and is able to run Virtual Machine workloads to fit almost every need.

In this article we will use Terraform (Infrastructure as Code) to swiftly bring up a Google Compute Engine instance in us-east4 on a static IP, in a new VPC, add a DNS Zone for the site in question and install docker/docker-compose on it.

We will use ‘myweb’ as an example in this article, using the same base path of ‘dev’ that was previously created and using ~/.local/bin for certain binaries and system for others.

Please use Google Cloud Free Tier prior to commencing with this article if you do not already have an account.

Note: The Compute Engine instance used here is one step above F1-Micro (which is free-tier). The reason for not going with the F1-Micro instance is that it doesn’t provide enough resources for the boot-up work to complete in a timely manner.

–>
Go into the dev directory/link located within your home directory:

$ cd ~/dev

Grab/Update to the latest version of Terraform:

$ wget https://releases.hashicorp.com/terraform/0.14.7/terraform_0.14.7_linux_amd64.zip

Install Unzip if you do not have it installed:

$ sudo apt update && sudo apt -y install unzip

Unzip it to ~/.local/bin and set permissions accordingly on it (type y and hit enter to replace if upgrading, at the prompt):

$ unzip terraform_0.14.7_linux_amd64.zip -d ~/.local/bin && chmod 754 ~/.local/bin/terraform

Install GCLOUD SDK/CLI:

Note: The below instructions assume an Ubuntu distribution and that you have prepared a development environment with the pre-requisite packages (apt-transport-https, ca-certificates and gnupg2/gnupg) from the preceding article(s):

$ echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
$ curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
$ sudo apt-get update && sudo apt-get install -y google-cloud-sdk

Initialize/Authenticate/Authorize via GCLOUD CLI:

$ gcloud auth login

You must log in to continue. Would you like to log in (Y/n)? Y

Go to the following link in your browser: https://…..

In browser: Choose an account to continue to Google Cloud SDK

In browser: Google Cloud SDK wants to access your Google Account -> Allow

Please copy this code, switch to your application and paste it there:

<COPY and PASTE back in to the terminal window>

Enter verification code: <CODE> and hit Enter.

Create a Terraform work folder, a folder for the script(s), a service account folder and change into the base path:

$ mkdir -p terraform/gcp/myweb/scripts && cd terraform/gcp/myweb && mkdir service-account 

Generate an SSH Key Pair (no password) and restrict permissions on it if you don’t already have one:

$ ssh-keygen -q -t rsa -b 2048 -N '' -f ~/.ssh/myweb && chmod 400 ~/.ssh/myweb

Pin the Terraform version to greater than or equal to 0.14:

$ cat << 'EOF' > versions.tf
> terraform {
>   required_version = ">= 0.14"
> }
> EOF

Set the default region as a variable, set prefix of myweb and create an empty ‘project’ variable:

$ cat << 'EOF' > vars.tf
> variable "region" {
>   default = "us-east4"
> }
>
> variable "prefix" {
>   default = "myweb"
> }
>
> variable "project" {
>   default = ""
> }
> EOF

Configure the google provider and use a service account:

$ cat << 'EOF' > provider.tf
> provider "google" {
>   credentials = file(pathexpand("~/dev/terraform/gcp/myweb/service-account/${var.project}.json"))
>   project = var.project
>   region  = var.region
>   zone    = "${var.region}-a"
> }
> EOF

The following is performed with this script/code:

  • create a DNS Zone of myweb.com (no A records will be added)
  • create a Virtual Private Cloud
  • add a subnet of 10.0.1.0/24 within the VPC
  • allocate a static Public IP
  • create a Firewall and add a rule for allowing SSH (port 22) Inbound
  • create a G1-Small instance based on Ubuntu 20.04, add our public key as authorized and reference an external file for the startup script (metadata_startup_script, run on Virtual Machine boot)
  • tag resources as applicable

$ cat << 'EOF' > compute_engine.tf
> # Create a DNS Zone 
> resource "google_dns_managed_zone" "myweb_zone" {
>   name        = "${var.prefix}-zone"
>   dns_name    = "${var.prefix}.com."
>   description = "${var.prefix}.com"
>   labels = {
>     site = "${var.prefix}-com"
>   }
> }
>
> # Allocate a Static Public IP
> resource "google_compute_address" "myweb_static" {
>   name = "${var.prefix}-ipv4"
> }
>
> # Create an Ubuntu Virtual Machine with key based access and run a script on boot
> resource "google_compute_instance" "myweb_vm" {
>   name         = "${var.prefix}-vm"
>   machine_type = "g1-small"
>   boot_disk {
>     initialize_params {
>       image = "ubuntu-os-cloud/ubuntu-2004-lts"
>     }
>   }
>
>   metadata = {
>     ssh-keys = "ubuntu:${file(pathexpand("~/.ssh/myweb.pub"))}"
>   }
>
>   metadata_startup_script = file(pathexpand("~/dev/terraform/gcp/myweb/scripts/install.sh"))
>
>   network_interface {
>     network = google_compute_network.myweb_vpc.self_link
>       subnetwork = google_compute_subnetwork.myweb_subn.self_link
>       access_config {
>         nat_ip = google_compute_address.myweb_static.address
>       }
>   }
> 
>   labels = {
>     site = "${var.prefix}-com"
>   }
> }
>
> # Create a Firewall and allow inbound port(s)
> resource "google_compute_firewall" "myweb_vpc" {
>   name    = "${var.prefix}-fw"
>   network = google_compute_network.myweb_vpc.name
>   allow {
>     protocol  = "tcp"
>     ports     = ["22"]
>   }
> }
>
> # Add a Subnet
> resource "google_compute_subnetwork" "myweb_subn" {
>   name          = "${var.prefix}-subn"
>   ip_cidr_range = "10.0.1.0/24"
>   region        = var.region
>   network       = google_compute_network.myweb_vpc.id
> }
>
> # Create a Virtual Private Cloud
> resource "google_compute_network" "myweb_vpc" {
>   name                    = "${var.prefix}-net"
>   auto_create_subnetworks = "false"
> }
> EOF

Output our allocated and attached static Public IP after creation:

$ cat << 'EOF' > output.tf
> output "static_public_ip" {
>    value = google_compute_instance.myweb_vm.network_interface.0.access_config.0.nat_ip
> }
> EOF

Create the shell script for metadata_startup:

$ cat << 'EOF' > scripts/install.sh
> #!/bin/bash
>
> MY_HOME="/home/ubuntu"
> export DEBIAN_FRONTEND=noninteractive
>
> # Install prereqs
> apt update
> apt install -y python3-pip apt-transport-https ca-certificates curl software-properties-common
> # Install docker
> curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
> add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
> apt update
> apt install -y docker-ce
> # Install docker-compose
> su ubuntu -c "mkdir -p $MY_HOME/.local/bin"
> su ubuntu -c "pip3 install docker-compose --upgrade --user && chmod 754 $MY_HOME/.local/bin/docker-compose"
> usermod -aG docker ubuntu
> # Add PATH
> printf "\nexport PATH=\$PATH:$MY_HOME/.local/bin\n" >> $MY_HOME/.bashrc
>
> exit 0
> EOF

Create a Google Cloud project with a random numeric suffix:

$ gcloud projects create myweb-$(tr -cd "[:digit:]" < /dev/urandom | head -c 5) --name=myweb

Assign the created project name to a variable:

$ myweb=$(gcloud projects list | grep myweb | awk '{print $1}')

Create a service account to be used by Terraform:

$ gcloud iam service-accounts create container-admin --description=gcloud-cli --project=$myweb

Create a Private key for the Service Account and download it to .JSON format:

$ gcloud iam service-accounts keys create service-account/$myweb.json --iam-account=$(gcloud iam service-accounts list --project=$myweb | grep container-admin | awk '{print $1}') --key-file-type=json

List your billing accounts:

$ gcloud beta billing accounts list

Link the newly created project with the billing Account:

$ gcloud beta billing projects link $myweb --billing-account=<ACCOUNT ID>

Enable Compute and DNS Google APIs:

$ gcloud services enable compute.googleapis.com dns.googleapis.com --project=$myweb

Add all-encompassing IAM roles for the required operations:

$ for role in 'roles/compute.networkAdmin' 'roles/dns.admin' 'roles/container.serviceAgent'; do gcloud projects add-iam-policy-binding $myweb --member="serviceAccount:$(gcloud iam service-accounts list --project=$myweb | grep container-admin | awk '{print $1}')" --role=$role; done
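
Optionally, verify the role bindings were added for the service account (this is just a read-only check):

$ gcloud projects get-iam-policy $myweb --flatten="bindings[].members" --filter="bindings.members:container-admin" --format="value(bindings.role)"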

Initialize the Terraform directory:

$ terraform init

Run a dry-run to see what will occur:

$ terraform plan -var "project=$myweb"

Provision:

$ terraform apply -var "project=$myweb" -auto-approve

Log on to the instance after a short while:

$ ssh -i ~/.ssh/myweb ubuntu@<The value of static_public_ip that was reported.  One can also use 'terraform output static_public_ip' to print it again.>

Type yes and hit enter to accept.

On the host (a short while is needed for the run-once script to complete):

$ docker --version
$ docker-compose --version
$ logout

Tear down what was created by first performing a dry-run to see what will occur:

$ terraform plan -var "project=$myweb" -destroy

Tear down the instance:

$ terraform destroy -var "project=$myweb" -auto-approve

If you would like to remove the project –>

Unlink Billing:

$ gcloud beta billing projects unlink $myweb

Delete the project:

$ gcloud projects delete $myweb --quiet

Remove the .JSON key file from Terraform:

$ rm service-account/$myweb.json

Unset the myweb variable:

$ unset myweb

<–

References:

Source:

How to install Google Cloud SDK in Linux (Ubuntu, CentOS)

Get Started on Google Cloud with CLI

Azure/AWS – Federated login from AAD to the AWS Console

Federated Identity allows us to access various systems with a single, trusted authentication token. It provides us with SSO (Single Sign-On) and allows us to enter systems without repeated password prompts.

In this article we will setup Federated Access to the AWS Console from Microsoft Azure (via Azure Active Directory), giving us SSO capability.

We will be using the SystemAdministrator AWS Managed Policy and the user will be container_admin to coincide with the rest of the articles in this section.

Login to the AWS Console as an Administrator with Full access and/or the root account.
Login to Microsoft Azure as Global Administrator of the Tenant you are wanting to SSO from.

Enable AWS Organizations (if you have already enabled organizations then Settings -> Enable all features):
AWS -> All Services -> AWS Organizations -> Create organization -> Enable all features

Check your email and click the link sent from AWS to finalize the change of enabling all features

Enable AWS SSO:
AWS -> All Services -> AWS Single Sign-On -> Enable AWS SSO

Create the Azure user you are wanting to use for SSO:
Azure Active Directory -> Users | All users -> New user -> Create user -> User name: container_admin -> Name: container admin -> First name: container -> Last name: admin -> Password -> Auto-generate password
-> Tick Show Password (Take note of the password) -> Create

Add the AWS Gallery application in to Azure AD via Enterprise applications:
Azure -> Azure Services -> Azure Active Directory -> Enterprise applications (Manage) -> All applications (Manage) -> New application -> Cloud platforms: Amazon Web Services (AWS) (black bordered box) -> Create

Enterprise Application | All applications -> Amazon Web Services (AWS) -> Single sign-on (Manage) -> SAML -> Click Yes on Save single sign-on setting popup (Identifier (Entity ID) and Reply URL)

Create a SAML (Security Assertion Markup Language) certificate and activate it:
SAML Signing Certificate -> Edit -> Add a certificate -> New Certificate -> Save -> Select the toolbar on the certificate -> Make certificate active -> Yes -> Close pane

Dismiss the ‘Test single sign-on with Amazon Web Services (AWS)’ popup (i.e. No, I’ll test later).

Download the Federation data for the later created Identity Provider in AWS:
SAML Signing Certificate -> Download Federation Metadata XML

Modify the Federated SSO session to 3 hours (from 15 minutes (900 seconds)):
User Attributes & Claims -> Edit -> Select the SessionDuration Claim name -> Source attribute -> “10800” -> Save

Create the Identity Provider:
AWS -> IAM -> Identity Providers -> Create Provider -> Provider Type -> SAML -> Provider Name: AzureAD -> Metadata Document -> Choose File (the downloaded Federated Metadata XML file) -> Next Step -> Create

Create the role and assign the SystemAdministrator AWS managed policy to it:
IAM -> Roles -> Create role -> SAML 2.0 federation -> SAML provider -> AzureAD -> Allow programmatic and AWS Management Console access -> Next: Permissions -> select SystemAdministrator -> Next: Tags -> Next: Review -> Role name: SystemAdministrator -> Create role

Create a policy which will allow a fetch of roles:
IAM -> Policies -> Create policy -> JSON ->

 {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
            "iam:ListRoles"
            ],
            "Resource": "*"
        }
    ]
}

-> Review policy -> Name: AzureAD_SSOUserRole_Policy -> Description: This policy will allow fetching the roles from AWS accounts. -> Create policy
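
If you prefer the AWS CLI over the console, the same policy can be created by saving the JSON above to a file (AzureAD_SSOUserRole_Policy.json is just an example name) and running:

$ aws iam create-policy --policy-name AzureAD_SSOUserRole_Policy --policy-document file://AzureAD_SSOUserRole_Policy.json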

Create a user (to be used in Azure’s AWS Gallery application for Provisioning) and assign it the previously created policy:
IAM -> Users -> Add user -> User name: AzureADRoleManager -> select Programmatic access -> Next: Permissions -> Attach existing policies directly -> select AzureAD_SSOUserRole_Policy -> Next: Tags -> Next: Review -> Create user -> Take note of the Access ID and the Secret Access key

Configure Provisioning:
Azure -> Enterprise applications | All applications -> Amazon Web Services (AWS) -> Provisioning -> Get started -> Provisioning Mode: Automatic -> Admin Credentials: clientsecret (enter in the Access ID) -> Secret Token: (enter in the Secret Access Key) -> Test Connection -> Save -> Provisioning Status: On -> Save

Wait some time for the Incremental cycle to be completed (it should show finished shortly thereafter but you should wait another 15 minutes or so).

Assign a user to the AWS gallery application:
Users and groups -> Add user -> Users and groups -> select container_admin -> select -> Select a role -> SystemAdministrator,AzureAD -> Select -> Assign

Note: If you see DefaultAccess and/or you are not able to select the role, assign the user then go back in and assign the role (it must say SystemAdministrator,AzureAD)

In a browser:
https://myapplications.microsoft.com/ -> Login as container_admin@TENANT.onmicrosoft.com -> Change the password -> Click on the Amazon Web Services icon -> click on container_admin@TENANT.onmicrosoft.com in the popup list -> You will be taken to the AWS Console

Up top, select the user and in the dropdown you will see:
Federated Login: SystemAdministrator/container_admin@TENANT.onmicrosoft.com

Source:
amazon-web-service-tutorial

AWS/Terraform/Ansible/OpenShift – Provision an EC2 instance and further configure it using Infrastructure as Code

Note: This is a duplicate of the AWS Lightsail article, modified for EC2 with some additional amendments.

In this article we will Provision an EC2 host with docker/docker-compose on it using Terraform and install/initialize OpenShift Origin on it using Ansible.

OpenShift is Red Hat’s containerization platform which utilizes Kubernetes. Origin (what we will be working with here) is the open-source implementation of it.

We will use ‘myweb’ as an example in this article, using the same base path of ‘dev’ that was previously created, the container-admin group and using ~/.local/bin for the binaries.

Please ensure you have gone through the previous Terraform, Ansible and related preceding articles.

Please use AWS Free Tier prior to commencing with this article.

–>
Go into the dev directory/link located within your home directory:

$ cd ~/dev

Update PIP:

$ python3 -m pip install --upgrade --user pip

If there was an update, then forget remembered location references in the shell environment:

$ hash -r pip 

Upgrade the AWS CLI on your host:

$ pip3 install awscli --upgrade --user && chmod 754 ~/.local/bin/aws

Install/Upgrade Ansible:

$ pip3 install ansible --upgrade --user && chmod 754 ~/.local/bin/ansible ~/.local/bin/ansible-playbook

Install/Upgrade Boto3:

$ pip3 install boto3 --upgrade --user

Grab the latest version of Terraform:

$ wget https://releases.hashicorp.com/terraform/0.12.23/terraform_0.12.23_linux_amd64.zip

Unzip it to ~/.local/bin and set permissions accordingly on it (type y and hit enter to replace if upgrading, at the prompt):

$ unzip terraform_0.12.23_linux_amd64.zip -d ~/.local/bin && chmod 754 ~/.local/bin/terraform

Change to the myweb directory inside terraform/aws:

$ cd terraform/aws/myweb

Change our instance from a micro to a medium, so it will have sufficient resources to run OpenShift Origin and related:

$ sed -i s:t3a.micro:t3a.medium: ec2.tf

Output the Public IP of the provisioned host (along with connection parameters and variables) into a file which we will feed into an Ansible playbook run.

Note: Please re-create the file if you have gone through the previous Terraform articles:

$ cat << 'EOF' > output.tf
> output "static_public_ip" {
>   value = var.lightsail ? element(aws_lightsail_static_ip.myweb[*].ip_address, 0) : element(aws_eip.external[*].public_ip, 0)
> }
>
> resource "local_file" "hosts" {
>   content              = trimspace("[vps]\n${var.lightsail ? element(aws_lightsail_static_ip.myweb[*].ip_address, 0) : element(aws_eip.external[*].public_ip, 0)} ansible_connection=ssh ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/${var.prefix} instance=${var.lightsail ? element(aws_lightsail_instance.myweb[*].name, 0) : element(aws_instance.myweb[*].tags["Name"], 0)} ${var.lightsail ? "" : "instance_sg=${element(aws_security_group.myweb[*].name, 0)}"} ${var.lightsail ? "" : "instance_sg_id=${element(aws_security_group.myweb[*].id, 0)}"} ${var.lightsail ? "" : "instance_vpc_id=${element(aws_vpc.myweb[*].id, 0)}"}")
>   filename             = pathexpand("~/dev/ansible/hosts-aws")
>   directory_permission = "0754"
>   file_permission      = "0664"
> }
> EOF

Amend an item from the user_data script (if you have gone through the AWS/Terraform/Ansible/OpenShift against Lightsail article then this can be disregarded):

$ sed -i 's:sudo apt-key add -:apt-key add -:' scripts/install.sh

Initialize the directory/refresh module(s):

$ terraform init

Run a dry-run to see what will occur:

$ terraform plan -var 'lightsail=false'

Provision:

$ terraform apply -var 'lightsail=false' -auto-approve
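
The local_file resource in output.tf writes the Ansible inventory during the apply; it should look roughly like the following (the IP, instance name and IDs below are placeholders):

$ cat ~/dev/ansible/hosts-aws
[vps]
<public ip> ansible_connection=ssh ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/myweb instance=<instance name> instance_sg=<security group name> instance_sg_id=<security group id> instance_vpc_id=<vpc id>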

Create a work folder for an Ansible playbook:

$ cd ../../../ansible
$ mkdir -p openshift/scripts && cd openshift

Create an Ansible playbook which will install/initialize OpenShift Origin on our provisioned host.

Note: This accommodates our previous implementation against AWS Lightsail and Microsoft Azure VM:

$ cat << 'EOF' > openshift.yml 
> # Install, initialize OpenShift Origin and create a destroy routine for it
> # This is a unified setup against AWS Lightsail, Microsoft Azure VM and AWS EC2
> ---
> - hosts: vps
>   connection: local
>
>   vars:
>     network_security_group: "{{ hostvars[groups['vps'][0]].instance_nsg }}"
>     instance: "{{ hostvars[groups['vps'][0]].instance }}"
>     resource_group: "{{ hostvars[groups['vps'][0]].instance_rg }}"
>     security_group: "{{ hostvars[groups['vps'][0]].instance_sg }}"
>     security_group_id: "{{ hostvars[groups['vps'][0]].instance_sg_id }}"
>     virtual_private_cloud_id: "{{ hostvars[groups['vps'][0]].instance_vpc_id }}"
>     openshift_directory: /home/ubuntu/.local/etc/openshift
>     ansible_python_interpreter: /usr/bin/python3
>
>   tasks:
>     - name: Discover Services
>       service_facts:
>
>     - name: Check if openshift directory exists
>       stat:
>         path: "{{ openshift_directory }}"
>       register: openshift_dir
>       tags: [ 'destroy' ]
>
>     - name: Open Firewall Ports (AWS Lightsail)
>       delegate_to: localhost
>       args:
>         executable: /bin/bash
>       script: "./scripts/firewall.sh open {{ instance }}"
>       when:
>         - "'instance_nsg' not in hostvars[groups['vps'][0]]"
>         - "'instance_sg' not in hostvars[groups['vps'][0]]"
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>
>     - name: Add Network Security Group rules (Microsoft Azure VM)
>       delegate_to: localhost
>       azure_rm_securitygroup:
>         name: "{{ network_security_group }}"
>         resource_group: "{{ resource_group }}"
>         rules:
>          - name: OpenShift-Tcp
>            priority: 1002
>            direction: Inbound
>            access: Allow
>            protocol: Tcp
>            source_port_range: "*"
>            destination_port_range:
>              - 80
>              - 443
>              - 1936
>              - 4001
>              - 7001
>              - 8443
>              - 10250-10259
>            source_address_prefix: "*"
>            destination_address_prefix: "*"
>          - name: OpenShift-Udp
>            priority: 1003
>            direction: Inbound
>            access: Allow
>            protocol: Udp
>            source_port_range: "*"
>            destination_port_range:
>              - 53
>              - 8053
>            source_address_prefix: "*"
>            destination_address_prefix: "*"
>         state: present
>       when:
>         - "'instance_nsg' in hostvars[groups['vps'][0]]"
>         - "'instance_sg' not in hostvars[groups['vps'][0]]"
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>
>     - name: Add Security Group rules (AWS EC2)
>       delegate_to: localhost
>       ec2_group:
>         name: "{{ security_group }}"
>         description: OpenShift
>         vpc_id: "{{ virtual_private_cloud_id }}"
>         purge_rules: no
>         rules:
>          - proto: tcp
>            ports:
>              - 80
>              - 443
>              - 1936
>              - 4001
>              - 7001
>              - 8443
>              - 10250-10259
>            cidr_ip: 0.0.0.0/0
>            rule_desc: OpenShift-Tcp
>          - proto: udp
>            ports:
>              - 53
>              - 8053
>            cidr_ip: 0.0.0.0/0
>            rule_desc: OpenShift-Udp
>         state: present
>       when:
>         - "'instance_nsg' not in hostvars[groups['vps'][0]]"
>         - "'instance_sg' in hostvars[groups['vps'][0]]"
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>
>     - name: Copy and Run install
>       environment:
>         PATH: "{{ ansible_env.PATH }}:{{ openshift_directory }}/../../bin"
>       args:
>         executable: /bin/bash
>       script: "./scripts/install.sh {{ ansible_ssh_host }}"
>       when:
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>
>     - debug: msg="Please install docker to proceed."
>       when: "'docker' not in services"
>
>     - debug: msg="Install script has already been completed.  Run this playbook with the destroy tag, then run once again normally to re-initialize openshift."
>       when: openshift_dir.stat.exists == True
>
>     - name: Destroy
>       become: yes
>       environment:
>         PATH: "{{ ansible_env.PATH }}:{{ openshift_directory }}/../../bin"
>       args:
>         executable: /bin/bash
>       shell:
>         "cd {{ openshift_directory }} && oc cluster down && cd ../ && rm -rf {{ openshift_directory }}/../../../.kube {{ openshift_directory }}"
>       when: openshift_dir.stat.exists == True
>       tags: [ 'never', 'destroy' ]
>
>     - name: Close Firewall Ports (AWS Lightsail)
>       delegate_to: localhost
>       args:
>         executable: /bin/bash
>       script: "./scripts/firewall.sh close {{ instance }}"
>       when:
>         - "'instance_nsg' not in hostvars[groups['vps'][0]]"
>         - "'instance_sg' not in hostvars[groups['vps'][0]]"
>       tags: [ 'never', 'destroy' ]
>
>     - name: Delete Network Security Group rules (Microsoft Azure VM)
>       delegate_to: localhost
>       command:
>         bash -ic "az-login-sp && (az network nsg rule delete -g {{ resource_group }} --nsg-name {{ network_security_group }} -n {{ item }})"
>       with_items:
>         - OpenShift-Tcp
>         - OpenShift-Udp
>       when:
>         - "'instance_nsg' in hostvars[groups['vps'][0]]"
>         - "'instance_sg' not in hostvars[groups['vps'][0]]"
>       tags: [ 'never', 'destroy' ]
>
>     - name: Delete Security Group rules (AWS EC2)
>       delegate_to: localhost
>       command:
>         bash -c "[[ {{ item }} -eq 53 || {{ item }} -eq 8053 ]] && protocol=udp || protocol=tcp && aws ec2 revoke-security-group-ingress --group-id {{ security_group_id }} --port {{ item }} --protocol $protocol --cidr 0.0.0.0/0"
>       with_items:
>         - 80
>         - 443
>         - 1936
>         - 4001
>         - 7001
>         - 8443
>         - 10250-10259
>         - 53
>         - 8053
>       when:
>         - "'instance_nsg' not in hostvars[groups['vps'][0]]"
>         - "'instance_sg' in hostvars[groups['vps'][0]]"
>       tags: [ 'never', 'destroy' ]
> EOF
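
Optionally, verify that the playbook parses cleanly before running it:

$ ansible-playbook -i ../hosts-aws openshift.yml --syntax-check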

Create a shell script which will pull the latest release of client tools from GitHub, place the needed binaries in ~/.local/bin, set insecure registry on Docker and initialize (if you have gone through the AWS/Terraform/Ansible/OpenShift against Lightsail article then this can be disregarded):

$ cat << 'EOF' > scripts/install.sh
> #!/bin/bash
> [[ -z $* ]] && { echo "Please specify a Public IP or Host/Domain name." && exit 1; }
> # Fetch and Install
> file_url="$(curl -sL https://github.com/openshift/origin/releases/latest | grep "download.*client.*linux-64" | cut -f2 -d\" | sed 's/^/https:\/\/github.com/')"
> [[ -z $file_url ]] && { echo "The URL could not be obtained.  Please try again shortly." && exit 1; }
> file_name="$(echo $file_url | cut -f9 -d/)"
> if [[ ! -f $file_name ]]; then
>         curl -sL $file_url --output $file_name
>         folder_name="$(tar ztf $file_name 2>/dev/null | head -1 | sed s:/.*::)"
>         [[ -z $folder_name ]] && { echo "The archive could not be read.  Please try again." && rm -f $file_name && exit 1; }
>         tar zxf $file_name
>         mv $folder_name/oc $folder_name/kubectl $HOME/.local/bin && rm -r $folder_name
>         chmod 754 $HOME/.local/bin/oc $HOME/.local/bin/kubectl
> fi
> # Docker insecure
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 2 ]] && redirect=">"
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 1 ]] && redirect=">>"
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 0 ]] || { sudo bash -c "cat << 'EOF' $redirect /etc/docker/daemon.json
> {
>         \"insecure-registries\" : [ \"172.30.0.0/16\" ]
> }
> EOF" && sudo systemctl restart docker; }
> # OpenShift Origin up
> [[ ! -d $HOME/.local/etc/openshift ]] && { mkdir -p $HOME/.local/etc/openshift && cd $HOME/.local/etc/openshift; } || { cd $HOME/.local/etc/openshift && oc cluster down; }
> oc cluster up --public-hostname=$1
>
> exit 0
> EOF 

Note: If you have already gone through the AWS/Terraform/Ansible/OpenShift for Lightsail article or you don’t want to use Lightsail, then this can be disregarded.

The Lightsail firewall functionality is currently being implemented in Terraform and is not available in Ansible. In the interim, we will create a shell script to open and close ports needed by OpenShift Origin (using the AWS CLI). This script will be run locally via the Playbook during the create and destroy routines.

Note2: Port 80 is already open when the Lightsail host is provisioned:

$ cat << 'EOF' > scripts/firewall.sh && chmod 754 scripts/firewall.sh
> #!/bin/bash
> #
> openshift_ports="53/UDP 443/TCP 1936/TCP 4001/TCP 7001/TCP 8053/UDP 8443/TCP 10250_10259/TCP"  
> #
> [[ -z $* || $(echo $* | xargs -n1 | wc -l) -ne 2 || ! ($* =~ $(echo '\<open\>') || $* =~ $(echo '\<close\>')) ]] && { echo "Please pass in the desired action [ open, close ] and instance [ site_myweb ]." && exit 2; }
> #
> instance="$(echo $* | xargs -n1 | sed '/\<open\>/d; /\<close\>/d')"
> [[ -z $instance ]] && { echo "Please double-check the passed in instance." && exit 1; }
> action="$(echo $* | xargs -n1 | grep -v $instance)"
> #
> for port in $openshift_ports; do
>         aws lightsail $action-instance-public-ports --instance $instance --port-info fromPort=$(echo $port | cut -f1 -d_ | cut -f1  -d/),protocol=$(echo $port | cut -f2 -d/),toPort=$(echo $port | cut -f2 -d_ | cut -f1 -d/)
> done
> #
>
> exit 0
> EOF 

Run the Ansible playbook after a few minutes (accept the host key by typing yes and hitting enter when prompted):

$ ansible-playbook -i ../hosts-aws openshift.yml

Note: Disregard the warning regarding the mismatched description on the Security Group. The description will not be modified, so the original was not exported for use here.

Note2: If a Terraform apply is run again after the security group modification (addition of rules for OpenShift), then those rules will be destroyed. In that case, please run a Playbook destroy then run again to reinitialize.

After a short while, log on to the instance:

$ ssh -i ~/.ssh/myweb ubuntu@<The value of static_public_ip that was reported.  One can also use 'terraform output static_public_ip' to print it again.>

To get an overview of the current project with any identified issues:

$ oc status --suggest

Log on as Admin via CMD Line and switch to the default project:

$ oc login -u system:admin -n default

Logout of the session:

$ oc logout

Please see the Command-Line Walkthrough.

Logout from the host:

$ logout

Log on as Admin via Web Browser (replace <PUBLIC_IP>):

https://<PUBLIC_IP>:8443/console (You will get a Certificate/Site warning due to a mismatch).

Please see the Web Console Walkthrough.

To shut down the OpenShift Origin cluster, destroy the working folder and start anew (you can re-run the playbook normally to reinitialize):

$ ansible-playbook -i ../hosts-aws openshift.yml --tags "destroy"

Tear down what was created by first performing a dry-run to see what will occur:

$ cd ../../terraform/aws/myweb && terraform plan -var 'lightsail=false' -destroy 

Tear down the instance:

$ terraform destroy -var 'lightsail=false' -auto-approve

<–

References:
how-to-install-openshift-origin-on-ubuntu-18-04

Source:
ansible_openshift

AWS/Ansible – Provision an EC2 instance using Infrastructure as Code

Note: Some of this is a duplicate of the AWS Lightsail article; modified for EC2.

EC2 is the compute service in AWS. It is flexible, adaptable, scalable and is able to run Virtual Machine workloads to fit almost every need.

In this article we will use Ansible (Infrastructure as Code) to swiftly bring up an AWS EC2 instance in us-east-1 on a static IP (Elastic IP), in a new VPC with an Internet Gateway, add a DNS Zone (Route 53) for the site in question and install docker/docker-compose on it.

We will use ‘myweb’ as an example in this article, using the same base path of ‘dev’ that was previously created, the container-admin group (some of the IAM policy implemented there will be in use here) and using ~/.local/bin|lib for the binaries/libraries.

Please use AWS Free Tier prior to commencing with this article.

–>
Go into the dev directory/link located within your home directory:

$ cd ~/dev

Install/Upgrade Ansible:

$ pip3 install ansible --upgrade --user && chmod 754 ~/.local/bin/ansible ~/.local/bin/ansible-playbook

Install/Upgrade Boto3:

$ pip3 install boto3 --upgrade --user

Install/Upgrade Boto (required by ec2_eip):

$ pip3 install boto --upgrade --user

Create a work folder and change into it:

$ mkdir -p ansible/myweb/scripts && cd ansible/myweb

Add an IAM Policy to the container-admin group so it will have access to EC2 and related (EIP/VPC/Routes/IGW/Route 53/SG/KeyPair):
AWS UI Console -> Services -> Security, Identity, & Compliance -> IAM -> Policies -> Create Policy -> JSON (replace <AWS ACCOUNT ID> in the Resource arn with your Account’s ID (shown under the top right drop-down (of your name) within the My Account page next to the Account Id: under Account Settings)).

Note: This is identical to the section in the AWS/Terraform article, but adds an allowance for route53:ListHostedZones, ec2:DescribeInstanceStatus and ec2:UpdateSecurityGroupRuleDescriptionsEgress:

 {
     "Version": "2012-10-17",
     "Statement": [
         {
             "Effect": "Allow",
             "Action": [
                 "ec2:UpdateSecurityGroupRuleDescriptionsEgress",
                 "ec2:TerminateInstances",
                 "route53:GetChange",
                 "route53:GetHostedZone",
                 "route53:ChangeTagsForResource",
                 "route53:DeleteHostedZone",
                 "route53:ListTagsForResource" 
             ],
             "Resource": [
                "arn:aws:ec2:*:<AWS ACCOUNT ID>:security-group/",
                "arn:aws:ec2:*:<AWS ACCOUNT ID>:instance/*",
                "arn:aws:route53:::hostedzone/*",
                "arn:aws:route53:::change/*"
             ]      
         },
         {
             "Effect": "Allow",
             "Action": [
                 "ec2:DisassociateAddress",
                 "ec2:DeleteSubnet",
                 "ec2:DescribeAddresses",
                 "ec2:DescribeInstances",
                 "ec2:DescribeInstanceAttribute",
                 "ec2:CreateVpc",
                 "ec2:AttachInternetGateway",
                 "ec2:DescribeVpcAttribute",
                 "ec2:AssociateRouteTable",
                 "ec2:DescribeInternetGateways",
                 "ec2:DescribeNetworkInterfaces",
                 "ec2:CreateInternetGateway",
                 "ec2:CreateSecurityGroup",
                 "ec2:DescribeVolumes",
                 "ec2:DescribeAccountAttributes",
                 "ec2:ModifyVpcAttribute",
                 "ec2:DescribeKeyPairs",
                 "ec2:DescribeNetworkAcls",
                 "ec2:DescribeRouteTables",
                 "ec2:DescribeInstanceStatus",
                 "ec2:ReleaseAddress",
                 "ec2:ImportKeyPair",
                 "ec2:DescribeTags",
                 "ec2:DescribeVpcClassicLinkDnsSupport",
                 "ec2:CreateRouteTable",
                 "ec2:DetachInternetGateway",
                 "ec2:DisassociateRouteTable",
                 "ec2:AllocateAddress",
                 "ec2:DescribeInstanceCreditSpecifications",
                 "ec2:DescribeSecurityGroups",
                 "ec2:DescribeVpcClassicLink",
                 "ec2:DescribeImages",
                 "ec2:DescribeVpcs",
                 "ec2:DeleteVpc",
                 "ec2:AssociateAddress",
                 "ec2:CreateSubnet",
                 "ec2:DescribeSubnets",
                 "ec2:DeleteKeyPair",
                 "route53:CreateHostedZone",
                 "route53:ListHostedZones",
                 "sts:GetCallerIdentity"
             ],
             "Resource": "*"
         }
     ]
 }

Review Policy ->

Name: AllowEC2
Description: Allow access to EC2 and related.

Create Policy.

Groups -> container-admin -> Attach Policy -> Search for AllowEC2 -> Attach Policy.
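
Optionally, confirm the attachment from the AWS CLI (a read-only check; your credentials need permission to list group policies):

$ aws iam list-attached-group-policies --group-name container-admin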

Generate an SSH Key Pair (no password) and restrict permissions on it:

$ ssh-keygen -q -t rsa -b 2048 -N '' -f ~/.ssh/myweb && chmod 400 ~/.ssh/myweb

Create a hosts file and specify localhost:

$ cat << 'EOF' > hosts
> [local]
> localhost
> EOF

The following is performed with this script/code:

  • create a Route 53 DNS Zone of myweb.com (no A records will be added)
  • create a Virtual Private Cloud for network 10.0.0.0/16 (tenancy is default)
  • add a subnet of 10.0.1.0/24 within the VPC
  • allocate a static Public IP
  • create a Security Group and add a Security rule for allowing SSH (port 22) Inbound
  • create an Internet Gateway and add a route out to it
  • create a T3a.micro instance (tenancy is default) based on Ubuntu 18.04, add our public key as authorized and reference an external file for user_data (initialization script on Virtual Machine boot). The root Elastic Block Store volume is gp2
  • DNS support is enabled but DNS host names is not
  • tag all resources

Note: assign_public_ip was needed (an assignment at boot from the Amazon pool) so as not to disrupt user_data execution, due to Elastic IP (Static) being bound a bit later:

$ cat << 'EOF' > aws_ec2.yml
> # Create an AWS EC2 instance and add a way to destroy it
> ---
> - hosts: local
>   connection: local
>
>   vars:
>     region: us-east-1
>     prefix: myweb
>     subnet_name: internal
>
>   tasks:
>   - name: Create a DNS Zone
>     route53_zone:
>       state: present
>       zone: "{{ prefix }}.com"
>       comment: "{{ prefix }}-dn"
>
>   - name: Create a Virtual Private Cloud
>     ec2_vpc_net:
>       state: present
>       name: "{{ prefix }}-vpc"
>       cidr_block: 10.0.0.0/16
>       region: "{{ region }}"
>       dns_hostnames: no
>       tenancy: default
>       tags:
>           Site: "{{ prefix }}.com"
>           Name: "{{ prefix }}-vpc"
>     register: vpc
>
>   - name: Create a Security Group and allow inbound port(s)
>     ec2_group:
>       state: present
>       name: "{{ prefix }}"
>       description: Allow Ports
>       vpc_id: "{{ vpc.vpc.id }}"
>       region: "{{ region }}"
>       rules:
>         - proto: tcp
>           from_port: 22
>           to_port: 22
>           cidr_ip: 0.0.0.0/0
>           rule_desc: SSH
>       rules_egress:
>         - proto: -1
>           from_port: 0
>           to_port: 0
>           cidr_ip: 0.0.0.0/0
>           rule_desc: All
>       tags:
>           Site: "{{ prefix }}.com"
>           Name: "{{ prefix }}-sg"
>     register: sg
>
>   - name: Add a Subnet
>     ec2_vpc_subnet:
>       state: present
>       vpc_id: "{{ vpc.vpc.id }}"
>       cidr: 10.0.1.0/24
>       region: "{{ region }}"
>       az: "{{ region }}a"
>       tags:
>           Site: "{{ prefix }}.com"
>           Name: "{{ subnet_name }}"
>     register: internal
>
>   - name: Create an Internet Gateway
>     ec2_vpc_igw:
>       state: present
>       vpc_id: "{{ vpc.vpc.id }}"
>       region: "{{ region }}"
>       tags:
>           Site: "{{ prefix }}.com"
>           Name: "{{ prefix }}-igw"
>     register: igw
>
>   - name: Add a route to the Internet Gateway
>     ec2_vpc_route_table:
>       state: present
>       vpc_id: "{{ vpc.vpc.id }}"
>       region: "{{ region }}"
>       subnets: "{{ internal.subnet.id }}"
>       routes:
>         - dest: 0.0.0.0/0
>           gateway_id: "{{ igw.gateway_id }}"
>       tags:
>           Site: "{{ prefix }}.com"
>           Name: "{{ prefix }}-rt"
>
>   - name: Add Public Key as authorized
>     ec2_key:
>       state: present
>       name: "{{ prefix }}"
>       key_material: "{{ lookup('file', '~/.ssh/{{ prefix }}.pub') }}"
>       region: "{{ region }}"
>
>   - name: Select Ubuntu 18.04
>     ec2_ami_info:
>       region: "{{ region }}"
>       owners: 099720109477 # Canonical
>       filters:
>         name: "ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"
>     register: ec2_ami
>
>     # Get the latest Ubuntu 18.04 AMI
>   - set_fact:
>       ec2_ami_latest: "{{ ec2_ami.images | selectattr('name', 'defined') | sort(attribute='creation_date') | last }}"
>
>   - name: Create an Ubuntu Virtual Machine with key based access and run a script on boot
>     ec2_instance:
>       state: present
>       name: "{{ prefix }}-ec2"
>       key_name: "{{ prefix }}"
>       region: "{{ region }}"
>       instance_type: t3a.micro
>       image_id: "{{ ec2_ami_latest.image_id }}"
>       security_group: "{{ sg.group_id }}"
>       network:
>         assign_public_ip: true
>       wait: yes
>       wait_timeout: 500
>       vpc_subnet_id: "{{ internal.subnet.id }}"
>       tenancy: default
>       user_data: "{{ lookup('file', './scripts/install.sh') }}"
>       tags:
>           Site: "{{ prefix }}.com"
>     register: ec2
>
>   - name: Allocate and Associate a Static Public IP
>     ec2_eip:
>       state: present
>       region: "{{ region }}"
>       in_vpc: yes
>       reuse_existing_ip_allowed: yes
>       device_id: "{{ ec2.instance_ids[0] }}"
>     register: eip
>
>   - debug: msg="Public IP (Static) is {{ eip.public_ip }} for {{ ec2.instances[0].tags.Name }}"
>     when: eip.public_ip is defined
>
>   - debug: msg="Run this playbook for {{ ec2.instances[0].tags.Name }} shortly to Allocate, Associate and list the Static Public IP."
>     when: eip.public_ip is not defined
>
>   - name: Destroy the Elastic IP
>     # Gather EC2 info.
>     ec2_instance_info:
>       filters:
>         tag:Site: "{{ prefix }}.com"
>         tag:Name: "{{ prefix }}-ec2"
>         instance-state-name: [ "running", "present", "started", "stopped" ]
>     register: ec2
>     tags: [ 'never', 'destroy' ]
>
>     # Gather EIP info.
>   - ec2_eip_info:
>       filters:
>         instance-id: "{{ ec2.instances[0].instance_id }}"
>     register: eip
>     when: ec2.instances[0].instance_id is defined
>     tags: [ 'never', 'destroy' ]
>
>   - ec2_eip:
>       state: absent
>       region: "{{ region }}"
>       device_id: "{{ eip.addresses[0].instance_id }}"
>       release_on_disassociation: yes
>     when: eip.addresses[0].instance_id is defined
>     tags: [ 'never', 'destroy' ]
>
>   - name: Destroy the Elastic Compute 2 instance
>     ec2_instance:
>       state: absent
>       instance_ids: "{{ ec2.instances[0].instance_id }}"
>     when: ec2.instances[0].instance_id is defined
>     tags: [ 'never', 'destroy' ]
>
>   - name: Destroy the Public Key
>     ec2_key:
>       state: absent
>       name: "{{ prefix }}"
>     tags: [ 'never', 'destroy' ]
>
>   - name: Destroy the Route to the Internet Gateway
>     # Gather Route info.
>     ec2_vpc_route_table_info:
>       region: "{{ region }}"
>       filters:
>         tag:Site: "{{ prefix }}.com"
>         tag:Name: "{{ prefix }}-rt"
>     register: rt
>     tags: [ 'never', 'destroy' ]
>
>   - ec2_vpc_route_table:
>       state: absent
>       vpc_id: "{{ rt.route_tables[0].vpc_id }}"
>       region: "{{ region }}"
>       route_table_id: "{{ rt.route_tables[0].id }}"
>       lookup: id
>     when: rt.route_tables[0].vpc_id is defined
>     tags: [ 'never', 'destroy' ]
>
>   - name: Destroy the Subnet
>     # Gather Subnet info.
>     ec2_vpc_subnet_info:
>       filters:
>         tag:Site: "{{ prefix }}.com"
>         tag:Name: "{{ subnet_name }}"
>     register: internal
>     tags: [ 'never', 'destroy' ]
>
>   - ec2_vpc_subnet:
>       state: absent
>       vpc_id: "{{ internal.subnets[0].vpc_id }}"
>       cidr: "{{ internal.subnets[0].cidr_block }}"
>     when: internal.subnets[0].vpc_id is defined
>     tags: [ 'never', 'destroy' ]
>
>   - name: Destroy the Internet Gateway
>     # Gather IGW info.
>     ec2_vpc_igw_info:
>       filters:
>         tag:Site: "{{ prefix }}.com"
>         tag:Name: "{{ prefix }}-igw"
>     register: igw
>     tags: [ 'never', 'destroy' ]
>
>   - ec2_vpc_igw:
>       state: absent
>       vpc_id: "{{ igw.internet_gateways[0].attachments[0].vpc_id }}"
>       region: "{{ region }}"
>     when: igw.internet_gateways[0].attachments[0].vpc_id is defined
>     tags: [ 'never', 'destroy' ]
>
>   - name: Destroy the Security Group
>     # Gather SG info.
>     ec2_group_info:
>       filters:
>         tag:Site: "{{ prefix }}.com"
>         tag:Name: "{{ prefix }}-sg"
>     register: sg
>     tags: [ 'never', 'destroy' ]
>
>   - ec2_group:
>       state: absent
>       group_id: "{{ sg.security_groups[0].group_id }}"
>     when: sg.security_groups[0].group_id is defined
>     tags: [ 'never', 'destroy' ]
>
>   - name: Destroy the Virtual Private Cloud
>     # Gather VPC info.
>     ec2_vpc_net_info:
>       filters:
>         tag:Site: "{{ prefix }}.com"
>         tag:Name: "{{ prefix }}-vpc"
>     register: vpc
>     tags: [ 'never', 'destroy' ]
>
>   - ec2_vpc_net:
>       state: absent
>       name: "{{ prefix }}-vpc"
>       cidr_block: "{{ vpc.vpcs[0].cidr_block }}"
>       region: "{{ region }}"
>     when: vpc.vpcs[0].cidr_block is defined
>     tags: [ 'never', 'destroy' ]
>
>   - name: Destroy the DNS Zone
>     route53_zone:
>       state: absent
>       zone: "{{ prefix }}.com"
>     tags: [ 'never', 'destroy' ]
> EOF 
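
Optionally, list the tags defined in the playbook before running it (the destroy routine is tagged 'never' and 'destroy'):

$ ansible-playbook -i hosts aws_ec2.yml --list-tags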

Create the shell script for user_data.

Note: If you have gone through the AWS/Ansible against Lightsail article, then this can be disregarded:

$ cat << 'EOF' > scripts/install.sh
> #!/bin/bash
>
> MY_HOME="/home/ubuntu"
> export DEBIAN_FRONTEND=noninteractive
>
> # Install prereqs
> apt update
> apt install -y python3-pip apt-transport-https ca-certificates curl software-properties-common
> # Install docker
> curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
> add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
> apt update
> apt install -y docker-ce
> # Install docker-compose
> su ubuntu -c "mkdir -p $MY_HOME/.local/bin"
> su ubuntu -c "pip3 install docker-compose --upgrade --user && chmod 754 $MY_HOME/.local/bin/docker-compose"
> usermod -aG docker ubuntu
> # Add PATH
> printf "\nexport PATH=\$PATH:$MY_HOME/.local/bin\n" >> $MY_HOME/.bashrc
>
> exit 0
> EOF

Run the playbook:

$ ansible-playbook -i hosts aws_ec2.yml

Log on to the instance after a short while:

$ ssh -i ~/.ssh/myweb ubuntu@<The value of public_ip that was reported.  One can also re-run the playbook to print it again.>

Type yes and hit enter to accept.

On the host (a short while is needed for the run-once script to complete):

$ docker --version
$ docker-compose --version
$ logout

Tear down the instance:

$ ansible-playbook -i hosts aws_ec2.yml --tags "destroy"

<–

References:

Source:
ansible_myweb

AWS/Terraform – Provision an EC2 instance using Infrastructure as Code

Note: Some of this is a duplicate of the AWS Lightsail article, modified for EC2 and a recent Terraform 0.12.x version, and it accommodates the previous Lightsail implementation. If you would like to use Lightsail, please follow the IAM-specific instructions in that article.

EC2 is the compute service in AWS. It is flexible, adaptable, scalable and is able to run Virtual Machine workloads to fit almost every need.

In this article we will use Terraform (Infrastructure as Code) to swiftly bring up an AWS EC2 instance in us-east-1 on a static IP (Elastic IP), in a new VPC with an Internet Gateway, add a DNS Zone (Route 53) for the site in question and install docker/docker-compose on it.

We will use ‘myweb’ as an example in this article, using the same base path of ‘dev’ that was previously created, the container-admin group (some of the IAM policy implemented there will be in use here) and using ~/.local/bin for the binary.

Please use AWS Free Tier prior to commencing with this article.

–>
Go into the dev directory/link located within your home directory:

$ cd ~/dev

Grab Terraform:

$ wget https://releases.hashicorp.com/terraform/0.12.21/terraform_0.12.21_linux_amd64.zip

Install Unzip if you do not have it installed:

$ sudo apt update && sudo apt -y install unzip

Unzip it to ~/.local/bin and set permissions accordingly on it (type y and hit enter to replace if upgrading, at the prompt):

$ unzip terraform_0.12.21_linux_amd64.zip -d ~/.local/bin && chmod 754 ~/.local/bin/terraform

Create a work folder and change into it:

$ mkdir -p terraform/aws/myweb/scripts && cd terraform/aws/myweb

Add an IAM Policy to the container-admin group so it will have access to EC2 and related (EIP/VPC/Routes/IGW/Route 53/SG/KeyPair):
AWS UI Console -> Services -> Security, Identity, & Compliance -> IAM -> Policies -> Create Policy -> JSON (replace <AWS ACCOUNT ID> in the Resource arn with your Account’s ID (shown under the top right drop-down (of your name) within the My Account page next to the Account Id: under Account Settings)):

 {
     "Version": "2012-10-17",
     "Statement": [
         {
             "Effect": "Allow",
             "Action": [
                 "ec2:TerminateInstances",
                 "route53:GetChange",
                 "route53:GetHostedZone",
                 "route53:ChangeTagsForResource",
                 "route53:DeleteHostedZone",
                 "route53:ListTagsForResource" 
             ],
             "Resource": [
                "arn:aws:ec2:*:<AWS ACCOUNT ID>:instance/*",
                "arn:aws:route53:::hostedzone/*",
                "arn:aws:route53:::change/*"
             ]      
         },
         {
             "Effect": "Allow",
             "Action": [
                 "ec2:DisassociateAddress",
                 "ec2:DeleteSubnet",
                 "ec2:DescribeAddresses",
                 "ec2:DescribeInstances",
                 "ec2:DescribeInstanceAttribute",
                 "ec2:CreateVpc",
                 "ec2:AttachInternetGateway",
                 "ec2:DescribeVpcAttribute",
                 "ec2:AssociateRouteTable",
                 "ec2:DescribeInternetGateways",
                 "ec2:DescribeNetworkInterfaces",
                 "ec2:CreateInternetGateway",
                 "ec2:CreateSecurityGroup",
                 "ec2:DescribeVolumes",
                 "ec2:DescribeAccountAttributes",
                 "ec2:ModifyVpcAttribute",
                 "ec2:DescribeKeyPairs",
                 "ec2:DescribeNetworkAcls",
                 "ec2:DescribeRouteTables",
                 "ec2:ReleaseAddress",
                 "ec2:ImportKeyPair",
                 "ec2:DescribeTags",
                 "ec2:DescribeVpcClassicLinkDnsSupport",
                 "ec2:CreateRouteTable",
                 "ec2:DetachInternetGateway",
                 "ec2:DisassociateRouteTable",
                 "ec2:AllocateAddress",
                 "ec2:DescribeInstanceCreditSpecifications",
                 "ec2:DescribeSecurityGroups",
                 "ec2:DescribeVpcClassicLink",
                 "ec2:DescribeImages",
                 "ec2:DescribeVpcs",
                 "ec2:DeleteVpc",
                 "ec2:AssociateAddress",
                 "ec2:CreateSubnet",
                 "ec2:DescribeSubnets",
                 "ec2:DeleteKeyPair",
                 "route53:CreateHostedZone",
                 "sts:GetCallerIdentity"
             ],
             "Resource": "*"
         }
     ]
 }

Review Policy ->

Name: AllowEC2
Description: Allow access to EC2 and related.

Create Policy.

Groups -> container-admin -> Attach Policy -> Search for AllowEC2 -> Attach Policy.
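
If you prefer the CLI over the console, the same policy could also be created and attached roughly as follows (a sketch; allow-ec2.json is just a hypothetical local file holding the JSON above, and your CLI credentials must be permitted to manage IAM):

$ aws iam create-policy --policy-name AllowEC2 --policy-document file://allow-ec2.json
$ aws iam attach-group-policy --group-name container-admin --policy-arn arn:aws:iam::<AWS ACCOUNT ID>:policy/AllowEC2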

–>
Note: If you are not using Lightsail then you can disregard this section.

Edit the IAM Policy “AllowLightsail” to add an allowance to GetKeyPair in Lightsail:
AWS UI Console -> Services -> Security, Identity, & Compliance -> IAM -> Policies -> AllowLightsail -> Edit Policy -> JSON ->

Append lightsail:GetKeyPair after lightsail:DeleteKeyPair and before lightsail:GetInstance.

It will look like this:

                  "lightsail:DeleteKeyPair",
                  "lightsail:GetKeyPair",
                  "lightsail:GetInstance",

Review Policy -> Save Changes
<–

Generate an SSH Key Pair (no password) and restrict permissions on it:

$ ssh-keygen -q -t rsa -b 2048 -N '' -f ~/.ssh/myweb && chmod 400 ~/.ssh/myweb

Pin the Terraform version to greater than or equal to 0.12:

$ cat << 'EOF' > versions.tf
> terraform {
>   required_version = ">= 0.12"
> }
> EOF

Set the version to greater than or equal to 2.0 for the AWS provider, interpolate the region and use the AWS CLI credentials file:

$ cat << 'EOF' > provider.tf
> provider "aws" {
>   version                 = ">= 2.0"
>
>   region                  = var.region
>   shared_credentials_file = "~/.aws/credentials"
>   profile                 = "default"
> }
> EOF

Set the default region as a variable, set prefix of myweb and set lightsail by default to true:

$ cat << 'EOF' > vars.tf
> variable "region" {
>   default = "us-east-1"
> }
>
> variable "prefix" {
>   default = "myweb"
> }
>
> variable "lightsail" {
>   default = true
> }
> EOF

While we are here, let us create a new Lightsail script/code (if you have completed the previous AWS/Terraform against Lightsail article, then please overwrite). This will execute if no override (‘lightsail = false’) is passed. This also adds our public key as authorized (as opposed to uploading it manually as in the previous article):

$ cat << 'EOF' > lightsail.tf
> # Create a DNS Zone
> resource "aws_lightsail_domain" "myweb" {
>   count       = var.lightsail ? 1 : 0
>   domain_name = "${var.prefix}.com"
> }
>
> # Allocate a Static (Public) IP
> resource "aws_lightsail_static_ip" "myweb" {
>   count = var.lightsail ? 1 : 0
>   name  = "static-ip_${var.prefix}"
> }
>
> # Add Public Key as authorized
> resource "aws_lightsail_key_pair" "myweb" {
>   count      = var.lightsail ? 1 : 0
>   name       = var.prefix
>   public_key = file("~/.ssh/${var.prefix}.pub")
> }
>
> # Create an Ubuntu Virtual Machine with key based access and run a script on boot
> resource "aws_lightsail_instance" "myweb" {
>   count             = var.lightsail ? 1 : 0
>   name              = "site_${var.prefix}"
>   availability_zone = "${var.region}a"
>   blueprint_id      = "ubuntu_18_04"
>   bundle_id         = "micro_2_0"
>   key_pair_name     = var.prefix
>   user_data         = file("scripts/install.sh")
>
>   tags = {
>         Site = "${var.prefix}.com"
>     }
> }
>
> # Attach the Static (Public) IP
> resource "aws_lightsail_static_ip_attachment" "myweb" {
>   count          = var.lightsail ? 1 : 0
>   static_ip_name = element(aws_lightsail_static_ip.myweb[*].name, 0)
>   instance_name  = element(aws_lightsail_instance.myweb[*].name, 0)
> }
> EOF

Note: The below adds a conditional to accommodate the previous implementation against Lightsail. It will get executed when ‘lightsail = false’ is passed.

The following is performed with this script/code:

  • create a Route 53 DNS Zone of myweb.com (no A records will be added)
  • create a Virtual Private Cloud for network 10.0.0.0/16 (tenancy is default)
  • add a subnet of 10.0.1.0/24 within the VPC
  • allocate a static Public IP
  • create a Security Group and add a Security rule for allowing SSH (port 22) Inbound
  • create an Internet Gateway and add a route out to it
  • create a T3a.micro instance (tenancy is default) based off of Ubuntu 18_04, with our public key added as authorized, referencing an external file for user_data (initialization script on Virtual Machine boot); the root Elastic Block Store volume is GP2
  • DNS support is enabled but DNS host names are not
  • tag all resources

Note: vpc_security_group_ids is used as opposed to security_groups, as the latter would cause a destroy/create of the instance every time an apply is performed:

$ cat << 'EOF' > ec2.tf
> # Create a DNS Zone 
> resource "aws_route53_zone" "myweb" {
>   count   = var.lightsail ? 0 : 1
>   name    = "${var.prefix}.com"
>   comment = "${var.prefix}.com (Public)"
>
>   tags = {
>         Site = "${var.prefix}.com"
>         Name = "${var.prefix}-dn"
>     }
> }
>
> # Create a Security Group and allow inbound port(s)
> resource "aws_security_group" "myweb" {
>   count       = var.lightsail ? 0 : 1
>   name        = var.prefix
>   description = "Allow Ports"
>   vpc_id      = element(aws_vpc.myweb[*].id, 0)
>
>   ingress {
>        from_port   = 22
>        to_port     = 22
>        protocol    = "tcp"
>        cidr_blocks = ["0.0.0.0/0"]
>        description = "SSH"
>     }
>
>   egress {
>        from_port   = 0
>        to_port     = 0
>        protocol    = "-1"
>        cidr_blocks = ["0.0.0.0/0"]
>        description = "All"
>     }
>
>   tags = {
>         Site = "${var.prefix}.com"
>         Name = "${var.prefix}-sg"
>     }
> }
>
> # Create a Virtual Private Cloud
> resource "aws_vpc" "myweb" {
>   count            = var.lightsail ? 0 : 1
>   cidr_block       = "10.0.0.0/16"
>   instance_tenancy = "default"
>
>   tags = {
>         Site = "${var.prefix}.com"
>         Name = "${var.prefix}-vpc"
>     }
> }
>
> # Add a Subnet
> resource "aws_subnet" "internal" {
>   count             = var.lightsail ? 0 : 1
>   vpc_id            = element(aws_vpc.myweb[*].id, 0)
>   cidr_block        = "10.0.1.0/24"
>   availability_zone = "${var.region}a"
>
>   tags = {
>         Site = "${var.prefix}.com"
>         Name = "internal"
>     }
> }
>
> # Create an Internet Gateway
> resource "aws_internet_gateway" "myweb" {
>   count  = var.lightsail ? 0 : 1
>   vpc_id = element(aws_vpc.myweb[*].id, 0)
>
>   tags = {
>         Site = "${var.prefix}.com"
>         Name = "${var.prefix}-igw"
>     }
> }
>
> # Allocate a Static Public IP
> resource "aws_eip" "external" {
>   count             = var.lightsail ? 0 : 1
>   vpc               = true
>   instance          = element(aws_instance.myweb[*].id, 0)
>   depends_on        = [aws_internet_gateway.myweb]
>
>   tags = {
>         Site = "${var.prefix}.com"
>         Name = "external"
>     }
> }
>
> # Add a route to the Internet Gateway
> resource "aws_route_table" "myweb" {
>   count  = var.lightsail ? 0 : 1
>   vpc_id = element(aws_vpc.myweb[*].id, 0)
>
>   route {
>        cidr_block = "0.0.0.0/0"
>        gateway_id = element(aws_internet_gateway.myweb[*].id, 0)
>     }
>
>   tags = {
>         Site = "${var.prefix}.com"
>         Name = "${var.prefix}-rt"
>     }
> }
>
> # Associate the route table with the Subnet
> resource "aws_route_table_association" "myweb" {
>   count          = var.lightsail ? 0 : 1
>   subnet_id      = element(aws_subnet.internal[*].id, 0)
>   route_table_id = element(aws_route_table.myweb[*].id, 0)
> }
>
> # Add Public Key as authorized
> resource "aws_key_pair" "myweb" {
>   count      = var.lightsail ? 0 : 1
>   key_name   = var.prefix
>   public_key = file("~/.ssh/${var.prefix}.pub")
> }
>
> # Select Ubuntu 18.04
> data "aws_ami" "ubuntu" {
>   count       = var.lightsail ? 0 : 1
>   most_recent = true
>
>   filter {
>         name   = "name"
>         values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
>     }
>
>   filter {
>         name   = "virtualization-type"
>         values = ["hvm"]
>     }
>
>   owners = ["099720109477"] # Canonical
> }
>
> # Create an Ubuntu Virtual Machine with key based access and run a script on boot
> resource "aws_instance" "myweb" {
>   count                    = var.lightsail ? 0 : 1
>   ami                      = element(data.aws_ami.ubuntu[*].id, 0)
>   instance_type            = "t3a.micro"
>   availability_zone        = "${var.region}a"
>   key_name                 = var.prefix
>   vpc_security_group_ids   = [element(concat(aws_security_group.myweb[*].id, list("")), 0)]
>   user_data                = file("scripts/install.sh")
>   subnet_id                = element(aws_subnet.internal[*].id, 0)
>   tenancy                  = "default"
>
>   tags = {
>         Site = "${var.prefix}.com"
>         Name = "${var.prefix}-ec2"
>     }
> } 
> EOF

Output our allocated and attached static Public IP after creation. Also output an inventory file to the Ansible work area for later consumption, accommodating our previous Lightsail implementation:

$ cat << 'EOF' > output.tf
> output "static_public_ip" {
>   value = var.lightsail ? element(aws_lightsail_static_ip.myweb[*].ip_address, 0) : element(aws_eip.external[*].public_ip, 0)
> }
>
> resource "local_file" "hosts" {
>   content              = "[vps]\n${var.lightsail ? element(aws_lightsail_static_ip.myweb[*].ip_address, 0) : element(aws_eip.external[*].public_ip, 0)} ansible_connection=ssh ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/${var.prefix} instance=${var.lightsail ? element(aws_lightsail_instance.myweb[*].name, 0) : element(aws_instance.myweb[*].tags["Name"], 0)}"
>   filename             = pathexpand("~/dev/ansible/hosts-aws")
>   directory_permission = "0754"
>   file_permission      = "0664"
> }
> EOF

If you have gone through the AWS/Terraform against Lightsail article, then please delete the template file for the install script:

$ rm install.tf

Create the shell script for user_data:

$ cat << 'EOF' > scripts/install.sh
> #!/bin/bash
>
> MY_HOME="/home/ubuntu"
> export DEBIAN_FRONTEND=noninteractive
>
> # Install prereqs
> apt update
> apt install -y python3-pip apt-transport-https ca-certificates curl software-properties-common
> # Install docker
> curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
> add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
> apt update
> apt install -y docker-ce
> # Install docker-compose
> su ubuntu -c "mkdir -p $MY_HOME/.local/bin" 
> su ubuntu -c "pip3 install docker-compose --upgrade --user && chmod 754 $MY_HOME/.local/bin/docker-compose"
> usermod -aG docker ubuntu
> # Add PATH
> printf "\nexport PATH=\$PATH:$MY_HOME/.local/bin\n" >> $MY_HOME/.bashrc
>
> exit 0
> EOF

Initialize the directory:

$ terraform init

Run a dry-run to see what will occur:

$ terraform plan -var 'lightsail=false'

Provision:

$ terraform apply -var 'lightsail=false' -auto-approve
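
If you would rather not pass the override on every run (the plan, apply and destroy commands here all use -var), the same setting could instead live in a terraform.tfvars file, which Terraform loads automatically (a minimal sketch):

$ cat << 'EOF' > terraform.tfvars
> lightsail = false
> EOF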

Log on to the instance after a short while:

$ ssh -i ~/.ssh/myweb ubuntu@<The value of static_public_ip that was reported.  One can also use 'terraform output static_public_ip' to print it again.>

Type yes and hit enter to accept.

On the host (a short while is needed for the run-once script to complete):

$ docker --version
$ docker-compose --version
$ logout

Tear down what was created by first performing a dry-run to see what will occur:

$ terraform plan -var 'lightsail=false' -destroy 

Tear down the instance:

$ terraform destroy -var 'lightsail=false' -auto-approve

<–

References:

Source:
terraform_aws_myweb

Azure/Terraform/Ansible/OpenShift – Provision a Virtual Machine instance and further configure it using Infrastructure as Code

Note: This article has been duplicated from the previous article which uses AWS and has been modified for Azure. It also unifies the AWS and Azure work inside Ansible, separates the Terraform work area by vendor and makes some other miscellaneous amendments.

In this article we will Provision an Azure host with docker/docker-compose using Terraform and install/initialize OpenShift Origin on it using Ansible.

OpenShift is Red Hat’s containerization platform which utilizes Kubernetes. Origin (what we will be working with here) is the opensource implementation of it.

We will use ‘myweb’ as an example in this article, using the same base path of ‘dev’ that was previously created, and the container-admin Service Principal.

Please ensure you have gone through the previous Terraform, Ansible, related preceding articles and ‘Create your Azure free account today’ .

Go in to the dev directory/link located within your home directory:

$ cd ~/dev

–>

Create an aws directory inside the Terraform work area and move myweb in to it (you can disregard this if you haven’t gone through the Terraform for AWS Lightsail article):

$ mkdir terraform/aws && mv terraform/myweb terraform/aws

While we are here, let us modify the output location reference of the hosts file within myweb on Terraform for LightSail (you can disregard this if you haven’t gone through the Terraform for AWS Lightsail article):

$ sed -i s:\"\${path.module}/../../ansible/hosts\":pathexpand\(\"~/dev/ansible/hosts-aws\"\): terraform/aws/myweb/output.tf

Merge the Azure folder contents in the Ansible work area back in to the root path and remove the folder.

Note: If you went through the Ansible for AWS Lightsail article, then you do not need to make the directory or copy the hosts file and scripts folder; conversely, you can disregard the .yml move if you haven’t gone through it:

$ mkdir ansible/myweb
$ cp -p ansible/azure/myweb/hosts ansible/myweb
$ cp -pr ansible/azure/myweb/rbac ansible/myweb
$ cp -pr ansible/azure/myweb/scripts ansible/myweb
$ cp -p ansible/azure/myweb/vm.yml ansible/myweb/azure_vm.yml
$ mv ansible/myweb/lightsail.yml ansible/myweb/aws_lightsail.yml
$ rm -r ansible/azure

<–

Change to the myweb directory inside terraform/azure:

$ cd terraform/azure/myweb

Let us make two changes to the script/code:

  • remove the template file and change the sourcing of the initialization/boot script to in-line
  • change our instance from a Basic_A1 to a Standard_B2S size so it will have sufficient resources to run OpenShift Origin and related

$ rm install.tf
$ sed -i 's:data.template_file.init_script.rendered:file("scripts/install.sh"):; s:Basic_A1:Standard_B2S:' vm.tf
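
You can optionally confirm both substitutions took effect before moving on:

$ grep -n 'Standard_B2S\|install.sh' vm.tf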

Output the Public IP of the Provisioned host (along with connection parameters and variables) in to a file which we will feed in to an Ansible playbook run:

$ cat << 'EOF' >> output.tf
>
> resource "local_file" "hosts" {
>   content              = "[vps]\n${azurerm_public_ip.external.ip_address} ansible_connection=ssh ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/${var.prefix} instance=${azurerm_virtual_machine.myweb.name} instance_rg=${azurerm_resource_group.myweb.name} instance_nsg=${azurerm_network_security_group.myweb.name}"
>   filename             = pathexpand("~/dev/ansible/hosts-azure")
>   directory_permission = "0754"
>   file_permission      = "0664"
> }
> EOF  
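
For reference, once provisioning completes the generated ~/dev/ansible/hosts-azure should look roughly like the following (the IP address and resource names will differ); the extra instance_rg and instance_nsg variables are what the unified OpenShift playbook below keys off of:

[vps]
<PUBLIC IP> ansible_connection=ssh ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/myweb instance=myweb-vm instance_rg=myweb-rg instance_nsg=myweb-nsg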

Terraform, when operating in a sub-shell, doesn’t delete the local hosts file (used for Ansible) on destroy, so let’s delete it when this is performed:

$ sed -i "s:terraform \$\*):terraform \$\* \&\& { [[ \$* =~ ^(destroy) \&\& \$? -eq 0 ]] \&\& rm -f \$HOME/dev/ansible/hosts-azure; }):" ~/.bashrc

Source it in:

$ . ~/.bashrc

Initialize the directory/refresh module(s):

$ terraform init

Run a dry-run to see what will occur:

$ terraform-az-sp plan

Provision:

$ terraform-az-sp apply -auto-approve

Create a work folder for an Ansible playbook:

$ cd ../../../ansible
$ mkdir -p openshift/scripts && cd openshift

Create an Ansible playbook which will install/initialize OpenShift Origin on our provisioned host.

Note: This is a unified playbook which accommodates our previous implementation against AWS Lightsail and uses the extra variable(s) in the hosts file as conditions:

$ cat << 'EOF' > openshift.yml 
> # Install, initialize OpenShift Origin and create a destroy routine for it
> # This is a unified setup against AWS Lightsail and Microsoft Azure VM
> ---
> - hosts: vps
>   connection: local
>
>   vars:
>     network_security_group: "{{ hostvars[groups['vps'][0]].instance_nsg }}"
>     instance: "{{ hostvars[groups['vps'][0]].instance }}"
>     resource_group: "{{ hostvars[groups['vps'][0]].instance_rg }}"
>     openshift_directory: /home/ubuntu/.local/etc/openshift
>     ansible_python_interpreter: /usr/bin/python3
>
>   tasks:
>     - name: Discover Services
>       service_facts:
>
>     - name: Check if openshift directory exists
>       stat:
>         path: "{{ openshift_directory }}"
>       register: openshift_dir
>       tags: [ 'destroy' ]
>
>     - name: Open Firewall Ports
>       delegate_to: localhost
>       args:
>         executable: /bin/bash
>       script: "./scripts/firewall.sh open {{ instance }}"
>       when:
>         - "'instance_nsg' not in hostvars[groups['vps'][0]]" 
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>
>     - name: Add Network Security Group rules
>       delegate_to: localhost
>       azure_rm_securitygroup:
>         name: "{{ network_security_group }}"
>         resource_group: "{{ resource_group }}"
>         rules:
>           - name: OpenShift-Tcp
>             priority: 1002
>             direction: Inbound
>             access: Allow
>             protocol: Tcp
>             source_port_range: "*"
>             destination_port_range:
>               - 80
>               - 443
>               - 1936
>               - 4001
>               - 7001
>               - 8443
>               - 10250-10259
>             source_address_prefix: "*"
>             destination_address_prefix: "*"
>           - name: OpenShift-Udp
>             priority: 1003
>             direction: Inbound
>             access: Allow
>             protocol: Udp
>             source_port_range: "*"
>             destination_port_range:
>               - 53
>               - 8053
>             source_address_prefix: "*"
>             destination_address_prefix: "*"
>         state: present
>       when:
>         - "'instance_nsg' in hostvars[groups['vps'][0]]"
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>
>     - name: Copy and Run install
>       environment:
>         PATH: "{{ ansible_env.PATH}}:{{ openshift_directory }}/../../bin"
>       args:
>         executable: /bin/bash
>       script: "./scripts/install.sh {{ ansible_ssh_host }}"
>       when:
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>
>     - debug: msg="Please install docker to proceed."
>       when: "'docker' not in services"
>
>     - debug: msg="Install script has already been completed.  Run this playbook with the destroy tag, then run once again normally to re-intialize openshift."
>       when: openshift_dir.stat.exists == True
>
>     - name: Destroy
>       become: yes
>       environment:
>         PATH: "{{ ansible_env.PATH }}:{{ openshift_directory }}/../../bin"
>       args:
>         executable: /bin/bash
>       shell:
>         "cd {{ openshift_directory }} && oc cluster down && cd ../ && rm -rf {{ openshift_directory }}/../../../.kube {{ openshift_directory }}"
>       when: openshift_dir.stat.exists == True
>       tags: [ 'never', 'destroy' ]
>
>     - name: Close Firewall Ports
>       delegate_to: localhost
>       args:
>         executable: /bin/bash
>       script: "./scripts/firewall.sh close {{ instance }}"
>       when: "'instance_nsg' not in hostvars[groups['vps'][0]]"
>       tags: [ 'never', 'destroy' ]
>
>     - name: Delete Network Security Group rules
>       delegate_to: localhost
>       command:
>         bash -ic "az-login-sp && (az network nsg rule delete -g {{ resource_group }} --nsg-name {{ network_security_group }} -n {{ item }})"
>       with_items:
>         - OpenShift-Tcp
>         - OpenShift-Udp
>       when: "'instance_nsg' in hostvars[groups['vps'][0]]"
>       tags: [ 'never', 'destroy' ]
> EOF

Create a shell script which will pull the latest release of the client tools from GitHub, place the needed binaries in ~/.local/bin, set the insecure registry on Docker and initialize the cluster:

$ cat << 'EOF' > scripts/install.sh
> #!/bin/bash
> [[ -z $* ]] && { echo "Please specify a Public IP or Host/Domain name." && exit 1; }
> # Fetch and Install
> file_url="$(curl -sL https://github.com/openshift/origin/releases/latest | grep "download.*client.*linux-64" | cut -f2 -d\" | sed 's/^/https:\/\/github.com/')"
> [[ -z $file_url ]] && { echo "The URL could not be obtained.  Please try again shortly." && exit 1; }
> file_name="$(echo $file_url | cut -f9 -d/)"
> if [[ ! -f $file_name ]]; then
>         curl -sL $file_url --output $file_name
>         folder_name="$(tar ztf $file_name 2>/dev/null | head -1 | sed s:/.*::)"
>         [[ -z $folder_name ]] && { echo "The archive could not be read.  Please try again." && rm -f $file_name && exit 1; }
>         tar zxf $file_name
>         mv $folder_name/oc $folder_name/kubectl $HOME/.local/bin && rm -r $folder_name
>         chmod 754 $HOME/.local/bin/oc $HOME/.local/bin/kubectl
> fi
> # Docker insecure
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 2 ]] && redirect=">"
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 1 ]] && redirect=">>"
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 0 ]] || { sudo bash -c "cat << 'EOF' $redirect /etc/docker/daemon.json
> {
>         \"insecure-registries\" : [ \"172.30.0.0/16\" ]
> }
> EOF" && sudo systemctl restart docker; }
> # OpenShift Origin up
> [[ ! -d $HOME/.local/etc/openshift ]] && { mkdir -p $HOME/.local/etc/openshift && cd $HOME/.local/etc/openshift; } || { cd $HOME/.local/etc/openshift && oc cluster down; }
> oc cluster up --public-hostname=$1
>
> exit 0
> EOF 

Run the Ansible playbook after a few minutes (accept the host key by typing yes and hitting enter when prompted):

$ ansible-playbook -i ../hosts-azure openshift.yml

After a short while, log on to the instance:

$ ssh -i ~/.ssh/myweb ubuntu@<The value of static_public_ip that was reported.  One can also use 'terraform output static_public_ip' to print it again.>

To get an overview of the current project with any identified issues:

$ oc status --suggest

Log on as Admin via CMD Line and switch to the default project:

$ oc login -u system:admin -n default

Logout of the session:

$ oc logout

Please see the Command-Line Walkthrough.

Logout from the host:

$ logout

Log on as Admin via Web Browser (replace <PUBLIC_IP>):

https://<PUBLIC_IP>:8443/console (You will get a Certificate/Site warning due to a mismatch).

Please see the Web Console Walkthrough.

To shut down the OpenShift Origin cluster, destroy the working folder and start anew (you can re-run the playbook normally to reinitialize):

$ ansible-playbook -i ../hosts-azure openshift.yml --tags "destroy"

Tear down what was created by first performing a dry-run to see what will occur:

$ cd ../../terraform/azure/myweb && terraform-az-sp plan -destroy 

Tear down the instance:

$ terraform-az-sp destroy -auto-approve

Destroy the Network Watcher Resource Group that was automatically created (if not found prior), if you do not have other virtual networks in the region which are using it (you can use either option below).

If you have not gone through the Azure/Ansible VM creation article:

$ az-login-sp
$ az group delete -n NetworkWatcherRG --yes
$ az logout

If you have gone through the Azure/Ansible VM article, created the playbook and have made the unification modification (the below is all on one line):

$ playbook_dir="$HOME/dev/ansible/myweb" && ansible-playbook -i $playbook_dir/hosts $playbook_dir/azure_vm.yml --tags "destroy_networkwatcher" && unset playbook_dir

<–

References:
how-to-install-openshift-origin-on-ubuntu-18-04

Source:
ansible_openshift

Azure/Ansible – Provision a Virtual Machine instance using Infrastructure as Code

Note: This article has been duplicated from the previous article which uses Terraform and has been modified for Ansible.

In this article we will use Ansible (Infrastructure as Code) to swiftly bring up a Microsoft Azure Virtual Machine instance in East US on a static IP, add a DNS Zone for the site in mention and install docker/docker-compose on it.

We will use ‘myweb’ as an example in this article, using the same base path of ‘dev’ that was previously created and the container-admin Service Principal.

Please use ‘Create your Azure free account today’ prior to commencing with this article.

–>
Go in to the dev directory/link located within your home directory:

$ cd ~/dev

Upgrade the Azure CLI on your host:

$ sudo apt update && sudo apt -y upgrade azure-cli

Update PIP:

$ python3 -m pip install --upgrade --user pip

If there was an update, then forget remembered location references in the shell environment:

$ hash -r pip 

Install/Upgrade Ansible:

$ pip3 install ansible --upgrade --user && chmod 754 ~/.local/bin/ansible ~/.local/bin/ansible-playbook

Install the Ansible Azure modules (this may take a while):

$ pip3 install 'ansible[azure]' --upgrade --user

Modify the profile and the key/variable strings in the previously created Azure credentials file:

$ sed -i 's/\[container-admin/\[default/; s/application_id/client_id/; s/client_secret/secret/; s/directory_id/tenant/' ~/.azure/credentials

Note: The above change will break the previously created terraform-az-sp function. If you are also using Terraform, then please do this (user’s startup):

$ sed -i 's:application_id/arm_:client_id/arm_:; s:client_secret/arm_:secret/arm_:; s:directory_id/arm_:tenant/arm_:' ~/.bashrc

Remove the subscription_id and modify the keys/variables in our previously created az-login-sp function (user’s startup).

If you have not gone through the Azure/Terraform article:

$ sed -i "s:\$HOME/.azure/credentials | xargs):\$HOME/.azure/credentials | sed '/subscription_id/d; s/client_id/application_id/; s/secret/client_secret/; s/tenant/directory_id/' | xargs):" ~/.bashrc

If you have gone through the Azure/Terraform article:

$ sed -i "s:\$HOME/.azure/credentials | sed '/subscription_id/d':\$HOME/.azure/credentials | sed '/subscription_id/d; s/client_id/application_id/; s/secret/client_secret/; s/tenant/directory_id/':" ~/.bashrc

Source it in:

$ . ~/.bashrc

Add the <SUBSCRIPTION ID> (UI Console -> Azure Active Directory -> Search for (top left): Subscriptions -> Click your subscription -> Overview) in to the Azure credentials file (replace <SUBSCRIPTION ID>).

Note: This can be omitted if you have gone through the Azure/Terraform article:

$ echo "subscription_id=<SUBSCRIPTION ID>" >> ~/.azure/credentials 

In the Subscription, add roles to container-admin:
Access control (IAM) -> Role assignments ->

Add (Add role assignment) -> Role: DNS Zone Contributor -> Assign access to: Azure AD user, group, or service principal -> Select: container-admin -> Save

Add (Add role assignment) -> Role: Network Contributor -> Assign access to: Azure AD user, group, or service principal -> Select: container-admin -> Save

Add (Add role assignment) -> Role: Virtual Machine Contributor -> Assign access to: Azure AD user, group, or service principal -> Select: container-admin -> Save
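
Alternatively, the same three role assignments could be made from the CLI (a sketch, assuming you are signed in via az login as an administrator; repeat for each of the three roles):

$ az role assignment create --role "DNS Zone Contributor" --assignee $(grep client_id ~/.azure/credentials | cut -f2 -d=) --subscription $(grep subscription_id ~/.azure/credentials | cut -f2 -d=)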

Create a work folder and change in to it:

$ mkdir -p ansible/azure/myweb/scripts ansible/azure/myweb/rbac && cd ansible/azure/myweb

Create a custom Role Based Access for Resource Groups so container-admin can Read, Write and Delete Resource Groups in the subscription (replace <SUBSCRIPTION ID>):

$ cat << 'EOF' > rbac/rg-custom.jsn
> {
>    "Name": "Resource Group Allowance",
>    "IsCustom": true,
>    "Description": "Can read, write and delete Resource Groups.",
>    "Actions": [
>       "Microsoft.Resources/subscriptions/resourceGroups/read",
>       "Microsoft.Resources/subscriptions/resourceGroups/write",
>       "Microsoft.Resources/subscriptions/resourceGroups/delete"
>    ],
>    "NotActions": [],
>    "AssignableScopes": [
>       "/subscriptions/<SUBSCRIPTION ID>"
>    ]
> }
> EOF

Authenticate to Azure using the CLI with the same Administrative credentials you use in the UI (a browser window will popup requesting credentials):

$ az login

Create the Role Definition:

$ az role definition create --role-definition rbac/rg-custom.jsn

Add the role to container-admin:

$ az role assignment create --role "Resource Group Allowance" --assignee $(grep client_id ~/.azure/credentials | cut -f2 -d=) --subscription $(grep subscription_id ~/.azure/credentials | cut -f2 -d=)
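
Optionally verify the custom role and its assignment while still signed in:

$ az role definition list --name "Resource Group Allowance"
$ az role assignment list --assignee $(grep client_id ~/.azure/credentials | cut -f2 -d=)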

Logout of the Azure CLI session:

$ az logout

Generate an SSH Key Pair (no password) and restrict permissions on it:

$ ssh-keygen -q -t rsa -b 2048 -N '' -f ~/.ssh/myweb && chmod 400 ~/.ssh/myweb

Create a hosts file and specify localhost:

$ cat << 'EOF' > hosts
> [local]
> localhost
> EOF

The following is performed with this Playbook:

  • create a resource group where all of our resources will be put in (within East US)
  • create an Azure DNS Zone of myweb.com (no A records will be created)
  • create a virtual network of 10.0.0.0/16
  • add a subnet of 10.0.1.0/24 within the VNET
  • allocate a static Public IP
  • create a Network Security Group and add a Security rule for allowing SSH (port 22) Inbound
  • create a Network Interface with a Dynamic Private IP
  • create a Basic A1 instance based off of Ubuntu 18_04 with a Standard SSD, password authentication turned off, our public key added as authorized, referencing an external file for custom_data (initialization script on Virtual Machine boot)
  • tag all resources

$ cat << 'EOF' > vm.yml
> # Create an Azure Virtual Machine instance and add a way to destroy it
> ---
> - hosts: local
>   connection: local
>
>   vars:
>     region: EastUS
>     prefix: myweb
>     subnet_name: internal
>     public_ip_name: external
>
>   tasks:
>   - name: Create a resource group
>     azure_rm_resourcegroup:
>       name: "{{ prefix }}-rg"
>       location: "{{ region }}"
>       state: present
>       tags:
>           Site: "{{ prefix }}.com"
>
>   - name: Create a DNS Zone
>     azure_rm_dnszone:
>       name: "{{ prefix }}.com"
>       resource_group: "{{ prefix }}-rg"
>       state: present
>       tags:
>           Site: "{{ prefix }}.com"
>
>   - name: Create a Virtual Network
>     azure_rm_virtualnetwork:
>       name: "{{ prefix }}-net"
>       address_prefixes: "10.0.0.0/16"
>       location: "{{ region }}"
>       resource_group: "{{ prefix }}-rg"
>       state: present
>       tags:
>           Site: "{{ prefix }}.com"
>
>   - name: Add a Subnet
>     azure_rm_subnet:
>       name: "{{ subnet_name }}"
>       resource_group: "{{ prefix }}-rg"
>       virtual_network: "{{ prefix }}-net"
>       address_prefix: "10.0.1.0/24"
>       state: present
>
>   - name: Allocate a Static Public IP
>     azure_rm_publicipaddress:
>       name: "{{ public_ip_name }}"
>       location: "{{ region }}"
>       resource_group: "{{ prefix }}-rg"
>       allocation_method: Static
>       state: present
>       tags:
>           Site: "{{ prefix }}.com"
>     register: static_public_ip
>
>   - name: Create a Network Security Group and allow inbound port(s)
>     azure_rm_securitygroup:
>       name: "{{ prefix }}-nsg"
>       location: "{{ region }}"
>       resource_group: "{{ prefix }}-rg"
>       rules:
>         - name: SSH
>           priority: 1001
>           direction: Inbound
>           access: Allow
>           protocol: Tcp
>           source_port_range: "*"
>           destination_port_range: 22
>           source_address_prefix: "*"
>           destination_address_prefix: "*"
>       state: present
>       tags:
>           Site: "{{ prefix }}.com"
> 
>   - name: Create a Network Interface with a Dynamic Private IP
>     azure_rm_networkinterface:
>       name: "{{ prefix }}-nic"
>       location: "{{ region }}"
>       resource_group: "{{ prefix }}-rg"
>       security_group: "{{ prefix }}-nsg"
>       virtual_network: "{{ prefix }}-net"
>       subnet_name: "{{ subnet_name }}"
>       ip_configurations:
>         - name: "{{ prefix }}-nic_conf"
>           private_ip_allocation_method: Dynamic
>           public_ip_address_name: "{{ public_ip_name }}"
>           primary: True
>       state: present
>       tags:
>           Site: "{{ prefix }}.com"
>
>   - name: Create an Ubuntu Virtual Machine with key based access and run a script on boot; use a Standard SSD
>     azure_rm_virtualmachine:
>       name: "{{ prefix }}-vm"
>       location: "{{ region }}"
>       resource_group: "{{ prefix }}-rg"
>       network_interfaces: "{{ prefix }}-nic"
>       vm_size: Basic_A1
>       image:
>         publisher: Canonical
>         offer: UbuntuServer
>         sku: '18.04-LTS'
>         version: latest
>       os_type: Linux
>       os_disk_name: "{{ prefix }}-disk"
>       os_disk_caching: ReadWrite
>       managed_disk_type: StandardSSD_LRS
>       short_hostname: "{{ prefix }}"
>       admin_username: ubuntu
>       custom_data: "{{ lookup('file', './scripts/install.sh') }}"
>       ssh_password_enabled: false
>       ssh_public_keys:
>             - path: /home/ubuntu/.ssh/authorized_keys
>               key_data: "{{ lookup('file', '~/.ssh/{{ prefix }}.pub') }}"
>       state: present
>       tags:
>           'Site': "{{ prefix }}.com"
>
>   - debug: msg="Public (static) IP is {{ static_public_ip.state.ip_address }} for {{ azure_vm.name }}"
>     when: static_public_ip.state.ip_address is defined
>
>   - debug: msg="Run this playbook for {{ azure_vm.name }} shortly to list the Public (static) IP."
>     when: static_public_ip.state.ip_address is not defined
>
>   - name: Destroy a Resource Group and all resources that fall under it
>     azure_rm_resourcegroup:
>       name: "{{ prefix }}-rg"
>       force_delete_nonempty: yes
>       state: absent
>     tags: [ 'never', 'destroy' ]
>
>   - name: Destroy the Network Watcher Resource Group and all resources that fall under it
>     azure_rm_resourcegroup:
>       name: "NetworkWatcherRG"
>       force_delete_nonempty: yes
>       state: absent
>     tags: [ 'never', 'destroy_networkwatcher' ]
> EOF

Create the shell script for custom_data:

$ cat << 'EOF' > scripts/install.sh
> #!/bin/bash
>
> MY_HOME="/home/ubuntu"
> export DEBIAN_FRONTEND=noninteractive
>
> # Install prereqs
> apt update
> apt install -y python3-pip apt-transport-https ca-certificates curl software-properties-common
> # Install docker
> curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
> add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
> apt update
> apt install -y docker-ce
> # Install docker-compose
> su ubuntu -c "mkdir -p $MY_HOME/.local/bin"
> su ubuntu -c "pip3 install docker-compose --upgrade --user && chmod 754 $MY_HOME/.local/bin/docker-compose"
> usermod -aG docker ubuntu
> # Add PATH
> printf "\nexport PATH=\$PATH:$MY_HOME/.local/bin\n" >> $MY_HOME/.bashrc
>
> exit 0
> EOF

Run the playbook:

$ ansible-playbook -i hosts vm.yml

Log on to the instance after a short while:

$ ssh -i ~/.ssh/myweb ubuntu@<The value of static_public_ip that was reported. One can also re-run the playbook to print it again.>

Type yes and hit enter to accept.

On the host (a short while is needed for the boot initialization script to complete):

$ docker --version
$ docker-compose --version
$ logout

Tear down the instance:

$ ansible-playbook -i hosts vm.yml --tags "destroy"

Destroy the Network Watcher Resource Group that was automatically created (if not found prior), if you do not have other virtual networks in the region which are using it:

$ ansible-playbook -i hosts vm.yml --tags "destroy_networkwatcher"

<–

References:

Source:
ansible_myweb

Azure/Terraform – Provision a Virtual Machine instance using Infrastructure as Code

In this article we will use Terraform (Infrastructure as Code) to swiftly bring up a Microsoft Azure Virtual Machine instance in East US on a static IP, add a DNS Zone for the site in mention and install docker/docker-compose on it.

We will use ‘myweb’ as an example in this article, using the same base path of ‘dev’ that was previously created and the container-admin Service Principal.

Please use ‘Create your Azure free account today’ prior to commencing with this article.

–>
Go in to the dev directory/link located within your home directory:

$ cd ~/dev

Upgrade the Azure CLI on your host:

$ sudo apt update && sudo apt -y upgrade azure-cli

Grab Terraform:

$ wget https://releases.hashicorp.com/terraform/0.12.20/terraform_0.12.20_linux_amd64.zip

Install Unzip if you do not have it installed:

$ sudo apt -y install unzip

Unzip it to ~/.local/bin and set permissions accordingly on it:

$ unzip terraform_0.12.20_linux_amd64.zip -d ~/.local/bin && chmod 754 ~/.local/bin/terraform

Add this function in to your user’s startup to parse the previously created credentials file and pass pertinent login information as a servicePrincipal to Terraform, in a sub-shell:

$ cat << 'EOF' >> ~/.bashrc
>
> function terraform-az-sp() {
>         (export $(grep -v '^\[' $HOME/.azure/credentials | sed 's/application_id/arm_client_id/; s/client_secret/arm_client_secret/; s/directory_id/arm_tenant_id/; s/subscription_id/arm_subscription_id/; s/^[^=]*/\U&\E/' | xargs) && terraform $*)
> }
> EOF
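
For reference, the sub-shell ends up exporting roughly the following (values taken from the credentials file), which is how the azurerm provider picks up the Service Principal without hard-coding anything in the .tf files:

ARM_CLIENT_ID=<APPLICATION ID>
ARM_CLIENT_SECRET=<CLIENT SECRET>
ARM_TENANT_ID=<DIRECTORY ID>
ARM_SUBSCRIPTION_ID=<SUBSCRIPTION ID>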

Remove the subscription_id in our previously created az-login-sp function (user’s startup):

$ sed -i "s:\$HOME/.azure/credentials | xargs):\$HOME/.azure/credentials | sed '/subscription_id/d' | xargs):" ~/.bashrc

Source it in:

$ . ~/.bashrc

Add the <SUBSCRIPTION ID> (UI Console -> Azure Active Directory -> Search for (top left): Subscriptions -> Click your subscription -> Overview) in to the Azure credentials file (replace <SUBSCRIPTION ID>):

$ echo "subscription_id=<SUBSCRIPTION ID>" >> ~/.azure/credentials 

In the Subscription, add roles to container-admin:
Access control (IAM) -> Role assignments ->

Add (Add role assignment) -> Role: DNS Zone Contributor -> Assign access to: Azure AD user, group, or service principal -> Select: container-admin -> Save

Add (Add role assignment) -> Role: Network Contributor -> Assign access to: Azure AD user, group, or service principal -> Select: container-admin -> Save

Add (Add role assignment) -> Role: Virtual Machine Contributor -> Assign access to: Azure AD user, group, or service principal -> Select: container-admin -> Save

Create a work folder and change in to it:

$ mkdir -p terraform/azure/myweb/scripts terraform/azure/myweb/rbac && cd terraform/azure/myweb

Create a custom Role Based Access for Resource Groups so container-admin can Read, Write and Delete Resource Groups in the subscription (replace <SUBSCRIPTION ID>):

$ cat << 'EOF' > rbac/rg-custom.jsn
> {
>    "Name": "Resource Group Allowance",
>    "IsCustom": true,
>    "Description": "Can read, write and delete Resource Groups.",
>    "Actions": [
>       "Microsoft.Resources/subscriptions/resourceGroups/read",
>       "Microsoft.Resources/subscriptions/resourceGroups/write",
>       "Microsoft.Resources/subscriptions/resourceGroups/delete"
>    ],
>    "NotActions": [],
>    "AssignableScopes": [
>       "/subscriptions/<SUBSCRIPTION ID>"
>    ]
> }
> EOF

Authenticate to Azure using the CLI with the same Administrative credentials you use in the UI (a browser window will popup requesting credentials):

$ az login

Create the Role Definition:

$ az role definition create --role-definition rbac/rg-custom.jsn

Add the role to container-admin:

$ az role assignment create --role "Resource Group Allowance" --assignee $(grep application_id ~/.azure/credentials | cut -f2 -d=) --subscription $(grep subscription_id ~/.azure/credentials | cut -f2 -d=)

Generate an SSH Key Pair (no password) and restrict permissions on it:

$ ssh-keygen -q -t rsa -b 2048 -N '' -f ~/.ssh/myweb && chmod 400 ~/.ssh/myweb

Ensure the terraform version is greater than or equal to 0.12:

$ cat << 'EOF' > versions.tf
> terraform {
>   required_version = ">= 0.12"
> }
> EOF

Set the version for the AzureRM Provider to greater than or equal to 1.44:

$ cat << 'EOF' > provider.tf
> provider "azurerm" {
>   version = ">= 1.44"
> }
> EOF

Set the default region and prefix variable:

$ cat << 'EOF' > vars.tf
> variable "region" {
>   default = "EastUS"
> }
>
> variable "prefix" {
>   default = "myweb"
> }
> EOF

The following is performed with this script/code:

  • create a resource group where all of our resources will be put in (within East US)
  • create an Azure DNS Zone of myweb.com (no A records will be created)
  • create a virtual network of 10.0.0.0/16
  • add a subnet of 10.0.1.0/24 within the VNET
  • allocate a static Public IP
  • create a Network Security Group and add a Security rule for allowing SSH (port 22) Inbound
  • create a Network Interface with a Dynamic Private IP
  • create a Basic A1 instance based off of Ubuntu 18_04 with a Standard SSD, password authentication turned off, our public key added as authorized, referencing an external file for custom_data (initialization script on Virtual Machine boot)
  • tag all resources

$ cat << 'EOF' > vm.tf
> # Create a Resource Group
> resource "azurerm_resource_group" "myweb" {
>   name     = "${var.prefix}-rg"
>   location = var.region
>
>   tags = {
>         Site = "${var.prefix}.com"
>     }
> }
>
> # Create a DNS Zone
> resource "azurerm_dns_zone" "myweb" {
>   name                = "${var.prefix}.com"
>   resource_group_name = azurerm_resource_group.myweb.name
>
>   tags = {
>         Site = "${var.prefix}.com"
>     }
> }
>
> # Create a Virtual Network
> resource "azurerm_virtual_network" "myweb" {
>   name                = "${var.prefix}-net"
>   address_space       = ["10.0.0.0/16"]
>   location            = azurerm_resource_group.myweb.location
>   resource_group_name = azurerm_resource_group.myweb.name
>
>   tags = {
>         Site = "${var.prefix}.com"
>     }
> }
>
> # Add a Subnet
> resource "azurerm_subnet" "internal" {
>   name                 = "internal"
>   resource_group_name  = azurerm_resource_group.myweb.name
>   virtual_network_name = azurerm_virtual_network.myweb.name
>   address_prefix       = "10.0.1.0/24"
> }
>
> # Allocate a Static Public IP
> resource "azurerm_public_ip" "external" {
>   name                = "external"
>   location            = var.region
>   resource_group_name = azurerm_resource_group.myweb.name
>   allocation_method   = "Static"
>
>   tags = {
>         Site = "${var.prefix}.com"
>     }
> }
>
> # Create a Network Security Group and allow inbound port(s)
> resource "azurerm_network_security_group" "myweb" {
>   name                = "${var.prefix}-nsg"
>   location            = var.region
>   resource_group_name = azurerm_resource_group.myweb.name
>
>   security_rule {
>         name                       = "SSH"
>         priority                   = 1001
>         direction                  = "Inbound"
>         access                     = "Allow"
>         protocol                   = "Tcp"
>         source_port_range          = "*"
>         destination_port_range     = "22"
>         source_address_prefix      = "*"
>         destination_address_prefix = "*"
>     }
>
>   tags = {
>         Site = "${var.prefix}.com"
>     }
> }
>
> # Create a Network Interface with a Dynamic Private IP
> resource "azurerm_network_interface" "myweb" {
>   name                      = "${var.prefix}-nic"
>   location                  = azurerm_resource_group.myweb.location
>   resource_group_name       = azurerm_resource_group.myweb.name
>   network_security_group_id = azurerm_network_security_group.myweb.id
>
>   ip_configuration {
>        name                          = "${var.prefix}-nic_conf"
>        subnet_id                     = azurerm_subnet.internal.id
>        private_ip_address_allocation = "Dynamic"
>        public_ip_address_id          = azurerm_public_ip.external.id
>     }
>
>   tags = {
>         Site = "${var.prefix}.com"
>     }
> }
>
> # Create an Ubuntu Virtual Machine with key based access and run a script on boot; use a Standard SSD
> resource "azurerm_virtual_machine" "myweb" {
>   name                  = "${var.prefix}-vm"
>   location              = azurerm_resource_group.myweb.location
>   resource_group_name   = azurerm_resource_group.myweb.name
>   network_interface_ids = [azurerm_network_interface.myweb.id]
>   vm_size               = "Basic_A1"
>
>   storage_image_reference {
>       publisher = "Canonical"
>       offer     = "UbuntuServer"
>       sku       = "18.04-LTS"
>       version   = "latest"
>     }
>
>   storage_os_disk {
>       name              = "${var.prefix}-disk"
>       caching           = "ReadWrite"
>       create_option     = "FromImage"
>       managed_disk_type = "StandardSSD_LRS"
>     }
>
>   os_profile {
>       computer_name  = var.prefix
>       admin_username = "ubuntu"
>       custom_data    = data.template_file.init_script.rendered
>     }
>
>   os_profile_linux_config {
>         disable_password_authentication = true
>         ssh_keys {
>             path     = "/home/ubuntu/.ssh/authorized_keys"
>             key_data = file("~/.ssh/${var.prefix}.pub")
>           }
>     }
>
>   tags = {
>         Site = "${var.prefix}.com"
>     }
> }
> EOF

Output our allocated static Public IP after creation:

$ cat << 'EOF' > output.tf
> output "static_public_ip" {
>   value = azurerm_public_ip.external.ip_address
> }
> EOF

Create a template file to reference a boot initialization script:

$ cat << 'EOF' > install.tf
> data "template_file" "init_script" {
>   template = "${file("scripts/install.sh")}"
> }
> EOF

Create the shell script for custom_data:

$ cat << 'EOF' > scripts/install.sh
> #!/bin/bash
>
> MY_HOME="/home/ubuntu"
> export DEBIAN_FRONTEND=noninteractive
>
> # Install prereqs
> apt update
> apt install -y python3-pip apt-transport-https ca-certificates curl software-properties-common
> # Install docker
> curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
> add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
> apt update
> apt install -y docker-ce
> # Install docker-compose
> su ubuntu -c "mkdir -p $MY_HOME/.local/bin" 
> su ubuntu -c "pip3 install docker-compose --upgrade --user && chmod 754 $MY_HOME/.local/bin/docker-compose"
> usermod -aG docker ubuntu
> # Add PATH
> printf "\nexport PATH=\$PATH:$MY_HOME/.local/bin\n" >> $MY_HOME/.bashrc
>
> exit 0
> EOF

Initialize the directory:

$ terraform init

Run a dry-run to see what will occur:

$ terraform-az-sp plan

Provision:

$ terraform-az-sp apply -auto-approve

Log on to the instance after a short while:

$ ssh -i ~/.ssh/myweb ubuntu@<The value of static_public_ip that was reported.  One can also use 'terraform-az-sp output static_public_ip' to print it again.>

Type yes and hit enter to accept.

On the host (a short while is needed for the boot initialization script to complete):

$ docker --version
$ docker-compose --version
$ logout

Tear down what was created by first performing a dry-run to see what will occur:

$ terraform-az-sp plan -destroy 

Tear down the instance:

$ terraform-az-sp destroy -auto-approve

Destroy the Network Watcher Resource Group that was automatically created (if not found prior), if you do not have other virtual networks in the region which are using it:

$ az group delete -n NetworkWatcherRG --yes

Logout of the Azure CLI session:

$ az logout

<–

References:

Source:
terraform_azure_myweb