Note: This article is duplicated from the previous article, which uses AWS, and has been modified for Azure. It also unifies the playbook inside Ansible, delineates vendors inside Terraform and makes some other miscellaneous amendments.

In this article we will provision an Azure host with docker/docker-compose using Terraform and install/initialize OpenShift Origin on it using Ansible.

OpenShift is Red Hat’s containerization platform, which utilizes Kubernetes. Origin (what we will be working with here) is its open-source implementation.

We will use ‘myweb’ as an example in this article, using the same base path of ‘dev’ that was previously created, and the container-admin Service Principal.

Please ensure you have gone through the previous Terraform and Ansible articles, their related preceding articles, and ‘Create your Azure free account today’.

Go into the dev directory/link located within your home directory:

$ cd ~/dev

–>

Create an aws directory inside the Terraform work area and move myweb into it (you can disregard this if you haven’t gone through the Terraform for AWS Lightsail article):

$ mkdir terraform/aws && mv terraform/myweb terraform/aws

While we are here, let us modify the output location reference of the hosts file within myweb on Terraform for Lightsail (you can disregard this if you haven’t gone through the Terraform for AWS Lightsail article):

$ sed -i s:\"\${path.module}/../../ansible/hosts\":pathexpand\(\"~/dev/ansible/hosts-aws\"\): terraform/aws/myweb/output.tf
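
If you would like to confirm the substitution, the filename argument of the local_file resource should now point at the expanded ~/dev/ansible/hosts-aws path (spacing may differ):

$ grep -n filename terraform/aws/myweb/output.tf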

Merge the Azure folder contents in the Ansible work area back into the root path and remove the folder.

Note: If you went through the Ansible for AWS Lightsail article then you will not need to make the directory or copy the hosts file and scripts folder (you can disregard the .yml move if you haven’t gone through it):

$ mkdir ansible/myweb
$ cp -p ansible/azure/myweb/hosts ansible/myweb
$ cp -pr ansible/azure/myweb/rbac ansible/myweb
$ cp -pr ansible/azure/myweb/scripts ansible/myweb
$ cp -p ansible/azure/myweb/vm.yml ansible/myweb/azure_vm.yml
$ mv ansible/myweb/lightsail.yml ansible/myweb/aws_lightsail.yml
$ rm -r ansible/azure
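
Depending on which of the previous articles you followed, a quick listing of the merged folder should now show the hosts file, the rbac and scripts folders and the renamed playbook(s):

$ ls ansible/myweb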

<–

Change to the myweb directory inside terraform/azure:

$ cd terraform/azure/myweb

Let us make two changes to the script/code:

  • remove the template file and change the sourcing of the initialization/boot script to in-line
  • change our instance from a Basic_A1 to a Standard_B2S size so it will have sufficient resources to run OpenShift Origin and related components

$ rm install.tf
$ sed -i 's:data.template_file.init_script.rendered:file("scripts/install.sh"):; s:Basic_A1:Standard_B2S:' vm.tf
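
You can quickly confirm that both substitutions took effect (expect one line sourcing scripts/install.sh in-line and one line with the new size):

$ grep -nE 'install.sh|Standard_B2S' vm.tf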

Output the Public IP of the provisioned host (along with connection parameters and variables) into a file which we will feed into an Ansible playbook run:

$ cat << 'EOF' >> output.tf
>
> resource "local_file" "hosts" {
>   content              = "[vps]\n${azurerm_public_ip.external.ip_address} ansible_connection=ssh ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/${var.prefix} instance=${azurerm_virtual_machine.myweb.name} instance_rg=${azurerm_resource_group.myweb.name} instance_nsg=${azurerm_network_security_group.myweb.name}"
>   filename             = pathexpand("~/dev/ansible/hosts-azure")
>   directory_permission = "0754"
>   file_permission      = "0664"
> }
> EOF
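
Optionally, have Terraform normalize the formatting of the block we just appended (fmt does not require the directory to have been initialized):

$ terraform fmt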

Terraform, when operating in a sub-shell, doesn’t delete the local hosts file (used for Ansible) on destroy, so let’s remove it ourselves whenever a destroy completes successfully:

$ sed -i "s:terraform \$\*):terraform \$\* \&\& { [[ \$* =~ ^(destroy) \&\& \$? -eq 0 ]] \&\& rm -f \$HOME/dev/ansible/hosts-azure; }):" ~/.bashrc

Source it in:

$ . ~/.bashrc

Initialize the directory/refresh module(s):

$ terraform init

Run a dry-run to see what will occur:

$ terraform-az-sp plan

Provision:

$ terraform-az-sp apply -auto-approve
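
Once the apply completes, the generated inventory should contain a [vps] group with the Public IP and the connection/instance variables we defined in output.tf; you can review it with:

$ cat ~/dev/ansible/hosts-azure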

Create a work folder for an Ansible playbook:

$ cd ../../../ansible
$ mkdir -p openshift/scripts && cd openshift

Create an Ansible playbook which will install/initialize OpenShift Origin on our provisioned host.

Note: This is a unified playbook which accommodates our previous implementation against AWS Lightsail and uses the extra variable(s) in the hosts file to condition its tasks:

$ cat << 'EOF' > openshift.yml 
> # Install, initialize OpenShift Origin and create a destroy routine for it
> # This is a unified setup against AWS Lightsail and Microsoft Azure VM
> ---
> - hosts: vps
>   connection: local
>
>   vars:
>     network_security_group: "{{ hostvars[groups['vps'][0]].instance_nsg }}"
>     instance: "{{ hostvars[groups['vps'][0]].instance }}"
>     resource_group: "{{ hostvars[groups['vps'][0]].instance_rg }}"
>     openshift_directory: /home/ubuntu/.local/etc/openshift
>     ansible_python_interpreter: /usr/bin/python3
>
>   tasks:
>     - name: Discover Services
>       service_facts:
>
>     - name: Check if openshift directory exists
>       stat:
>         path: "{{ openshift_directory }}"
>       register: openshift_dir
>       tags: [ 'destroy' ]
>
>     - name: Open Firewall Ports
>       delegate_to: localhost
>       args:
>         executable: /bin/bash
>       script: "./scripts/firewall.sh open {{ instance }}"
>       when:
>         - "'instance_nsg' not in hostvars[groups['vps'][0]]" 
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>
>     - name: Add Network Security Group rules
>       delegate_to: localhost
>       azure_rm_securitygroup:
>         name: "{{ network_security_group }}"
>         resource_group: "{{ resource_group }}"
>         rules:
>           - name: OpenShift-Tcp
>             priority: 1002
>             direction: Inbound
>             access: Allow
>             protocol: Tcp
>             source_port_range: "*"
>             destination_port_range:
>               - 80
>               - 443
>               - 1936
>               - 4001
>               - 7001
>               - 8443
>               - 10250-10259
>             source_address_prefix: "*"
>             destination_address_prefix: "*"
>           - name: OpenShift-Udp
>             priority: 1003
>             direction: Inbound
>             access: Allow
>             protocol: Udp
>             source_port_range: "*"
>             destination_port_range:
>               - 53
>               - 8053
>             source_address_prefix: "*"
>             destination_address_prefix: "*"
>         state: present
>       when:
>         - "'instance_nsg' in hostvars[groups['vps'][0]]"
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>
>     - name: Copy and Run install
>       environment:
>         PATH: "{{ ansible_env.PATH}}:{{ openshift_directory }}/../../bin"
>       args:
>         executable: /bin/bash
>       script: "./scripts/install.sh {{ ansible_ssh_host }}"
>       when:
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>
>     - debug: msg="Please install docker to proceed."
>       when: "'docker' not in services"
>
>     - debug: msg="Install script has already been completed.  Run this playbook with the destroy tag, then run once again normally to re-intialize openshift."
>       when: openshift_dir.stat.exists == True
>
>     - name: Destroy
>       become: yes
>       environment:
>         PATH: "{{ ansible_env.PATH }}:{{ openshift_directory }}/../../bin"
>       args:
>         executable: /bin/bash
>       shell:
>         "cd {{ openshift_directory }} && oc cluster down && cd ../ && rm -rf {{ openshift_directory }}/../../../.kube {{ openshift_directory }}"
>       when: openshift_dir.stat.exists == True
>       tags: [ 'never', 'destroy' ]
>
>     - name: Close Firewall Ports
>       delegate_to: localhost
>       args:
>         executable: /bin/bash
>       script: "./scripts/firewall.sh close {{ instance }}"
>       when: "'instance_nsg' not in hostvars[groups['vps'][0]]"
>       tags: [ 'never', 'destroy' ]
>
>     - name: Delete Network Security Group rules
>       delegate_to: localhost
>       command:
>         bash -ic "az-login-sp && (az network nsg rule delete -g {{ resource_group }} --nsg-name {{ network_security_group }} -n {{ item }})"
>       with_items:
>         - OpenShift-Tcp
>         - OpenShift-Udp
>       when: "'instance_nsg' in hostvars[groups['vps'][0]]"
>       tags: [ 'never', 'destroy' ]

Create a shell script which will pull the latest release of the client tools from GitHub, place the needed binaries in ~/.local/bin, set the insecure registry on Docker and initialize the cluster:

$ cat << 'EOF' > scripts/install.sh
> #!/bin/bash
> [[ -z $* ]] && { echo "Please specify a Public IP or Host/Domain name." && exit 1; }
> # Fetch and Install
> file_url="$(curl -sL https://github.com/openshift/origin/releases/latest | grep "download.*client.*linux-64" | cut -f2 -d\" | sed 's/^/https:\/\/github.com/')"
> [[ -z $file_url ]] && { echo "The URL could not be obtained.  Please try again shortly." && exit 1; }
> file_name="$(echo $file_url | cut -f9 -d/)"
> if [[ ! -f $file_name ]]; then
>         curl -sL $file_url --output $file_name
>         folder_name="$(tar ztf $file_name 2>/dev/null | head -1 | sed s:/.*::)"
>         [[ -z $folder_name ]] && { echo "The archive could not be read.  Please try again." && rm -f $file_name && exit 1; }
>         tar zxf $file_name
>         mv $folder_name/oc $folder_name/kubectl $HOME/.local/bin && rm -r $folder_name
>         chmod 754 $HOME/.local/bin/oc $HOME/.local/bin/kubectl
> fi
> # Docker insecure
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 2 ]] && redirect=">"
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 1 ]] && redirect=">>"
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 0 ]] || { sudo bash -c "cat << 'EOF' $redirect /etc/docker/daemon.json
> {
>         \"insecure-registries\" : [ \"172.30.0.0/16\" ]
> }
> EOF" && sudo systemctl restart docker; }
> # OpenShift Origin up
> [[ ! -d $HOME/.local/etc/openshift ]] && { mkdir -p $HOME/.local/etc/openshift && cd $HOME/.local/etc/openshift; } || { cd $HOME/.local/etc/openshift && oc cluster down; }
> oc cluster up --public-hostname=$1
>
> exit 0
> EOF
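
You can sanity-check the script for shell syntax errors without executing it:

$ bash -n scripts/install.sh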

Run the Ansible playbook after a few minutes, giving the boot script time to finish installing docker/docker-compose (accept the host key by typing yes and hitting enter when prompted):

$ ansible-playbook -i ../hosts-azure openshift.yml

After a short while, log on to the instance:

$ ssh -i ~/.ssh/myweb ubuntu@<The value of static_public_ip that was reported.  One can also use 'terraform output static_public_ip' to print it again.>

To get an overview of the current project with any identified issues:

$ oc status --suggest

Log on as Admin via the command line and switch to the default project:

$ oc login -u system:admin -n default

Log out of the session:

$ oc logout

Please see the Command-Line Walkthrough.

Log out from the host:

$ logout

Log on as Admin via Web Browser (replace <PUBLIC_IP>):

https://<PUBLIC_IP>:8443/console (You will get a Certificate/Site warning due to a mismatch).

Please see the Web Console Walkthrough.

To shut down the OpenShift Origin cluster, destroy the working folder and start anew (you can re-run the playbook normally to reinitialize):

$ ansible-playbook -i ../hosts-azure openshift.yml --tags "destroy"

Tear down what was created by first performing a dry-run to see what will occur:

$ cd ../../terraform/azure/myweb && terraform-az-sp plan -destroy 

Tear down the instance:

$ terraform-az-sp destroy -auto-approve
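
Given the wrapper change made earlier, the Ansible hosts file should have been removed along with the infrastructure; listing it should report that it no longer exists:

$ ls ~/dev/ansible/hosts-azure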

Destroy the Network Watcher Resource Group that was automatically created (if it was not found prior), provided you do not have other virtual networks in the region which are using it (you can use either option below).

If you have not gone through the Azure/Ansible VM creation article:

$ az-login-sp
$ az group delete -n NetworkWatcherRG --yes
$ az logout

If you have gone through the Azure/Ansible VM article, created the playbook and made the unification modification (the below is all on one line):

$ playbook_dir="$HOME/dev/ansible/myweb" && ansible-playbook -i $playbook_dir/hosts $playbook_dir/azure_vm.yml --tags "destroy_networkwatcher" && unset playbook_dir
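
Either way, you can confirm the Resource Group no longer exists (the command should return false); the below uses the az-login-sp helper from the earlier articles:

$ az-login-sp && az group exists -n NetworkWatcherRG && az logout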

References:
how-to-install-openshift-origin-on-ubuntu-18-04

Source:
ansible_openshift
