In this article we will provision a Lightsail host with Docker/Docker Compose on it using Terraform, then install and initialize OpenShift Origin on it using Ansible.
OpenShift is Red Hat’s containerization platform built on Kubernetes. Origin (what we will be working with here) is its open-source implementation.
We will use ‘myweb’ as an example in this article, reusing the base path of ‘dev’ that was previously created, the container-admin group, and ~/.local/bin for the binaries.
Please ensure you have gone through the previous Terraform and Ansible articles, the related preceding articles, and ‘Get started with Lightsail for free’.
-->
Edit the IAM Policy “AllowLightsail”:
AWS UI Console -> Services -> Security, Identity, & Compliance -> IAM -> Policies -> AllowLightsail -> Edit Policy -> JSON ->
Append lightsail:OpenInstancePublicPorts and lightsail:CloseInstancePublicPorts after lightsail:ReleaseStaticIp.
It will look like this:
"lightsail:ReleaseStaticIp", "lightsail:OpenInstancePublicPorts", "lightsail:CloseInstancePublicPorts"
Review Policy -> Save Changes
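For illustration, the amended Action list of the policy might look like the sketch below (the surrounding Version/Statement structure is omitted here and will differ in your actual policy document):

```shell
# Illustrative sketch of the Action entries in the edited "AllowLightsail"
# policy; the file path is a scratch location used only for this demo.
cat << 'EOF' > /tmp/allowlightsail-actions.json
[
  "lightsail:ReleaseStaticIp",
  "lightsail:OpenInstancePublicPorts",
  "lightsail:CloseInstancePublicPorts"
]
EOF
cat /tmp/allowlightsail-actions.json
```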
Go into the dev directory/link located within your home directory:
$ cd ~/dev
Upgrade the AWS CLI on your host:
$ pip3 install awscli --upgrade --user && chmod 754 ~/.local/bin/aws
Grab the latest version of Terraform:
$ wget https://releases.hashicorp.com/terraform/0.12.20/terraform_0.12.20_linux_amd64.zip
Unzip it to ~/.local/bin and set permissions accordingly on it:
$ unzip terraform_0.12.20_linux_amd64.zip -d ~/.local/bin && chmod 754 ~/.local/bin/terraform
Change to the myweb directory inside terraform:
$ cd terraform/myweb
Upgrade our code to the 0.12 syntax (type yes and press Enter when prompted):
$ terraform 0.12upgrade
Change our instance from a micro to a medium so it will have sufficient resources to run OpenShift Origin and its related components:
$ sed -i s:micro_2_0:medium_2_0: lightsail.tf
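The substitution only rewrites the Lightsail bundle identifier (this assumes the lightsail.tf from the earlier article sets a bundle_id of micro_2_0); its effect can be previewed on a sample line without touching the file:

```shell
# Preview the bundle change the sed command makes; the attribute name
# bundle_id is assumed from the previously created lightsail.tf.
echo 'bundle_id = "micro_2_0"' | sed s:micro_2_0:medium_2_0:
# → bundle_id = "medium_2_0"
```

Note that changing the bundle forces Terraform to recreate the instance on the next apply.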
Output the public IP of the provisioned host (along with connection parameters and variables) into a file which we will feed into an Ansible playbook run:
$ cat << 'EOF' >> output.tf
> resource "local_file" "hosts" {
>   content              = "[vps]\n${aws_lightsail_static_ip.myweb.ip_address} ansible_connection=ssh ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/myweb instance=${aws_lightsail_instance.myweb.name}"
>   filename             = "${path.module}/../../ansible/hosts"
>   directory_permission = "0754"
>   file_permission      = "0664"
> }
> EOF
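Once applied, Terraform renders an INI-style inventory at ansible/hosts. With placeholder values (192.0.2.10 is a documentation IP and site_myweb an example instance name, not your real values), the generated file would look roughly like this:

```shell
# Mock of the inventory the local_file resource writes; values here are
# placeholders, the real file carries the provisioned static IP.
cat << 'EOF' > /tmp/hosts.sample
[vps]
192.0.2.10 ansible_connection=ssh ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/myweb instance=site_myweb
EOF
cat /tmp/hosts.sample
```

The per-host ansible_connection=ssh entry is what lets the playbook later run remote tasks even though the play itself declares connection: local.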
Amend an item from the user_data script:
$ sed -i 's:sudo apt-key add -:apt-key add -:' scripts/install.sh
Initialize the directory/refresh module(s):
$ terraform init
Run a dry-run to see what will occur:
$ terraform plan
Provision:
$ terraform apply -auto-approve
Create a work folder for an Ansible playbook:
$ cd ../../ansible
$ mkdir -p openshift/scripts && cd openshift
Create an Ansible playbook which will install/initialize OpenShift Origin on our provisioned host:
$ cat << 'EOF' > openshift.yml
> # Install, initialize OpenShift Origin and create a destroy routine for it
> ---
> - hosts: vps
>   connection: local
>
>   vars:
>     openshift_directory: /home/ubuntu/.local/etc/openshift
>
>   tasks:
>     - name: Discover Services
>       service_facts:
>
>     - name: Check if openshift directory exists
>       stat:
>         path: "{{ openshift_directory }}"
>       register: openshift_dir
>
>     - name: Open Firewall Ports
>       delegate_to: localhost
>       command: bash -c './scripts/firewall.sh open {{ hostvars[groups['vps'][0]].instance }}'
>       when:
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>
>     - name: Copy and Run install
>       environment:
>         PATH: "{{ ansible_env.PATH }}:{{ openshift_directory }}/../../bin"
>       args:
>         executable: /bin/bash
>       script: "./scripts/install.sh {{ ansible_ssh_host }}"
>       when:
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>
>     - debug: msg="Please install docker to proceed."
>       when: "'docker' not in services"
>
>     - debug: msg="Install script has already been completed. Run this playbook with the destroy tag, then run once again normally to re-initialize openshift."
>       when: openshift_dir.stat.exists == True
>
>     - name: Destroy
>       become: yes
>       environment:
>         PATH: "{{ ansible_env.PATH }}:{{ openshift_directory }}/../../bin"
>       args:
>         executable: /bin/bash
>       shell: "cd {{ openshift_directory }} && oc cluster down && cd ../ && rm -r {{ openshift_directory }}/../../../.kube {{ openshift_directory }}"
>       tags: [ 'never', 'destroy' ]
>
>     - name: Close Firewall Ports
>       delegate_to: localhost
>       command: bash -c './scripts/firewall.sh close {{ hostvars[groups['vps'][0]].instance }}'
>       tags: [ 'never', 'destroy' ]
> EOF
Create a shell script which will pull the latest release of the client tools from GitHub, place the needed binaries in ~/.local/bin, set the insecure registry on Docker, and initialize the cluster:
$ cat << 'EOF' > scripts/install.sh
> #!/bin/bash
> [[ -z $* ]] && { echo "Please specify a Public IP or Host/Domain name." && exit 1; }
> # Fetch and Install
> file_url="$(curl -sL https://github.com/openshift/origin/releases/latest | grep "download.*client.*linux-64" | cut -f2 -d\" | sed 's/^/https:\/\/github.com/')"
> [[ -z $file_url ]] && { echo "The URL could not be obtained. Please try again shortly." && exit 1; }
> file_name="$(echo $file_url | cut -f9 -d/)"
> if [[ ! -f $file_name ]]; then
>   curl -sL $file_url --output $file_name
>   folder_name="$(tar ztf $file_name 2>/dev/null | head -1 | sed s:/.*::)"
>   [[ -z $folder_name ]] && { echo "The archive could not be read. Please try again." && rm -f $file_name && exit 1; }
>   tar zxf $file_name
>   mv $folder_name/oc $folder_name/kubectl $HOME/.local/bin && rm -r $folder_name
>   chmod 754 $HOME/.local/bin/oc $HOME/.local/bin/kubectl
> fi
> # Docker insecure
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 2 ]] && redirect=">"
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 1 ]] && redirect=">>"
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 0 ]] || { sudo bash -c "cat << 'EOF' $redirect /etc/docker/daemon.json
> {
>   \"insecure-registries\" : [ \"172.30.0.0/16\" ]
> }
> EOF" && sudo systemctl restart docker; }
> # OpenShift Origin up
> [[ ! -d $HOME/.local/etc/openshift ]] && { mkdir -p $HOME/.local/etc/openshift && cd $HOME/.local/etc/openshift; } || { cd $HOME/.local/etc/openshift && oc cluster down; }
> oc cluster up --public-hostname=$1
>
> exit 0
> EOF
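The grep exit-code checks in the script select a redirect operator based on the state of /etc/docker/daemon.json: exit 2 means the file is missing (create it with >), exit 1 means it exists but lacks the entry (append with >>), and exit 0 means the registry is already configured (skip). The same logic can be exercised against a scratch file:

```shell
# Demonstrate the redirect selection from install.sh using a scratch file
# (/tmp/daemon.json.demo) instead of the real /etc/docker/daemon.json.
daemon_json=/tmp/daemon.json.demo
rm -f $daemon_json
# File absent -> grep exits 2 -> create with ">"
[[ $(grep insecure $daemon_json &>/dev/null; echo $?) -eq 2 ]] && echo "create (>)"
echo '{}' > $daemon_json
# File present without the entry -> grep exits 1 -> append with ">>"
[[ $(grep insecure $daemon_json &>/dev/null; echo $?) -eq 1 ]] && echo "append (>>)"
echo '"insecure-registries"' >> $daemon_json
# Entry present -> grep exits 0 -> leave the file as-is
[[ $(grep insecure $daemon_json &>/dev/null; echo $?) -eq 0 ]] && echo "skip"
```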
The Lightsail firewall functionality is still being implemented in Terraform and is not available in Ansible. In the interim, we will create a shell script that opens and closes the ports needed by OpenShift Origin (using the AWS CLI). This script will be run locally by the playbook during the create and destroy routines:
$ cat << 'EOF' > scripts/firewall.sh && chmod 754 scripts/firewall.sh
> #!/bin/bash
> #
> openshift_ports="53/UDP 443/TCP 1936/TCP 4001/TCP 7001/TCP 8053/UDP 8443/TCP 10250_10259/TCP"
> #
> [[ -z $* || $(echo $* | xargs -n1 | wc -l) -ne 2 || ! ($* =~ $(echo '\<open\>') || $* =~ $(echo '\<close\>')) ]] && { echo "Please pass in the desired action [ open, close ] and instance [ site_myweb ]." && exit 2; }
> #
> instance="$(echo $* | xargs -n1 | sed '/\<open\>/d; /\<close\>/d')"
> [[ -z $instance ]] && { echo "Please double-check the passed in instance." && exit 1; }
> action="$(echo $* | xargs -n1 | grep -v $instance)"
> #
> for port in $openshift_ports; do
>   aws lightsail $action-instance-public-ports --instance-name $instance --port-info fromPort=$(echo $port | cut -f1 -d_ | cut -f1 -d/),protocol=$(echo $port | cut -f2 -d/),toPort=$(echo $port | cut -f2 -d_ | cut -f1 -d/)
> done
> #
>
> exit 0
> EOF
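Each entry in openshift_ports encodes a fromPort, an optional toPort after an underscore, and a protocol after the slash. The cut pipeline that feeds the AWS CLI can be checked in isolation with two representative specs:

```shell
# Parse port specs exactly the way firewall.sh builds its --port-info
# argument; a single port (53/UDP) yields fromPort == toPort, while a
# range (10250_10259/TCP) splits on the underscore.
for port in 53/UDP 10250_10259/TCP; do
  from=$(echo $port | cut -f1 -d_ | cut -f1 -d/)
  proto=$(echo $port | cut -f2 -d/)
  to=$(echo $port | cut -f2 -d_ | cut -f1 -d/)
  echo "fromPort=$from,protocol=$proto,toPort=$to"
done
# → fromPort=53,protocol=UDP,toPort=53
# → fromPort=10250,protocol=TCP,toPort=10259
```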
Run the Ansible playbook after a few minutes (accept the host key by typing yes and hitting enter when prompted):
$ ansible-playbook -i ../hosts openshift.yml
After a short while, log on to the instance:
$ ssh -i ~/.ssh/myweb ubuntu@<The value of static_public_ip that was reported. One can also use 'terraform output static_public_ip' to print it again.>
To get an overview of the current project with any identified issues:
$ oc status --suggest
Log on as admin via the command line and switch to the default project:
$ oc login -u system:admin -n default
Log out of the session:
$ oc logout
Please see the Command-Line Walkthrough.
Log out of the host:
$ logout
Log on as Admin via Web Browser (replace <PUBLIC_IP>):
https://<PUBLIC_IP>:8443/console (You will get a Certificate/Site warning due to a mismatch).
Please see the Web Console Walkthrough.
To shut down the OpenShift Origin cluster, destroy the working folder and start anew (you can re-run the playbook normally to reinitialize):
$ ansible-playbook -i ../hosts openshift.yml --tags "destroy"
Tear down what was created by first performing a dry-run to see what will occur:
$ cd ../../terraform/myweb && terraform plan -destroy
Tear down the instance:
$ terraform destroy -auto-approve
<--
References:
how-to-install-openshift-origin-on-ubuntu-18-04
Source:
ansible_openshift