
Azure/Terraform/Ansible/OpenShift – Provision a Virtual Machine instance and further configure it using Infrastructure as Code

Note: This article is adapted from the previous article, which uses AWS, and has been modified for Azure. It also unifies the playbook inside Ansible, separates the work areas by vendor inside Terraform, and makes some other miscellaneous amendments.

In this article we will provision an Azure host with docker/docker-compose using Terraform and install/initialize OpenShift Origin on it using Ansible.

OpenShift is Red Hat’s containerization platform which utilizes Kubernetes. Origin (what we will be working with here) is the open-source implementation of it.

We will use ‘myweb’ as an example in this article, using the same base path of ‘dev’ that was previously created, and the container-admin Service Principal.

Please ensure you have gone through the previous Terraform and Ansible articles, the related preceding articles, and ‘Create your Azure free account today’.

Go into the dev directory/link located within your home directory:

$ cd ~/dev

–>

Create an aws directory inside the Terraform work area and move myweb into it (you can disregard this if you haven’t gone through the Terraform for AWS Lightsail article):

$ mkdir terraform/aws && mv terraform/myweb terraform/aws

While we are here, let us modify the output location of the hosts file referenced within myweb for Terraform on Lightsail (you can disregard this if you haven’t gone through the Terraform for AWS Lightsail article):

$ sed -i s:\"\${path.module}/../../ansible/hosts\":pathexpand\(\"~/dev/ansible/hosts-aws\"\): terraform/aws/myweb/output.tf
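
A quick check that the filename reference now points at the per-vendor inventory location (only applicable if you went through the AWS Lightsail article; spacing may differ slightly in your file):

$ grep filename terraform/aws/myweb/output.tf
  filename             = pathexpand("~/dev/ansible/hosts-aws")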

Merge the Azure folder contents in the Ansible work area back into the root path and remove the folder.

Note: If you went through the Ansible for AWS Lightsail article, you do not need to make the directory or copy the hosts file and scripts folder (you can disregard the .yml move if you haven’t gone through it):

$ mkdir ansible/myweb
$ cp -p ansible/azure/myweb/hosts ansible/myweb
$ cp -pr ansible/azure/myweb/rbac ansible/myweb
$ cp -pr ansible/azure/myweb/scripts ansible/myweb
$ cp -p ansible/azure/myweb/vm.yml ansible/myweb/azure_vm.yml
$ mv ansible/myweb/lightsail.yml ansible/myweb/aws_lightsail.yml
$ rm -r ansible/azure

<–

Change to the myweb directory inside terraform/azure:

$ cd terraform/azure/myweb

Let us make two changes to the script/code:

  • remove the template file and change the sourcing of the initialization/boot script to in-line
  • change our instance from a Basic_A1 to a Standard_B2S size so it will have sufficient resources to run OpenShift Origin and related components
$ rm install.tf
$ sed -i 's:data.template_file.init_script.rendered:file("scripts/install.sh"):; s:Basic_A1:Standard_B2S:' vm.tf
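
If you like, confirm the substitutions took effect (only the relevant lines are shown; exact spacing may differ):

$ grep -E 'vm_size|custom_data' vm.tf
  vm_size               = "Standard_B2S"
      custom_data    = file("scripts/install.sh")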

Output the Public IP of the provisioned host (along with connection parameters and variables) into a file which we will feed into an Ansible playbook run:

$ cat << 'EOF' >> output.tf
>
> resource "local_file" "hosts" {
>   content              = "[vps]\n${azurerm_public_ip.external.ip_address} ansible_connection=ssh ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/${var.prefix} instance=${azurerm_virtual_machine.myweb.name} instance_rg=${azurerm_resource_group.myweb.name} instance_nsg=${azurerm_network_security_group.myweb.name}"
>   filename             = pathexpand("~/dev/ansible/hosts-azure")
>   directory_permission = "0754"
>   file_permission      = "0664"
> }
> EOF  

Terraform, when run in a sub-shell like this, doesn’t delete the local hosts file (used for Ansible) on destroy, so let’s remove it ourselves when a destroy is performed:

$ sed -i "s:terraform \$\*):terraform \$\* \&\& { [[ \$* =~ ^(destroy) \&\& \$? -eq 0 ]] \&\& rm -f \$HOME/dev/ansible/hosts-azure; }):" ~/.bashrc
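
To confirm the substitution landed where intended (the full function line is long; we just want to see the cleanup clause appended after terraform $* inside terraform-az-sp):

$ grep hosts-azure ~/.bashrc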

Source it in:

$ . ~/.bashrc

Initialize the directory/refresh module(s):

$ terraform init

Run a dry-run to see what will occur:

$ terraform-az-sp plan

Provision:

$ terraform-az-sp apply -auto-approve
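
Once the apply completes, the local_file resource should have written the Ansible inventory. A quick look (the IP and resource names will reflect your deployment; the example below assumes the default prefix of myweb):

$ cat ~/dev/ansible/hosts-azure
[vps]
<PUBLIC_IP> ansible_connection=ssh ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/myweb instance=myweb-vm instance_rg=myweb-rg instance_nsg=myweb-nsg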

Create a work folder for an Ansible playbook:

$ cd ../../../ansible
$ mkdir -p openshift/scripts && cd openshift

Create an Ansible playbook which will install/initialize OpenShift Origin on our provisioned host.

Note: This is a unified playbook which accommodates our previous implementation against AWS Lightsail and uses the extra variable(s) in the hosts file to conditionally select the appropriate tasks:

$ cat << 'EOF' > openshift.yml 
> # Install, initialize OpenShift Origin and create a destroy routine for it
> # This is a unified setup against AWS Lightsail and Microsoft Azure VM
> ---
> - hosts: vps
>   connection: local
>
>   vars:
>     network_security_group: "{{ hostvars[groups['vps'][0]].instance_nsg }}"
>     instance: "{{ hostvars[groups['vps'][0]].instance }}"
>     resource_group: "{{ hostvars[groups['vps'][0]].instance_rg }}"
>     openshift_directory: /home/ubuntu/.local/etc/openshift
>     ansible_python_interpreter: /usr/bin/python3
>
>   tasks:
>     - name: Discover Services
>       service_facts:
>
>     - name: Check if openshift directory exists
>       stat:
>         path: "{{ openshift_directory }}"
>       register: openshift_dir
>       tags: [ 'destroy' ]
>
>     - name: Open Firewall Ports
>       delegate_to: localhost
>       args:
>         executable: /bin/bash
>       script: "./scripts/firewall.sh open {{ instance }}"
>       when:
>         - "'instance_nsg' not in hostvars[groups['vps'][0]]" 
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>
>     - name: Add Network Security Group rules
>       delegate_to: localhost
>       azure_rm_securitygroup:
>         name: "{{ network_security_group }}"
>         resource_group: "{{ resource_group }}"
>         rules:
>           - name: OpenShift-Tcp
>             priority: 1002
>             direction: Inbound
>             access: Allow
>             protocol: Tcp
>             source_port_range: "*"
>             destination_port_range:
>               - 80
>               - 443
>               - 1936
>               - 4001
>               - 7001
>               - 8443
>               - 10250-10259
>             source_address_prefix: "*"
>             destination_address_prefix: "*"
>           - name: OpenShift-Udp
>             priority: 1003
>             direction: Inbound
>             access: Allow
>             protocol: Udp
>             source_port_range: "*"
>             destination_port_range:
>               - 53
>               - 8053
>             source_address_prefix: "*"
>             destination_address_prefix: "*"
>         state: present
>       when:
>         - "'instance_nsg' in hostvars[groups['vps'][0]]"
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>
>     - name: Copy and Run install
>       environment:
>         PATH: "{{ ansible_env.PATH}}:{{ openshift_directory }}/../../bin"
>       args:
>         executable: /bin/bash
>       script: "./scripts/install.sh {{ ansible_ssh_host }}"
>       when:
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>
>     - debug: msg="Please install docker to proceed."
>       when: "'docker' not in services"
>
>     - debug: msg="Install script has already been completed.  Run this playbook with the destroy tag, then run once again normally to re-initialize openshift."
>       when: openshift_dir.stat.exists == True
>
>     - name: Destroy
>       become: yes
>       environment:
>         PATH: "{{ ansible_env.PATH }}:{{ openshift_directory }}/../../bin"
>       args:
>         executable: /bin/bash
>       shell:
>         "cd {{ openshift_directory }} && oc cluster down && cd ../ && rm -rf {{ openshift_directory }}/../../../.kube {{ openshift_directory }}"
>       when: openshift_dir.stat.exists == True
>       tags: [ 'never', 'destroy' ]
>
>     - name: Close Firewall Ports
>       delegate_to: localhost
>       args:
>         executable: /bin/bash
>       script: "./scripts/firewall.sh close {{ instance }}"
>       when: "'instance_nsg' not in hostvars[groups['vps'][0]]"
>       tags: [ 'never', 'destroy' ]
>
>     - name: Delete Network Security Group rules
>       delegate_to: localhost
>       command:
>         bash -ic "az-login-sp && (az network nsg rule delete -g {{ resource_group }} --nsg-name {{ network_security_group }} -n {{ item }})"
>       with_items:
>         - OpenShift-Tcp
>         - OpenShift-Udp
>       when: "'instance_nsg' in hostvars[groups['vps'][0]]"
>       tags: [ 'never', 'destroy' ]
> EOF

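Before running the playbook against the host, you can optionally sanity-check it and list the tasks that would run (both are standard ansible-playbook flags):

$ ansible-playbook -i ../hosts-azure openshift.yml --syntax-check
$ ansible-playbook -i ../hosts-azure openshift.yml --list-tasks
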
Create a shell script which will pull the latest release of client tools from GitHub, place the needed binaries in ~/.local/bin, set insecure registry on Docker and initialize:

$ cat << 'EOF' > scripts/install.sh
> #!/bin/bash
> [[ -z $* ]] && { echo "Please specify a Public IP or Host/Domain name." && exit 1; }
> # Fetch and Install
> file_url="$(curl -sL https://github.com/openshift/origin/releases/latest | grep "download.*client.*linux-64" | cut -f2 -d\" | sed 's/^/https:\/\/github.com/')"
> [[ -z $file_url ]] && { echo "The URL could not be obtained.  Please try again shortly." && exit 1; }
> file_name="$(echo $file_url | cut -f9 -d/)"
> if [[ ! -f $file_name ]]; then
>         curl -sL $file_url --output $file_name
>         folder_name="$(tar ztf $file_name 2>/dev/null | head -1 | sed s:/.*::)"
>         [[ -z $folder_name ]] && { echo "The archive could not be read.  Please try again." && rm -f $file_name && exit 1; }
>         tar zxf $file_name
>         mv $folder_name/oc $folder_name/kubectl $HOME/.local/bin && rm -r $folder_name
>         chmod 754 $HOME/.local/bin/oc $HOME/.local/bin/kubectl
> fi
> # Docker insecure
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 2 ]] && redirect=">"
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 1 ]] && redirect=">>"
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 0 ]] || { sudo bash -c "cat << 'EOF' $redirect /etc/docker/daemon.json
> {
>         \"insecure-registries\" : [ \"172.30.0.0/16\" ]
> }
> EOF" && sudo systemctl restart docker; }
> # OpenShift Origin up
> [[ ! -d $HOME/.local/etc/openshift ]] && { mkdir -p $HOME/.local/etc/openshift && cd $HOME/.local/etc/openshift; } || { cd $HOME/.local/etc/openshift && oc cluster down; }
> oc cluster up --public-hostname=$1
>
> exit 0
> EOF 

Run the Ansible playbook after a few minutes (accept the host key by typing yes and hitting enter when prompted):

$ ansible-playbook -i ../hosts-azure openshift.yml

After a short while, log on to the instance:

$ ssh -i ~/.ssh/myweb ubuntu@<The value of static_public_ip that was reported.  One can also use 'terraform output static_public_ip' to print it again.>

To get an overview of the current project with any identified issues:

$ oc status --suggest

Log on as Admin via CMD Line and switch to the default project:

$ oc login -u system:admin -n default

Logout of the session:

$ oc logout

Please see the Command-Line Walkthrough.

Logout from the host:

$ logout

Log on as Admin via Web Browser (replace <PUBLIC_IP>):

https://<PUBLIC_IP>:8443/console (You will get a Certificate/Site warning due to a mismatch).

Please see the Web Console Walkthrough.

To shut down the OpenShift Origin cluster, destroy the working folder and start anew (you can re-run the playbook normally to reinitialize):

$ ansible-playbook -i ../hosts-azure openshift.yml --tags "destroy"

Tear down what was created by first performing a dry-run to see what will occur:

$ cd ../../terraform/azure/myweb && terraform-az-sp plan -destroy 

Tear down the instance:

$ terraform-az-sp destroy -auto-approve

Destroy the Network Watcher Resource Group that was automatically created (if it did not exist prior), provided you do not have other virtual networks in the region using it (you can use either option below).

If you have not gone through the Azure/Ansible VM creation article:

$ az-login-sp
$ az group delete -n NetworkWatcherRG --yes
$ az logout

If you have gone through the Azure/Ansible VM article, created the playbook and made the unification modification (the below is all on one line):

$ playbook_dir="$HOME/dev/ansible/myweb" && ansible-playbook -i $playbook_dir/hosts $playbook_dir/azure_vm.yml --tags "destroy_networkwatcher" && unset playbook_dir

<–

References:
how-to-install-openshift-origin-on-ubuntu-18-04

Source:
ansible_openshift

Firmware – Asuswrt-Merlin (NG) – 384.16_alpha1 – RT-AC68

This is Merlin’s Asuswrt (NG) 384.16_alpha1 for the ASUS RT-AC68U/R.

-sync latest changes from RMerlin (master).

—–

Download (ASUS RT-AC68U/R):
RT-AC68U_384.16_alpha1.trx

—–

Source:
https://github.com/pershoot/asuswrt-merlin.ng
https://github.com/RMerl/asuswrt-merlin.ng

——–

Installation instructions:

-Flash the .trx through the UI
-After it is completed and you are returned back to the UI, wait a short while (~30 seconds) then power cycle the router (with the on/off button).

Azure/Ansible – Provision a Virtual Machine instance using Infrastructure as Code

Note: This article is adapted from the previous article, which uses Terraform, and has been modified for Ansible.

In this article we will use Ansible (Infrastructure as Code) to swiftly bring up a Microsoft Azure Virtual Machine instance in East US on a static IP, add a DNS Zone for the site in mention and install docker/docker-compose on it.

We will use ‘myweb’ as an example in this article, using the same base path of ‘dev’ that was previously created and the container-admin Service Principal.

Please use ‘Create your Azure free account today’ prior to commencing with this article.

–>
Go into the dev directory/link located within your home directory:

$ cd ~/dev

Upgrade the Azure CLI on your host:

$ sudo apt update && sudo apt -y upgrade azure-cli

Update PIP:

$ python3 -m pip install --upgrade --user pip

If there was an update, then forget remembered location references in the shell environment:

$ hash -r pip 

Install/Upgrade Ansible:

$ pip3 install ansible --upgrade --user && chmod 754 ~/.local/bin/ansible ~/.local/bin/ansible-playbook

Install the Ansible Azure modules (this may take a while):

$ pip3 install 'ansible[azure]' --upgrade --user

Modify the profile and the key/variable strings in the previously created Azure credentials file:

$ sed -i 's/\[container-admin/\[default/; s/application_id/client_id/; s/client_secret/secret/; s/directory_id/tenant/' ~/.azure/credentials

Note: The above change will break the previously created terraform-az-sp function. If you are also using Terraform, then please do this (user’s startup):

$ sed -i 's:application_id/arm_:client_id/arm_:; s:client_secret/arm_:secret/arm_:; s:directory_id/arm_:tenant/arm_:' ~/.bashrc

Remove the subscription_id and modify the keys/variables in our previously created az-login-sp function (user’s startup).

If you have not gone through the Azure/Terraform article:

$ sed -i "s:\$HOME/.azure/credentials | xargs):\$HOME/.azure/credentials | sed '/subscription_id/d; s/client_id/application_id/; s/secret/client_secret/; s/tenant/directory_id/' | xargs):" ~/.bashrc

If you have gone through the Azure/Terraform article:

$ sed -i "s:\$HOME/.azure/credentials | sed '/subscription_id/d':\$HOME/.azure/credentials | sed '/subscription_id/d; s/client_id/application_id/; s/secret/client_secret/; s/tenant/directory_id/':" ~/.bashrc

Source it in:

$ . ~/.bashrc

Add the <SUBSCRIPTION ID> (UI Console -> Azure Active Directory -> Search for (top left): Subscriptions -> Click your subscription -> Overview) into the Azure credentials file (replace <SUBSCRIPTION ID>).

Note: This can be omitted if you have gone through the Azure/Terraform article:

$ echo "subscription_id=<SUBSCRIPTION ID>" >> ~/.azure/credentials 
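
At this point ~/.azure/credentials should look roughly like this (values redacted; the key names reflect the rename performed above):

[default]
client_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
secret=xxxxxxxxxxxxxxxxxxxxxxxx
tenant=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx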

In the Subscription, add roles to container-admin:
Access control (IAM) -> Role assignments ->

Add (Add role assignment) -> Role: DNS Zone Contributor -> Assign access to: Azure AD user, group, or service principal -> Select: container-admin -> Save

Add (Add role assignment) -> Role: Network Contributor -> Assign access to: Azure AD user, group, or service principal -> Select: container-admin -> Save

Add (Add role assignment) -> Role: Virtual Machine Contributor -> Assign access to: Azure AD user, group, or service principal -> Select: container-admin -> Save

Create a work folder and change into it:

$ mkdir -p ansible/azure/myweb/scripts ansible/azure/myweb/rbac && cd ansible/azure/myweb

Create a custom Role-Based Access Control (RBAC) role definition for Resource Groups so container-admin can read, write and delete Resource Groups in the subscription (replace <SUBSCRIPTION ID>):

$ cat << 'EOF' > rbac/rg-custom.jsn
> {
>    "Name": "Resource Group Allowance",
>    "IsCustom": true,
>    "Description": "Can read, write and delete Resource Groups.",
>    "Actions": [
>       "Microsoft.Resources/subscriptions/resourceGroups/read",
>       "Microsoft.Resources/subscriptions/resourceGroups/write",
>       "Microsoft.Resources/subscriptions/resourceGroups/delete"
>    ],
>    "NotActions": [],
>    "AssignableScopes": [
>       "/subscriptions/<SUBSCRIPTION ID>"
>    ]
> }
> EOF

Authenticate to Azure using the CLI with the same Administrative credentials you use in the UI (a browser window will pop up requesting credentials):

$ az login

Create the Role Definition:

$ az role definition create --role-definition rbac/rg-custom.jsn

Add the role to container-admin:

$ az role assignment create --role "Resource Group Allowance" --assignee $(grep client_id ~/.azure/credentials | cut -f2 -d=) --subscription $(grep subscription_id ~/.azure/credentials | cut -f2 -d=)
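
Optionally, verify the assignments for container-admin are in place (az role assignment list is a standard Azure CLI command; the custom role should appear alongside the built-in roles added earlier):

$ az role assignment list --assignee $(grep client_id ~/.azure/credentials | cut -f2 -d=) --subscription $(grep subscription_id ~/.azure/credentials | cut -f2 -d=) -o table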

Logout of the Azure CLI session:

$ az logout

Generate an SSH Key Pair (no password) and restrict permissions on it:

$ ssh-keygen -q -t rsa -b 2048 -N '' -f ~/.ssh/myweb && chmod 400 ~/.ssh/myweb

Create a hosts file and specify localhost:

$ cat << 'EOF' > hosts
> [local]
> localhost
> EOF

The following is performed with this Playbook:

  • create a resource group where all of our resources will be put in (within East US)
  • create an Azure DNS Zone of myweb.com (no A records will be created)
  • create a virtual network of 10.0.0.0/16
  • add a subnet of 10.0.1.0/24 within the VNET
  • allocate a static Public IP
  • create a Network Security Group and add a Security rule for allowing SSH (port 22) Inbound
  • create a Network Interface with a Dynamic Private IP
  • create a Basic_A1 instance based on Ubuntu 18.04 with a Standard SSD, password authentication turned off, our public key added as authorized, and reference an external file for custom_data (an initialization script run on Virtual Machine boot)
  • tag all resources
$ cat << 'EOF' > vm.yml
> # Create an Azure Virtual Machine instance and add a way to destroy it
> ---
> - hosts: local
>   connection: local
>
>   vars:
>     region: EastUS
>     prefix: myweb
>     subnet_name: internal
>     public_ip_name: external
>
>   tasks:
>   - name: Create a resource group
>     azure_rm_resourcegroup:
>       name: "{{ prefix }}-rg"
>       location: "{{ region }}"
>       state: present
>       tags:
>           Site: "{{ prefix }}.com"
>
>   - name: Create a DNS Zone
>     azure_rm_dnszone:
>       name: "{{ prefix }}.com"
>       resource_group: "{{ prefix }}-rg"
>       state: present
>       tags:
>           Site: "{{ prefix }}.com"
>
>   - name: Create a Virtual Network
>     azure_rm_virtualnetwork:
>       name: "{{ prefix }}-net"
>       address_prefixes: "10.0.0.0/16"
>       location: "{{ region }}"
>       resource_group: "{{ prefix }}-rg"
>       state: present
>       tags:
>           Site: "{{ prefix }}.com"
>
>   - name: Add a Subnet
>     azure_rm_subnet:
>       name: "{{ subnet_name }}"
>       resource_group: "{{ prefix }}-rg"
>       virtual_network: "{{ prefix }}-net"
>       address_prefix: "10.0.1.0/24"
>       state: present
>
>   - name: Allocate a Static Public IP
>     azure_rm_publicipaddress:
>       name: "{{ public_ip_name }}"
>       location: "{{ region }}"
>       resource_group: "{{ prefix }}-rg"
>       allocation_method: Static
>       state: present
>       tags:
>           Site: "{{ prefix }}.com"
>     register: static_public_ip
>
>   - name: Create a Network Security Group and allow inbound port(s)
>     azure_rm_securitygroup:
>       name: "{{ prefix }}-nsg"
>       location: "{{ region }}"
>       resource_group: "{{ prefix }}-rg"
>       rules:
>         - name: SSH
>           priority: 1001
>           direction: Inbound
>           access: Allow
>           protocol: Tcp
>           source_port_range: "*"
>           destination_port_range: 22
>           source_address_prefix: "*"
>           destination_address_prefix: "*"
>       state: present
>       tags:
>           Site: "{{ prefix }}.com"
> 
>   - name: Create a Network Interface with a Dynamic Private IP
>     azure_rm_networkinterface:
>       name: "{{ prefix }}-nic"
>       location: "{{ region }}"
>       resource_group: "{{ prefix }}-rg"
>       security_group: "{{ prefix }}-nsg"
>       virtual_network: "{{ prefix }}-net"
>       subnet_name: "{{ subnet_name }}"
>       ip_configurations:
>         - name: "{{ prefix }}-nic_conf"
>           private_ip_allocation_method: Dynamic
>           public_ip_address_name: "{{ public_ip_name }}"
>           primary: True
>       state: present
>       tags:
>           Site: "{{ prefix }}.com"
>
>   - name: Create an Ubuntu Virtual Machine with key based access and run a script on boot; use a Standard SSD
>     azure_rm_virtualmachine:
>       name: "{{ prefix }}-vm"
>       location: "{{ region }}"
>       resource_group: "{{ prefix }}-rg"
>       network_interfaces: "{{ prefix }}-nic"
>       vm_size: Basic_A1
>       image:
>         publisher: Canonical
>         offer: UbuntuServer
>         sku: '18.04-LTS'
>         version: latest
>       os_type: Linux
>       os_disk_name: "{{ prefix }}-disk"
>       os_disk_caching: ReadWrite
>       managed_disk_type: StandardSSD_LRS
>       short_hostname: "{{ prefix }}"
>       admin_username: ubuntu
>       custom_data: "{{ lookup('file', './scripts/install.sh') }}"
>       ssh_password_enabled: false
>       ssh_public_keys:
>             - path: /home/ubuntu/.ssh/authorized_keys
>               key_data: "{{ lookup('file', '~/.ssh/{{ prefix }}.pub') }}"
>       state: present
>       tags:
>           'Site': "{{ prefix }}.com"
>
>   - debug: msg="Public (static) IP is {{ static_public_ip.state.ip_address }} for {{ azure_vm.name }}"
>     when: static_public_ip.state.ip_address is defined
>
>   - debug: msg="Run this playbook for {{ azure_vm.name }} shortly to list the Public (static) IP."
>     when: static_public_ip.state.ip_address is not defined
>
>   - name: Destroy a Resource Group and all resources that fall under it
>     azure_rm_resourcegroup:
>       name: "{{ prefix }}-rg"
>       force_delete_nonempty: yes
>       state: absent
>     tags: [ 'never', 'destroy' ]
>
>   - name: Destroy the Network Watcher Resource Group and all resources that fall under it
>     azure_rm_resourcegroup:
>       name: "NetworkWatcherRG"
>       force_delete_nonempty: yes
>       state: absent
>     tags: [ 'never', 'destroy_networkwatcher' ]
> EOF
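
To see which tags are available in the playbook (the destroy routines only run when their tag is explicitly requested, thanks to the never tag):

$ ansible-playbook -i hosts vm.yml --list-tags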

Create the shell script for custom_data:

$ cat << 'EOF' > scripts/install.sh
> #!/bin/bash
>
> MY_HOME="/home/ubuntu"
> export DEBIAN_FRONTEND=noninteractive
>
> # Install prereqs
> apt update
> apt install -y python3-pip apt-transport-https ca-certificates curl software-properties-common
> # Install docker
> curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
> add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
> apt update
> apt install -y docker-ce
> # Install docker-compose
> su ubuntu -c "mkdir -p $MY_HOME/.local/bin"
> su ubuntu -c "pip3 install docker-compose --upgrade --user && chmod 754 $MY_HOME/.local/bin/docker-compose"
> usermod -aG docker ubuntu
> # Add PATH
> printf "\nexport PATH=\$PATH:$MY_HOME/.local/bin\n" >> $MY_HOME/.bashrc
>
> exit 0
> EOF

Run the playbook:

$ ansible-playbook -i hosts vm.yml

Log on to the instance after a short while:

$ ssh -i ~/.ssh/myweb ubuntu@<The value of static_public_ip that was reported. One can also re-run the playbook to print it again.>

Type yes and hit enter to accept.

On the host (a short while is needed for the boot initialization script to complete):

$ docker --version
$ docker-compose --version
$ logout

Tear down the instance:

$ ansible-playbook -i hosts vm.yml --tags "destroy"

Destroy the Network Watcher Resource Group that was automatically created (if it did not exist prior), provided you do not have other virtual networks in the region using it:

$  ansible-playbook -i hosts vm.yml --tags "destroy_networkwatcher" 

<–

References:

Source:
ansible_myweb

Azure/Terraform – Provision a Virtual Machine instance using Infrastructure as Code

In this article we will use Terraform (Infrastructure as Code) to swiftly bring up a Microsoft Azure Virtual Machine instance in East US on a static IP, add a DNS Zone for the site in mention and install docker/docker-compose on it.

We will use ‘myweb’ as an example in this article, using the same base path of ‘dev’ that was previously created and the container-admin Service Principal.

Please use ‘Create your Azure free account today’ prior to commencing with this article.

–>
Go into the dev directory/link located within your home directory:

$ cd ~/dev

Upgrade the Azure CLI on your host:

$ sudo apt update && sudo apt -y upgrade azure-cli

Grab Terraform:

$ wget https://releases.hashicorp.com/terraform/0.12.20/terraform_0.12.20_linux_amd64.zip

Install Unzip if you do not have it installed:

$ sudo apt -y install unzip

Unzip it to ~/.local/bin and set permissions accordingly on it:

$ unzip terraform_0.12.20_linux_amd64.zip -d ~/.local/bin && chmod 754 ~/.local/bin/terraform
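
Confirm the binary is in place and on your PATH:

$ terraform version
Terraform v0.12.20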

Add this function into your user’s startup to parse the previously created credentials file and pass pertinent login information as a servicePrincipal to Terraform, in a sub-shell:

$ cat << 'EOF' >> ~/.bashrc
>
> function terraform-az-sp() {
>         (export $(grep -v '^\[' $HOME/.azure/credentials | sed 's/application_id/arm_client_id/; s/client_secret/arm_client_secret/; s/directory_id/arm_tenant_id/; s/subscription_id/arm_subscription_id/; s/^[^=]*/\U&\E/' | xargs) && terraform $*)
> }
> EOF

Remove the subscription_id in our previously created az-login-sp function (user’s startup):

$ sed -i "s:\$HOME/.azure/credentials | xargs):\$HOME/.azure/credentials | sed '/subscription_id/d' | xargs):" ~/.bashrc

Source it in:

$ . ~/.bashrc
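
The function should now be available in the current shell (type is a bash builtin that will print it back):

$ type terraform-az-sp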

Add the <SUBSCRIPTION ID> (UI Console -> Azure Active Directory -> Search for (top left): Subscriptions -> Click your subscription -> Overview) into the Azure credentials file (replace <SUBSCRIPTION ID>):

$ echo "subscription_id=<SUBSCRIPTION ID>" >> ~/.azure/credentials 

In the Subscription, add roles to container-admin:
Access control (IAM) -> Role assignments ->

Add (Add role assignment) -> Role: DNS Zone Contributor -> Assign access to: Azure AD user, group, or service principal -> Select: container-admin -> Save

Add (Add role assignment) -> Role: Network Contributor -> Assign access to: Azure AD user, group, or service principal -> Select: container-admin -> Save

Add (Add role assignment) -> Role: Virtual Machine Contributor -> Assign access to: Azure AD user, group, or service principal -> Select: container-admin -> Save

Create a work folder and change into it:

$ mkdir -p terraform/azure/myweb/scripts terraform/azure/myweb/rbac && cd terraform/azure/myweb

Create a custom Role-Based Access Control (RBAC) role definition for Resource Groups so container-admin can read, write and delete Resource Groups in the subscription (replace <SUBSCRIPTION ID>):

$ cat << 'EOF' > rbac/rg-custom.jsn
> {
>    "Name": "Resource Group Allowance",
>    "IsCustom": true,
>    "Description": "Can read, write and delete Resource Groups.",
>    "Actions": [
>       "Microsoft.Resources/subscriptions/resourceGroups/read",
>       "Microsoft.Resources/subscriptions/resourceGroups/write",
>       "Microsoft.Resources/subscriptions/resourceGroups/delete"
>    ],
>    "NotActions": [],
>    "AssignableScopes": [
>       "/subscriptions/<SUBSCRIPTION ID>"
>    ]
> }
> EOF

Authenticate to Azure using the CLI with the same Administrative credentials you use in the UI (a browser window will pop up requesting credentials):

$ az login

Create the Role Definition:

$ az role definition create --role-definition rbac/rg-custom.jsn

Add the role to container-admin:

$ az role assignment create --role "Resource Group Allowance" --assignee $(grep application_id ~/.azure/credentials | cut -f2 -d=) --subscription $(grep subscription_id ~/.azure/credentials | cut -f2 -d=)

Generate an SSH Key Pair (no password) and restrict permissions on it:

$ ssh-keygen -q -t rsa -b 2048 -N '' -f ~/.ssh/myweb && chmod 400 ~/.ssh/myweb

Ensure the terraform version is greater than or equal to 0.12:

$ cat << 'EOF' > versions.tf
> terraform {
>   required_version = ">= 0.12"
> }
> EOF

Set the version for the AzureRM Provider to greater than or equal to 1.44:

$ cat << 'EOF' > provider.tf
> provider "azurerm" {
>   version = ">= 1.44"
> }
> EOF

Set the default region and prefix variable:

$ cat << 'EOF' > vars.tf
> variable "region" {
>   default = "EastUS"
> }
>
> variable "prefix" {
>   default = "myweb"
> }
> EOF

The following is performed with this script/code:

  • create a resource group where all of our resources will be put in (within East US)
  • create an Azure DNS Zone of myweb.com (no A records will be created)
  • create a virtual network of 10.0.0.0/16
  • add a subnet of 10.0.1.0/24 within the VNET
  • allocate a static Public IP
  • create a Network Security Group and add a Security rule for allowing SSH (port 22) Inbound
  • create a Network Interface with a Dynamic Private IP
  • create a Basic_A1 instance based on Ubuntu 18.04 with a Standard SSD, password authentication turned off, our public key added as authorized, and reference an external template file for custom_data (an initialization script run on Virtual Machine boot)
  • tag all resources
$ cat << 'EOF' > vm.tf
> # Create a Resource Group
> resource "azurerm_resource_group" "myweb" {
>   name     = "${var.prefix}-rg"
>   location = var.region
>
>   tags = {
>         Site = "${var.prefix}.com"
>     }
> }
>
> # Create a DNS Zone
> resource "azurerm_dns_zone" "myweb" {
>   name                = "${var.prefix}.com"
>   resource_group_name = azurerm_resource_group.myweb.name
>
>   tags = {
>         Site = "${var.prefix}.com"
>     }
> }
>
> # Create a Virtual Network
> resource "azurerm_virtual_network" "myweb" {
>   name                = "${var.prefix}-net"
>   address_space       = ["10.0.0.0/16"]
>   location            = azurerm_resource_group.myweb.location
>   resource_group_name = azurerm_resource_group.myweb.name
>
>   tags = {
>         Site = "${var.prefix}.com"
>     }
> }
>
> # Add a Subnet
> resource "azurerm_subnet" "internal" {
>   name                 = "internal"
>   resource_group_name  = azurerm_resource_group.myweb.name
>   virtual_network_name = azurerm_virtual_network.myweb.name
>   address_prefix       = "10.0.1.0/24"
> }
>
> # Allocate a Static Public IP
> resource "azurerm_public_ip" "external" {
>   name                = "external"
>   location            = var.region
>   resource_group_name = azurerm_resource_group.myweb.name
>   allocation_method   = "Static"
>
>   tags = {
>         Site = "${var.prefix}.com"
>     }
> }
>
> # Create a Network Security Group and allow inbound port(s)
> resource "azurerm_network_security_group" "myweb" {
>   name                = "${var.prefix}-nsg"
>   location            = var.region
>   resource_group_name = azurerm_resource_group.myweb.name
>
>   security_rule {
>         name                       = "SSH"
>         priority                   = 1001
>         direction                  = "Inbound"
>         access                     = "Allow"
>         protocol                   = "Tcp"
>         source_port_range          = "*"
>         destination_port_range     = "22"
>         source_address_prefix      = "*"
>         destination_address_prefix = "*"
>     }
>
>   tags = {
>         Site = "${var.prefix}.com"
>     }
> }
>
> # Create a Network Interface with a Dynamic Private IP
> resource "azurerm_network_interface" "myweb" {
>   name                      = "${var.prefix}-nic"
>   location                  = azurerm_resource_group.myweb.location
>   resource_group_name       = azurerm_resource_group.myweb.name
>   network_security_group_id = azurerm_network_security_group.myweb.id
>
>   ip_configuration {
>        name                          = "${var.prefix}-nic_conf"
>        subnet_id                     = azurerm_subnet.internal.id
>        private_ip_address_allocation = "Dynamic"
>        public_ip_address_id          = azurerm_public_ip.external.id
>     }
>
>   tags = {
>         Site = "${var.prefix}.com"
>     }
> }
>
> # Create an Ubuntu Virtual Machine with key based access and run a script on boot; use a Standard SSD
> resource "azurerm_virtual_machine" "myweb" {
>   name                  = "${var.prefix}-vm"
>   location              = azurerm_resource_group.myweb.location
>   resource_group_name   = azurerm_resource_group.myweb.name
>   network_interface_ids = [azurerm_network_interface.myweb.id]
>   vm_size               = "Basic_A1"
>
>   storage_image_reference {
>       publisher = "Canonical"
>       offer     = "UbuntuServer"
>       sku       = "18.04-LTS"
>       version   = "latest"
>     }
>
>   storage_os_disk {
>       name              = "${var.prefix}-disk"
>       caching           = "ReadWrite"
>       create_option     = "FromImage"
>       managed_disk_type = "StandardSSD_LRS"
>     }
>
>   os_profile {
>       computer_name  = var.prefix
>       admin_username = "ubuntu"
>       custom_data    = data.template_file.init_script.rendered
>     }
>
>   os_profile_linux_config {
>         disable_password_authentication = true
>         ssh_keys {
>             path     = "/home/ubuntu/.ssh/authorized_keys"
>             key_data = file("~/.ssh/${var.prefix}.pub")
>           }
>     }
>
>   tags = {
>         Site = "${var.prefix}.com"
>     }
> }
> EOF

Output our allocated static Public IP after creation:

$ cat << 'EOF' > output.tf
> output "static_public_ip" {
>   value = azurerm_public_ip.external.ip_address
> }
> EOF

Create a template data source that references the boot initialization script:

$ cat << 'EOF' > install.tf
> data "template_file" "init_script" {
>   template = "${file("scripts/install.sh")}"
> }
> EOF

Create the shell script for custom_data:

$ cat << 'EOF' > scripts/install.sh
> #!/bin/bash
>
> MY_HOME="/home/ubuntu"
> export DEBIAN_FRONTEND=noninteractive
>
> # Install prereqs
> apt update
> apt install -y python3-pip apt-transport-https ca-certificates curl software-properties-common
> # Install docker
> curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
> add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
> apt update
> apt install -y docker-ce
> # Install docker-compose
> su ubuntu -c "mkdir -p $MY_HOME/.local/bin" 
> su ubuntu -c "pip3 install docker-compose --upgrade --user && chmod 754 $MY_HOME/.local/bin/docker-compose"
> usermod -aG docker ubuntu
> # Add PATH
> printf "\nexport PATH=\$PATH:$MY_HOME/.local/bin\n" >> $MY_HOME/.bashrc
>
> exit 0
> EOF

Initialize the directory:

$ terraform init

Run a dry-run to see what will occur:

$ terraform-az-sp plan

Provision:

$ terraform-az-sp apply -auto-approve

Log on to the instance after a short while:

$ ssh -i ~/.ssh/myweb ubuntu@<The value of static_public_ip that was reported.  One can also use 'terraform-az-sp output static_public_ip' to print it again.>

Type yes and hit enter to accept.

On the host (a short while is needed for the boot initialization script to complete):

$ docker --version
$ docker-compose --version
$ logout

Tear down what was created by first performing a dry-run to see what will occur:

$ terraform-az-sp plan -destroy 

Tear down the instance:

$ terraform-az-sp destroy -auto-approve

Destroy the Network Watcher Resource Group that was automatically created (if it did not exist prior), provided you do not have other virtual networks in the region using it:

$ az group delete -n NetworkWatcherRG --yes

Logout of the Azure CLI session:

$ az logout

<–

References:

Source:
terraform_azure_myweb

Firmware – CyanogenMod 12.1 (Lollipop) – Transformer Pad (TF701T – Macallan)

This is CyanogenMod 12.1 (Lollipop (5.1.1 – LMY49F)) for ASUS’s Transformer Pad (TF701T – Macallan).

Local: Use OpenJDK 1.7.0_111 64bit.

-sync latest changes from CyanogenMod OS repo (cm-12.1)

…..

Rom Base:
Sync’d as of 2/15.

Recovery:
CM Recovery
-Note:
You can also Use CWM 6.0.5.1 (select ‘No’ after flash to CWM’s offer to fix root).

Note:
-You must be unlocked (use ASUS’s unlock tool to perform this)
-You must be on at least 10.26.1.18 bootloader but it is recommended to update to 11.4.1.29 (updating to ASUS’s latest 4.4.2 release will install this). It is best to update to this build (11.4.1.29) via the MicroSD update method (as outlined on ASUS’s site). You can also get a CWM flash-able boot-loader package here (US only): http://droidbasement.com/asus/tf701t/stock/4.4.2-11.4.1.29/TF701T_K00C-11.4.1.29-US_BL.zip

…..

Known Issues:
-When phone audio is selected on a supported BT device, mediaserver will segfault and could cause system instability.
-Userdata encryption is currently not operational.

…..

Enjoy!

For:
TF701T

…..

Installation Instructions:
-boot into bootloader (power and vol +/-)
-fastboot boot recovery.img (you can fastboot flash recovery recovery.img for permanence); wait a few seconds for the recovery screen.
-adb shell mount /data
-adb push cm-12.1-YYYYMMDD-UNOFFICIAL-tf701t.zip /data/media/
-take a nandroid backup (CWM only)
-flash cm-12.1-YYYYMMDD-UNOFFICIAL-tf701t.zip
-transfer and flash gapps for Android 5.1: http://download.dirtyunicorns.com/files/gapps/banks_gapps/5.x.x/ (latest for 5.1.x is 10-20-15)
-it is best to wipe when coming from stock (Note: You can upgrade from CM12)
-reboot

…..

Download:
TF701T: http://droidbasement.com/asus/tf701t/cm/12.1
Recovery: http://droidbasement.com/asus/tf701t/recovery/cm/12.1

…..

Source: https://github.com/cyanogenmod , https://github.com/pershoot

Firmware – CyanogenMod 11 (KitKat) – Transformer Pad (TF701T – Macallan)

This is CyanogenMod 11 (KitKat (4.4.4 – KTU84Q)) for ASUS’s Transformer Pad (TF701T – Macallan).

Local: Use Open JDK 1.7.0_111 64bit.

-sync latest changes from CyanogenMod OS repo (cm-11.0)

…..

Rom Base:
Sync’d as of 2/15.

Recovery:
Built recovery is CWM 6.0.5.1 (cm-11.0).

Note:
-You must be unlocked (use ASUS’s unlock tool to perform this)
-You must be on at least 10.26.1.18 bootloader but it is recommended to update to 11.4.1.29 (updating to ASUS’s latest 4.4.2 release will install this). It is best to update to this build (11.4.1.29) via the MicroSD update method (as outlined on ASUS’s site). You can also get a CWM flash-able boot-loader package here (US only): http://droidbasement.com/asus/tf701t/stock/4.4.2-11.4.1.29/TF701T_K00C-11.4.1.29-US_BL.zip

…..

Known Issues:

…..

Enjoy!

For:
TF701T

…..

Installation Instructions:
-boot into bootloader (power and vol +/-)
-fastboot boot recovery.img (you can fastboot flash recovery recovery.img for permanence)
-adb shell mount /data
-adb push cm-11-YYYYMMDD-UNOFFICIAL-tf701t.zip /data/media/
-take a nandroid backup
-flash cm-11-YYYYMMDD-UNOFFICIAL-tf701t.zip
-transfer and flash gapps for CM11: http://wiki.cyanogenmod.org/w/Google_Apps
-it is best to wipe when coming from stock
-reboot

…..

Download:
TF701T: http://droidbasement.com/asus/tf701t/cm
Recovery: http://droidbasement.com/asus/tf701t/recovery/

…..

Source: https://github.com/cyanogenmod , https://github.com/pershoot

Firmware – Asuswrt-Merlin (NG) – 384.15 – RT-AC68

This is Merlin’s Asuswrt (NG) 384.15 for the ASUS RT-AC68U/R.

-sync latest changes from RMerlin (384.15).

—–

Download (ASUS RT-AC68U/R):
RT-AC68U_384.15_0.trx

—–

Source:
https://github.com/pershoot/asuswrt-merlin.ng
https://github.com/RMerl/asuswrt-merlin.ng

——–

Installation instructions:

-Flash the .trx through the UI
-After it is completed and you are returned back to the UI, wait a short while (~30 seconds) then power cycle the router (with the on/off button).

AWS/Terraform/Ansible/OpenShift – Provision a Lightsail instance and further configure it using Infrastructure as Code

In this article we will provision a Lightsail host with docker/docker-compose using Terraform and install/initialize OpenShift Origin on it using Ansible.

OpenShift is Red Hat’s containerization platform which utilizes Kubernetes. Origin (what we will be working with here) is the open-source implementation of it.

We will use ‘myweb’ as an example in this article, using the same base path of ‘dev’ that was previously created, the container-admin group and using ~/.local/bin for the binaries.

Please ensure you have gone through the previous Terraform, Ansible, related preceding articles and ‘Get started with Lightsail for free’.

–>
Edit the IAM Policy “AllowLightsail”:
AWS UI Console -> Services -> Security, Identity, & Compliance -> IAM -> Policies -> AllowLightsail -> Edit Policy -> JSON ->

Append lightsail:OpenInstancePublicPorts and lightsail:CloseInstancePublicPorts after lightsail:ReleaseStaticIp.

It will look like this:

                  "lightsail:ReleaseStaticIp",
                  "lightsail:OpenInstancePublicPorts",
                  "lightsail:CloseInstancePublicPorts"

Review Policy -> Save Changes

Go into the dev directory/link located within your home directory:

$ cd ~/dev

Upgrade the AWS CLI on your host:

$ pip3 install awscli --upgrade --user && chmod 754 ~/.local/bin/aws

Grab the latest version of Terraform:

$ wget https://releases.hashicorp.com/terraform/0.12.20/terraform_0.12.20_linux_amd64.zip

Unzip it to ~/.local/bin and set permissions accordingly on it:

$ unzip terraform_0.12.20_linux_amd64.zip -d ~/.local/bin && chmod 754 ~/.local/bin/terraform

Change to the myweb directory inside terraform:

$ cd terraform/myweb

Upgrade our configuration syntax for 0.12 (type yes and hit enter when prompted):

$ terraform 0.12upgrade

Change our instance from a micro to a medium so it will have sufficient resources to run OpenShift Origin and related components:

$ sed -i s:micro_2_0:medium_2_0: lightsail.tf
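
To double-check the bundle size change (attribute name as used in lightsail.tf from the earlier article; spacing may differ):

$ grep bundle_id lightsail.tf
  bundle_id         = "medium_2_0"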

Output the Public IP of the provisioned host (along with connection parameters and variables) into a file which we will feed into an Ansible playbook run:

$ cat << 'EOF' >> output.tf
>
> resource "local_file" "hosts" {
>   content              = "[vps]\n${aws_lightsail_static_ip.myweb.ip_address} ansible_connection=ssh ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/myweb instance=${aws_lightsail_instance.myweb.name}"
>   filename             = "${path.module}/../../ansible/hosts"
>   directory_permission = "0754"
>   file_permission      = "0664"
> }
> EOF 

Amend an item from the user_data script:

$ sed -i 's:sudo apt-key add -:apt-key add -:' scripts/install.sh

Initialize the directory/refresh module(s):

$ terraform init

Run a dry-run to see what will occur:

$ terraform plan

Provision:

$ terraform apply -auto-approve

Create a work folder for an Ansible playbook:

$ cd ../../ansible
$ mkdir -p openshift/scripts && cd openshift

Create an Ansible playbook which will install/initialize OpenShift Origin on our provisioned host:

$ cat << 'EOF' > openshift.yml 
> # Install, initialize OpenShift Origin and create a destroy routine for it
> ---
> - hosts: vps
>   connection: local
>
>   vars:
>     openshift_directory: /home/ubuntu/.local/etc/openshift
>
>   tasks:
>     - name: Discover Services
>       service_facts:
>
>     - name: Check if openshift directory exists
>       stat:
>         path: "{{ openshift_directory }}"
>       register: openshift_dir
>
>     - name: Open Firewall Ports
>       delegate_to: localhost
>       command: bash -c './scripts/firewall.sh open {{ hostvars[groups['vps'][0]].instance }}'
>       when:
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>  
>     - name: Copy and Run install
>       environment:
>         PATH: "{{ ansible_env.PATH}}:{{ openshift_directory }}/../../bin"
>       args:
>         executable: /bin/bash
>       script: "./scripts/install.sh {{ ansible_ssh_host }}"
>       when:
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>
>     - debug: msg="Please install docker to proceed."
>       when: "'docker' not in services"
>
>     - debug: msg="Install script has already been completed.  Run this playbook with the destroy tag, then run once again normally to re-initialize openshift."
>       when: openshift_dir.stat.exists == True
>
>     - name: Destroy
>       become: yes
>       environment:
>         PATH: "{{ ansible_env.PATH}}:{{ openshift_directory }}/../../bin"
>       args:
>         executable: /bin/bash
>       shell: "cd {{ openshift_directory }} && oc cluster down && cd ../ && rm -r {{ openshift_directory }}/../../../.kube {{ openshift_directory }}"
>       tags: [ 'never', 'destroy' ]
>
>     - name: Close Firewall Ports
>       delegate_to: localhost
>       command: bash -c './scripts/firewall.sh close {{ hostvars[groups['vps'][0]].instance }}'
>       tags: [ 'never', 'destroy' ]
> EOF    

Create a shell script which will pull the latest release of client tools from GitHub, place the needed binaries in ~/.local/bin, set insecure registry on Docker and initialize:

$ cat << 'EOF' > scripts/install.sh
> #!/bin/bash
> [[ -z $* ]] && { echo "Please specify a Public IP or Host/Domain name." && exit 1; }
> # Fetch and Install
> file_url="$(curl -sL https://github.com/openshift/origin/releases/latest | grep "download.*client.*linux-64" | cut -f2 -d\" | sed 's/^/https:\/\/github.com/')"
> [[ -z $file_url ]] && { echo "The URL could not be obtained.  Please try again shortly." && exit 1; }
> file_name="$(echo $file_url | cut -f9 -d/)"
> if [[ ! -f $file_name ]]; then
>         curl -sL $file_url --output $file_name
>         folder_name="$(tar ztf $file_name 2>/dev/null | head -1 | sed s:/.*::)"
>         [[ -z $folder_name ]] && { echo "The archive could not be read.  Please try again." && rm -f $file_name && exit 1; }
>         tar zxf $file_name
>         mv $folder_name/oc $folder_name/kubectl $HOME/.local/bin && rm -r $folder_name
>         chmod 754 $HOME/.local/bin/oc $HOME/.local/bin/kubectl
> fi
> # Docker insecure
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 2 ]] && redirect=">"
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 1 ]] && redirect=">>"
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 0 ]] || { sudo bash -c "cat << 'EOF' $redirect /etc/docker/daemon.json
> {
>         \"insecure-registries\" : [ \"172.30.0.0/16\" ]
> }
> EOF" && sudo systemctl restart docker; }
> # OpenShift Origin up
> [[ ! -d $HOME/.local/etc/openshift ]] && { mkdir -p $HOME/.local/etc/openshift && cd $HOME/.local/etc/openshift; } || { cd $HOME/.local/etc/openshift && oc cluster down; }
> oc cluster up --public-hostname=$1
>
> exit 0
> EOF 

The Lightsail firewall functionality is currently being implemented in Terraform and is not available in Ansible. In the interim, we will create a shell script to open and close ports needed by OpenShift Origin (using the AWS CLI). This script will be run locally via the Playbook during the create and destroy routines:

$ cat << 'EOF' > scripts/firewall.sh && chmod 754 scripts/firewall.sh
> #!/bin/bash
> #
> openshift_ports="53/UDP 443/TCP 1936/TCP 4001/TCP 7001/TCP 8053/UDP 8443/TCP 10250_10259/TCP"
> #
> [[ -z $* || $(echo $* | xargs -n1 | wc -l) -ne 2 || ! ($* =~ $(echo '\<open\>') || $* =~ $(echo '\<close\>')) ]] && { echo "Please pass in the desired action [ open, close ] and instance [ site_myweb ]." && exit 2; }
> #
> instance="$(echo $* | xargs -n1 | sed '/\<open\>/d; /\<close\>/d')"
> [[ -z $instance ]] && { echo "Please double-check the passed in instance." && exit 1; }
> action="$(echo $* | xargs -n1 | grep -v $instance)"
> #
> for port in $openshift_ports; do
>         aws lightsail $action-instance-public-ports --instance $instance --port-info fromPort=$(echo $port | cut -f1 -d_ | cut -f1  -d/),protocol=$(echo $port | cut -f2 -d/),toPort=$(echo $port | cut -f2 -d_ | cut -f1 -d/)
> done
> #
>
> exit 0
> EOF 
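
The script can also be run by hand if you ever need to toggle the ports outside of the playbook, for example (assuming the instance name site_myweb from the earlier Terraform article):

$ ./scripts/firewall.sh open site_myweb
$ ./scripts/firewall.sh close site_myweb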

Run the Ansible playbook after a few minutes (accept the host key by typing yes and hitting enter when prompted):

$ ansible-playbook -i ../hosts openshift.yml

After a short while, log on to the instance:

$ ssh -i ~/.ssh/myweb ubuntu@<The value of static_public_ip that was reported.  One can also use 'terraform output static_public_ip' to print it again.>

To get an overview of the current project with any identified issues:

$ oc status --suggest

Log on as Admin via CMD Line and switch to the default project:

$ oc login -u system:admin -n default

Logout of the session:

$ oc logout

Please see the Command-Line Walkthrough.

Logout from the host:

$ logout

Log on as Admin via Web Browser (replace <PUBLIC_IP>):

https://<PUBLIC_IP>:8443/console (You will get a Certificate/Site warning due to a mismatch).

Please see the Web Console Walkthrough.

To shut down the OpenShift Origin cluster, destroy the working folder and start anew (you can re-run the playbook normally to reinitialize):

$ ansible-playbook -i ../hosts openshift.yml --tags "destroy"

Tear down what was created by first performing a dry-run to see what will occur:

$ cd ../../terraform/myweb && terraform plan -destroy 

Tear down the instance:

$ terraform destroy -auto-approve

<–

References:
how-to-install-openshift-origin-on-ubuntu-18-04

Source:
ansible_openshift

Firmware – Asuswrt-Merlin (NG) – 384.15_beta1 – RT-AC68

This is Merlin’s Asuswrt (NG) 384.15_beta1 for the ASUS RT-AC68U/R.

-sync latest changes from RMerlin (384.15_beta1).

—–

Download (ASUS RT-AC68U/R):
RT-AC68U_384.15_beta1.trx

—–

Source:
https://github.com/pershoot/asuswrt-merlin.ng
https://github.com/RMerl/asuswrt-merlin.ng

——–

Installation instructions:

-Flash the .trx through the UI
-After it is completed and you are returned back to the UI, wait a short while (~30 seconds) then power cycle the router (with the on/off button).

Firmware – Asuswrt-Merlin (NG) – 384.14_2 – RT-AC68

This is Merlin’s Asuswrt (NG) 384.14_2 for the ASUS RT-AC68U/R.

-sync latest changes from RMerlin (384.14_2).

—–

Download (ASUS RT-AC68U/R):
RT-AC68U_384.14_2.trx

—–

Source:
https://github.com/pershoot/asuswrt-merlin.ng
https://github.com/RMerl/asuswrt-merlin.ng

——–

Installation instructions:

-Flash the .trx through the UI
-After it is completed and you are returned back to the UI, wait a short while (~30 seconds) then power cycle the router (with the on/off button).