
Azure/AWS – Federated login from AAD to the AWS Console

Federated identity allows us to access multiple systems with a single, trusted authentication token. It provides SSO (Single Sign-On), allowing us to enter systems without repeated password entry.

In this article we will set up federated access to the AWS Console from Microsoft Azure (via Azure Active Directory), giving us SSO capability.

We will use the SystemAdministrator AWS managed policy, and the user will be container_admin, to coincide with the rest of the articles in this section.

Login to the AWS Console as an Administrator with Full access and/or the root account.
Login to Microsoft Azure as a Global Administrator of the Tenant you want to SSO from.

Enable AWS Organizations (if you have already enabled organizations then Settings -> Enable all features):
AWS -> All Services -> AWS Organizations -> Create organization -> Enable all features

Check your email and click the link sent from AWS to finalize enabling all features.

Enable AWS SSO:
AWS -> All Services -> AWS Single Sign-On -> Enable AWS SSO

Create the Azure user you want to use for SSO:
Azure Active Directory -> Users | All users -> New user -> Create user -> User name: container_admin -> Name: container admin -> First name: container -> Last name: admin -> Password -> Auto-generate password
-> Tick Show Password (Take note of the password) -> Create
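
The same user can also be created from the command line; a minimal sketch with the Azure CLI (this assumes an authenticated `az login` session as a Global Administrator, and TENANT is a placeholder for your tenant name, not a value from this article):

```shell
# Placeholder tenant; substitute your own *.onmicrosoft.com tenant name.
tenant="TENANT"
upn="container_admin@${tenant}.onmicrosoft.com"
echo "User Principal Name: $upn"

# The actual creation call (requires an authenticated Azure CLI session):
# az ad user create --display-name "container admin" \
#   --user-principal-name "$upn" \
#   --password '<Auto-generated-password>'
```

Take note of the password you set here the same way you would note the auto-generated one in the portal.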

Add the AWS Gallery application in to Azure AD via Enterprise applications:
Azure -> Azure Services -> Azure Active Directory -> Enterprise applications (Manage) -> All applications (Manage) -> New application -> Cloud platforms: Amazon Web Services (AWS) (black bordered box) -> Create

Enterprise applications | All applications -> Amazon Web Services (AWS) -> Single sign-on (Manage) -> SAML -> Click Yes on Save single sign-on setting popup (Identifier (Entity ID) and Reply URL)

Create a SAML (Security Assertion Markup Language) certificate and activate it:
SAML Signing Certificate -> Edit -> Add a certificate -> New Certificate -> Save -> Select the toolbar on the certificate -> Make certificate active -> Yes -> Close pane

Dismiss the ‘Test single sign-on with Amazon Web Services (AWS)’ popup (i.e. No, I’ll test later).

Download the Federation Metadata for the Identity Provider that will be created later in AWS:
SAML Signing Certificate -> Download Federation Metadata XML

Increase the federated SSO session duration from the default 15 minutes (900 seconds) to 3 hours (10800 seconds):
User Attributes & Claims -> Edit -> Select the SessionDuration Claim name -> Source attribute -> “10800” -> Save

Create the Identity Provider:
AWS -> IAM -> Identity Providers -> Create Provider -> Provider Type -> SAML -> Provider Name: AzureAD -> Metadata Document -> Choose File (the downloaded Federation Metadata XML file) -> Next Step -> Create

Create the role and assign the SystemAdministrator AWS managed policy to it:
IAM -> Roles -> Create role -> SAML 2.0 federation -> SAML provider -> AzureAD -> Allow programmatic and AWS Management Console access -> Next: Permissions -> select SystemAdministrator -> Next: Tags -> Next: Review -> Role name: SystemAdministrator -> Create role
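
For reference, the provider and role steps map to the AWS CLI as follows; this is a sketch, assuming a placeholder account ID of 123456789012 and that the downloaded metadata was saved as FederationMetadata.xml (both names are assumptions, not values from this article):

```shell
# Placeholder account ID; substitute your own twelve-digit AWS account ID.
account_id="123456789012"
provider_arn="arn:aws:iam::${account_id}:saml-provider/AzureAD"

# Trust policy permitting federated sign-in via the AzureAD SAML provider.
cat > trust-policy.json << EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Federated": "${provider_arn}" },
      "Action": "sts:AssumeRoleWithSAML",
      "Condition": { "StringEquals": { "SAML:aud": "https://signin.aws.amazon.com/saml" } }
    }
  ]
}
EOF
echo "wrote trust policy for ${provider_arn}"

# With credentials configured, the console steps above become:
# aws iam create-saml-provider --name AzureAD \
#   --saml-metadata-document file://FederationMetadata.xml
# aws iam create-role --role-name SystemAdministrator \
#   --assume-role-policy-document file://trust-policy.json
# aws iam attach-role-policy --role-name SystemAdministrator \
#   --policy-arn arn:aws:iam::aws:policy/job-function/SystemAdministrator
```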

Create a policy which will allow a fetch of roles:
IAM -> Policies -> Create policy -> JSON ->

 {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
            "iam:ListRoles"
            ],
            "Resource": "*"
        }
    ]
}

-> Review policy -> Name: AzureAD_SSOUserRole_Policy -> Description: This policy allows fetching the roles from AWS accounts. -> Create policy

Create a user (to be used in Azure’s AWS Gallery application for Provisioning) and assign it the previously created policy:
IAM -> Users -> Add user -> User name: AzureADRoleManager -> select Programmatic access -> Next: Permissions -> Attach existing policies directly -> select AzureAD_SSOUserRole_Policy -> Next: Tags -> Next: Review -> Create user -> Take note of the Access ID and the Secret Access key
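
The policy and user steps can likewise be sketched with the AWS CLI (the account ID 123456789012 below is a placeholder):

```shell
# Write the iam:ListRoles policy document from the article to a file.
cat > policy.json << 'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [ "iam:ListRoles" ],
            "Resource": "*"
        }
    ]
}
EOF
# Sanity-check that the document is valid JSON.
python3 -m json.tool policy.json > /dev/null && echo "policy.json OK"

# With credentials configured:
# aws iam create-policy --policy-name AzureAD_SSOUserRole_Policy \
#   --policy-document file://policy.json
# aws iam create-user --user-name AzureADRoleManager
# aws iam attach-user-policy --user-name AzureADRoleManager \
#   --policy-arn arn:aws:iam::123456789012:policy/AzureAD_SSOUserRole_Policy
# aws iam create-access-key --user-name AzureADRoleManager
```

As in the console flow, the create-access-key output is where the Access ID and Secret Access Key come from.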

Configure Provisioning:
Azure -> Enterprise applications | All applications -> Amazon Web Services (AWS) -> Provisioning -> Get started -> Provisioning Mode: Automatic -> Admin Credentials: clientsecret (enter in the Access ID) -> Secret Token: (enter in the Secret Access Key) -> Test Connection -> Save -> Provisioning Status: On -> Save

Wait for the Incremental cycle to complete (it should show as finished shortly thereafter, but you should wait another 15 minutes or so).

Assign a user to the AWS gallery application:
Users and groups -> Add user -> Users and groups -> select container_admin -> select -> Select a role -> SystemAdministrator,AzureAD -> Select -> Assign

Note: If you see DefaultAccess and/or you are not able to select the role, assign the user, then go back in and assign the role (it must say SystemAdministrator,AzureAD).

In a browser:
https://myapplications.microsoft.com/ -> Login as container_admin@TENANT.onmicrosoft.com -> Change the password -> Click on the Amazon Web Services icon -> click on container_admin@TENANT.onmicrosoft.com in the popup list -> You will be taken to the AWS Console

Up top, select the user and in the dropdown you will see:
Federated Login: SystemAdministrator/container_admin@TENANT.onmicrosoft.com

Source:
amazon-web-service-tutorial (https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/amazon-web-service-tutorial)

Firmware – Asuswrt-Merlin (NG) – 384.19 – RT-AC68

This is Merlin’s Asuswrt (NG) 384.19 for the ASUS RT-AC68U/R.

-sync latest changes from RMerlin (384.19-mainline).

—–

Download (ASUS RT-AC68U/R):
RT-AC68U_384.19_0.trx
Download: RT-AC68U_384.19_0.trx (http://droidbasement.com/asus/rt-ac68/merlin/)

—–

Source:
https://github.com/pershoot/asuswrt-merlin.ng
https://github.com/RMerl/asuswrt-merlin.ng

——–

Installation instructions:

-Flash the .trx through the UI.
-After it completes and you are returned to the UI, wait a short while (~30 seconds), then power cycle the router (with the on/off button).

Firmware – Asuswrt-Merlin (NG) – 384.18 – RT-AC68

This is Merlin’s Asuswrt (NG) 384.18 for the ASUS RT-AC68U/R.

-sync latest changes from RMerlin (master).

—–

Download (ASUS RT-AC68U/R):
RT-AC68U_384.18_0.trx
Download: RT-AC68U_384.18_0.trx (http://droidbasement.com/asus/rt-ac68/merlin/)

—–

Source:
https://github.com/pershoot/asuswrt-merlin.ng
https://github.com/RMerl/asuswrt-merlin.ng

——–

Installation instructions:

-Flash the .trx through the UI.
-After it completes and you are returned to the UI, wait a short while (~30 seconds), then power cycle the router (with the on/off button).


Firmware – Asuswrt-Merlin (NG) – 384.18_beta1 – RT-AC68

This is Merlin’s Asuswrt (NG) 384.18_beta1 for the ASUS RT-AC68U/R.

-sync latest changes from RMerlin (master).

—–

Download (ASUS RT-AC68U/R):
RT-AC68U_384.18_beta1.trx
Download: RT-AC68U_384.18_beta1.trx (http://droidbasement.com/asus/rt-ac68/merlin/)

—–

Source:
https://github.com/pershoot/asuswrt-merlin.ng
https://github.com/RMerl/asuswrt-merlin.ng

——–

Installation instructions:

-Flash the .trx through the UI.
-After it completes and you are returned to the UI, wait a short while (~30 seconds), then power cycle the router (with the on/off button).


Firmware – Asuswrt-Merlin (NG) – 384.17 – RT-AC68

This is Merlin’s Asuswrt (NG) 384.17 for the ASUS RT-AC68U/R.

-sync latest changes from RMerlin (master).

—–

Download (ASUS RT-AC68U/R):
RT-AC68U_384.17_0.trx
Download: RT-AC68U_384.17_0.trx (http://droidbasement.com/asus/rt-ac68/merlin/)

—–

Source:
https://github.com/pershoot/asuswrt-merlin.ng
https://github.com/RMerl/asuswrt-merlin.ng

——–

Installation instructions:

-Flash the .trx through the UI.
-After it completes and you are returned to the UI, wait a short while (~30 seconds), then power cycle the router (with the on/off button).

Firmware – Asuswrt-Merlin (NG) – 384.16_1 – RT-AC68

This is Merlin’s Asuswrt (NG) 384.16_1 for the ASUS RT-AC68U/R.

-sync latest changes from RMerlin (master).

—–

Download (ASUS RT-AC68U/R):
RT-AC68U_384.16_1.trx
Download: RT-AC68U_384.16_1.trx (http://droidbasement.com/asus/rt-ac68/merlin/)

—–

Source:
https://github.com/pershoot/asuswrt-merlin.ng
https://github.com/RMerl/asuswrt-merlin.ng

——–

Installation instructions:

-Flash the .trx through the UI.
-After it completes and you are returned to the UI, wait a short while (~30 seconds), then power cycle the router (with the on/off button).

Firmware – Asuswrt-Merlin (NG) – 384.16_beta2 – RT-AC68

This is Merlin’s Asuswrt (NG) 384.16_beta2 for the ASUS RT-AC68U/R.

-sync latest changes from RMerlin (384.16-beta2).

—–

Download (ASUS RT-AC68U/R):
RT-AC68U_384.16_beta2.trx
Download: RT-AC68U_384.16_beta2.trx (http://droidbasement.com/asus/rt-ac68/merlin/)

—–

Source:
https://github.com/pershoot/asuswrt-merlin.ng
https://github.com/RMerl/asuswrt-merlin.ng

——–

Installation instructions:

-Flash the .trx through the UI.
-After it completes and you are returned to the UI, wait a short while (~30 seconds), then power cycle the router (with the on/off button).

Firmware – Asuswrt-Merlin (NG) – 384.16_beta1 – RT-AC68

This is Merlin’s Asuswrt (NG) 384.16_beta1 for the ASUS RT-AC68U/R.

-sync latest changes from RMerlin (384.16-beta1-mainline).

—–

Download (ASUS RT-AC68U/R):
RT-AC68U_384.16_beta1.trx
Download: RT-AC68U_384.16_beta1.trx (http://droidbasement.com/asus/rt-ac68/merlin/)

—–

Source:
https://github.com/pershoot/asuswrt-merlin.ng
https://github.com/RMerl/asuswrt-merlin.ng

——–

Installation instructions:

-Flash the .trx through the UI.
-After it completes and you are returned to the UI, wait a short while (~30 seconds), then power cycle the router (with the on/off button).

Firmware – Asuswrt-Merlin (NG) – 384.16_alpha2 – RT-AC68

This is Merlin’s Asuswrt (NG) 384.16_alpha2 for the ASUS RT-AC68U/R.

-sync latest changes from RMerlin (master).

—–

Download (ASUS RT-AC68U/R):
RT-AC68U_384.16_alpha2.trx
Download: RT-AC68U_384.16_alpha2.trx (http://droidbasement.com/asus/rt-ac68/merlin/)

—–

Source:
https://github.com/pershoot/asuswrt-merlin.ng
https://github.com/RMerl/asuswrt-merlin.ng

——–

Installation instructions:

-Flash the .trx through the UI.
-After it completes and you are returned to the UI, wait a short while (~30 seconds), then power cycle the router (with the on/off button).

AWS/Terraform/Ansible/OpenShift – Provision an EC2 instance and further configure it using Infrastructure as Code

Note: This is a duplicate of the AWS Lightsail article, modified for EC2 with some additional amendments.

In this article we will provision an EC2 host with docker/docker-compose on it using Terraform, and install/initialize OpenShift Origin on it using Ansible.

OpenShift (https://www.openshift.com/) is Red Hat’s containerization platform, which utilizes Kubernetes. Origin (https://www.okd.io/) (what we will be working with here) is its open-source implementation.

We will use ‘myweb’ as an example in this article, using the same base path of ‘dev’ that was previously created, the container-admin group and using ~/.local/bin for the binaries.

Please ensure you have gone through the previous Terraform, Ansible and related preceding articles.

Please sign up for the ‘AWS Free Tier’ (https://aws.amazon.com/free/) prior to commencing with this article.

–>
Go in to the dev directory/link located within your home directory:

$ cd ~/dev

Update PIP:

$ python3 -m pip install --upgrade --user pip

If there was an update, then forget remembered location references in the shell environment:

$ hash -r pip 

Upgrade the AWS CLI on your host:

$ pip3 install awscli --upgrade --user && chmod 754 ~/.local/bin/aws

Install/Upgrade Ansible:

$ pip3 install ansible --upgrade --user && chmod 754 ~/.local/bin/ansible ~/.local/bin/ansible-playbook

Install/Upgrade Boto3:

$ pip3 install boto3 --upgrade --user

Grab the latest version of Terraform:

$ wget https://releases.hashicorp.com/terraform/0.12.23/terraform_0.12.23_linux_amd64.zip

Unzip it to ~/.local/bin and set permissions accordingly (if upgrading, type y and hit enter at the prompt to replace):

$ unzip terraform_0.12.23_linux_amd64.zip -d ~/.local/bin && chmod 754 ~/.local/bin/terraform

Change to the myweb directory inside terraform/aws:

$ cd terraform/aws/myweb

Change our instance from a micro to a medium, so it will have sufficient resources to run OpenShift Origin and related components:

$ sed -i s:t3a.micro:t3a.medium: ec2.tf

Output the Public IP of the provisioned host (along with connection parameters and variables) into a file, which we will feed into an Ansible playbook run.

Note: Please re-create the file if you have gone through the previous Terraform articles:

$ cat << 'EOF' > output.tf
> output "static_public_ip" {
>   value = var.lightsail ? element(aws_lightsail_static_ip.myweb[*].ip_address, 0) : element(aws_eip.external[*].public_ip, 0)
> }
>
> resource "local_file" "hosts" {
>   content              = trimspace("[vps]\n${var.lightsail ? element(aws_lightsail_static_ip.myweb[*].ip_address, 0) : element(aws_eip.external[*].public_ip, 0)} ansible_connection=ssh ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/${var.prefix} instance=${var.lightsail ? element(aws_lightsail_instance.myweb[*].name, 0) : element(aws_instance.myweb[*].tags["Name"], 0)} ${var.lightsail ? "" : "instance_sg=${element(aws_security_group.myweb[*].name, 0)}"} ${var.lightsail ? "" : "instance_sg_id=${element(aws_security_group.myweb[*].id, 0)}"} ${var.lightsail ? "" : "instance_vpc_id=${element(aws_vpc.myweb[*].id, 0)}"}")
>   filename             = pathexpand("~/dev/ansible/hosts-aws")
>   directory_permission = "0754"
>   file_permission      = "0664"
> }
> EOF
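
The resulting hosts-aws file is a one-line Ansible INI inventory. A sketch of what it looks like for an EC2 run (the IP and IDs below are placeholders, not real values):

```shell
# A sample of the one-line inventory output.tf writes for an EC2 run
# (IP, security-group ID and VPC ID are placeholders).
cat > hosts-aws.sample << 'EOF'
[vps]
203.0.113.10 ansible_connection=ssh ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/myweb instance=myweb instance_sg=myweb-sg instance_sg_id=sg-0123456789abcdef0 instance_vpc_id=vpc-0123456789abcdef0
EOF

# The playbook reads these key=value pairs as host variables; for example,
# the security-group ID that the destroy routine needs:
grep -o 'instance_sg_id=[^ ]*' hosts-aws.sample | cut -d= -f2
```

The last command prints sg-0123456789abcdef0, the same value the playbook picks up as `instance_sg_id`.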

Amend an item in the user_data script (if you have gone through the AWS/Terraform/Ansible/OpenShift against Lightsail article then this can be disregarded):

$ sed -i 's:sudo apt-key add -:apt-key add -:' scripts/install.sh

Initialize the directory/refresh module(s):

$ terraform init

Run a dry-run to see what will occur:

$ terraform plan -var 'lightsail=false'

Provision:

$ terraform apply -var 'lightsail=false' -auto-approve

Create a work folder for an Ansible playbook:

$ cd ../../../ansible
$ mkdir -p openshift/scripts && cd openshift

Create an Ansible playbook which will install/initialize OpenShift Origin on our provisioned host.

Note: This accommodates our previous implementation against AWS Lightsail and Microsoft Azure VM:

$ cat << 'EOF' > openshift.yml 
> # Install, initialize OpenShift Origin and create a destroy routine for it
> # This is a unified setup against AWS Lightsail, Microsoft Azure VM and AWS EC2
> ---
> - hosts: vps
>   connection: local
>
>   vars:
>     network_security_group: "{{ hostvars[groups['vps'][0]].instance_nsg }}"
>     instance: "{{ hostvars[groups['vps'][0]].instance }}"
>     resource_group: "{{ hostvars[groups['vps'][0]].instance_rg }}"
>     security_group: "{{ hostvars[groups['vps'][0]].instance_sg }}"
>     security_group_id: "{{ hostvars[groups['vps'][0]].instance_sg_id }}"
>     virtual_private_cloud_id: "{{ hostvars[groups['vps'][0]].instance_vpc_id }}"
>     openshift_directory: /home/ubuntu/.local/etc/openshift
>     ansible_python_interpreter: /usr/bin/python3
>
>   tasks:
>     - name: Discover Services
>       service_facts:
>
>     - name: Check if openshift directory exists
>       stat:
>         path: "{{ openshift_directory }}"
>       register: openshift_dir
>       tags: [ 'destroy' ]
>
>     - name: Open Firewall Ports (AWS Lightsail)
>       delegate_to: localhost
>       args:
>         executable: /bin/bash
>       script: "./scripts/firewall.sh open {{ instance }}"
>       when:
>         - "'instance_nsg' not in hostvars[groups['vps'][0]]"
>         - "'instance_sg' not in hostvars[groups['vps'][0]]"
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>
>     - name: Add Network Security Group rules (Microsoft Azure VM)
>       delegate_to: localhost
>       azure_rm_securitygroup:
>         name: "{{ network_security_group }}"
>         resource_group: "{{ resource_group }}"
>         rules:
>          - name: OpenShift-Tcp
>            priority: 1002
>            direction: Inbound
>            access: Allow
>            protocol: Tcp
>            source_port_range: "*"
>            destination_port_range:
>              - 80
>              - 443
>              - 1936
>              - 4001
>              - 7001
>              - 8443
>              - 10250-10259
>            source_address_prefix: "*"
>            destination_address_prefix: "*"
>          - name: OpenShift-Udp
>            priority: 1003
>            direction: Inbound
>            access: Allow
>            protocol: Udp
>            source_port_range: "*"
>            destination_port_range:
>              - 53
>              - 8053
>            source_address_prefix: "*"
>            destination_address_prefix: "*"
>         state: present
>       when:
>         - "'instance_nsg' in hostvars[groups['vps'][0]]"
>         - "'instance_sg' not in hostvars[groups['vps'][0]]"
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>
>     - name: Add Security Group rules (AWS EC2)
>       delegate_to: localhost
>       ec2_group:
>         name: "{{ security_group }}"
>         description: OpenShift
>         vpc_id: "{{ virtual_private_cloud_id }}"
>         purge_rules: no
>         rules:
>           - proto: tcp
>             ports:
>               - 80
>               - 443
>               - 1936
>               - 4001
>               - 7001
>               - 8443
>               - 10250-10259
>             cidr_ip: 0.0.0.0/0
>             rule_desc: OpenShift-Tcp
>           - proto: udp
>             ports:
>               - 53
>               - 8053
>             cidr_ip: 0.0.0.0/0
>             rule_desc: OpenShift-Udp
>         state: present
>       when:
>         - "'instance_nsg' not in hostvars[groups['vps'][0]]"
>         - "'instance_sg' in hostvars[groups['vps'][0]]"
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>
>     - name: Copy and Run install
>       environment:
>         PATH: "{{ ansible_env.PATH }}:{{ openshift_directory }}/../../bin"
>       args:
>         executable: /bin/bash
>       script: "./scripts/install.sh {{ ansible_ssh_host }}"
>       when:
>         - "'docker' in services"
>         - openshift_dir.stat.exists == False
>
>     - debug: msg="Please install docker to proceed."
>       when: "'docker' not in services"
>
>     - debug: msg="Install script has already been completed. Run this playbook with the destroy tag, then run once again normally to re-initialize openshift."
>       when: openshift_dir.stat.exists == True
>
>     - name: Destroy
>       become: yes
>       environment:
>         PATH: "{{ ansible_env.PATH }}:{{ openshift_directory }}/../../bin"
>       args:
>         executable: /bin/bash
>       shell:
>         "cd {{ openshift_directory }} && oc cluster down && cd ../ && rm -rf {{ openshift_directory }}/../../../.kube {{ openshift_directory }}"
>       when: openshift_dir.stat.exists == True
>       tags: [ 'never', 'destroy' ]
>
>     - name: Close Firewall Ports (AWS Lightsail)
>       delegate_to: localhost
>       args:
>         executable: /bin/bash
>       script: "./scripts/firewall.sh close {{ instance }}"
>       when:
>         - "'instance_nsg' not in hostvars[groups['vps'][0]]"
>         - "'instance_sg' not in hostvars[groups['vps'][0]]"
>       tags: [ 'never', 'destroy' ]
>
>     - name: Delete Network Security Group rules (Microsoft Azure VM)
>       delegate_to: localhost
>       command:
>         bash -ic "az-login-sp && (az network nsg rule delete -g {{ resource_group }} --nsg-name {{ network_security_group }} -n {{ item }})"
>       with_items:
>         - OpenShift-Tcp
>         - OpenShift-Udp
>       when:
>         - "'instance_nsg' in hostvars[groups['vps'][0]]"
>         - "'instance_sg' not in hostvars[groups['vps'][0]]"
>       tags: [ 'never', 'destroy' ]
>
>     - name: Delete Security Group rules (AWS EC2)
>       delegate_to: localhost
>       command:
>         bash -c "[[ {{ item }} -eq 53 || {{ item }} -eq 8053 ]] && protocol=udp || protocol=tcp && aws ec2 revoke-security-group-ingress --group-id {{ security_group_id }} --port {{ item }} --protocol $protocol --cidr 0.0.0.0/0"
>       with_items:
>         - 80
>         - 443
>         - 1936
>         - 4001
>         - 7001
>         - 8443
>         - 10250-10259
>         - 53
>         - 8053
>       when:
>         - "'instance_nsg' not in hostvars[groups['vps'][0]]"
>         - "'instance_sg' in hostvars[groups['vps'][0]]"
>       tags: [ 'never', 'destroy' ]
> EOF
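
The protocol selection inside the “Delete Security Group rules (AWS EC2)” task is plain bash; isolated, with a few sample ports, it behaves like this:

```shell
# Mirrors the bash -c logic in the EC2 destroy task: the DNS-related ports
# (53, 8053) were opened as UDP; everything else as TCP.
for item in 53 443 8053 8443; do
  [[ $item -eq 53 || $item -eq 8053 ]] && protocol=udp || protocol=tcp
  echo "$item -> $protocol"
done
```

This prints udp for 53 and 8053 and tcp for the rest, so each revoke call matches the protocol the rule was created with.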

Create a shell script which will pull the latest release of client tools from GitHub, place the needed binaries in ~/.local/bin, set insecure registry on Docker, and initialize (if you have gone through the AWS/Terraform/Ansible/OpenShift against Lightsail article then this can be disregarded):

$ cat << 'EOF' > scripts/install.sh
> #!/bin/bash
> [[ -z $* ]] && { echo "Please specify a Public IP or Host/Domain name." && exit 1; }
> # Fetch and Install
> file_url="$(curl -sL https://github.com/openshift/origin/releases/latest | grep "download.*client.*linux-64" | cut -f2 -d\" | sed 's/^/https:\/\/github.com/')"
> [[ -z $file_url ]] && { echo "The URL could not be obtained.  Please try again shortly." && exit 1; }
> file_name="$(echo $file_url | cut -f9 -d/)"
> if [[ ! -f $file_name ]]; then
>         curl -sL $file_url --output $file_name
>         folder_name="$(tar ztf $file_name 2>/dev/null | head -1 | sed s:/.*::)"
>         [[ -z $folder_name ]] && { echo "The archive could not be read.  Please try again." && rm -f $file_name && exit 1; }
>         tar zxf $file_name
>         mv $folder_name/oc $folder_name/kubectl $HOME/.local/bin && rm -r $folder_name
>         chmod 754 $HOME/.local/bin/oc $HOME/.local/bin/kubectl
> fi
> # Docker insecure
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 2 ]] && redirect=">"
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 1 ]] && redirect=">>"
> [[ $(grep insecure /etc/docker/daemon.json &>/dev/null; echo $?) -eq 0 ]] || { sudo bash -c "cat << 'EOF' $redirect /etc/docker/daemon.json
> {
>         \"insecure-registries\" : [ \"172.30.0.0/16\" ]
> }
> EOF" && sudo systemctl restart docker; }
> # OpenShift Origin up
> [[ ! -d $HOME/.local/etc/openshift ]] && { mkdir -p $HOME/.local/etc/openshift && cd $HOME/.local/etc/openshift; } || { cd $HOME/.local/etc/openshift && oc cluster down; }
> oc cluster up --public-hostname=$1
>
> exit 0
> EOF 

Note: If you have already gone through the AWS/Terraform/Ansible/OpenShift for Lightsail article, or you don’t want to use Lightsail, then this can be disregarded.

The Lightsail firewall functionality is currently being implemented in Terraform and is not available in Ansible. In the interim, we will create a shell script to open and close ports needed by OpenShift Origin (using the AWS CLI). This script will be run locally via the Playbook during the create and destroy routines.

Note2: Port 80 is already open when the Lightsail host is provisioned:

$ cat << 'EOF' > scripts/firewall.sh && chmod 754 scripts/firewall.sh
> #!/bin/bash
> #
> openshift_ports="53/UDP 443/TCP 1936/TCP 4001/TCP 7001/TCP 8053/UDP 8443/TCP 10250_10259/TCP"  
> #
> [[ -z $* || $(echo $* | xargs -n1 | wc -l) -ne 2 || ! ($* =~ $(echo '\<open\>') || $* =~ $(echo '\<close\>')) ]] && { echo "Please pass in the desired action [ open, close ] and instance [ site_myweb ]." && exit 2; }
> #
> instance="$(echo $* | xargs -n1 | sed '/\<open\>/d; /\<close\>/d')"
> [[ -z $instance ]] && { echo "Please double-check the passed in instance." && exit 1; }
> action="$(echo $* | xargs -n1 | grep -v $instance)"
> #
> for port in $openshift_ports; do
>         aws lightsail $action-instance-public-ports --instance $instance --port-info fromPort=$(echo $port | cut -f1 -d_ | cut -f1  -d/),protocol=$(echo $port | cut -f2 -d/),toPort=$(echo $port | cut -f2 -d_ | cut -f1 -d/)
> done
> #
>
> exit 0
> EOF 
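
The fromPort/toPort/protocol extraction in the loop above splits each entry of openshift_ports with cut; for example, on the one ranged entry:

```shell
# A ranged entry uses "_" between the from/to ports and "/" before the protocol.
port="10250_10259/TCP"
from=$(echo $port | cut -f1 -d_ | cut -f1 -d/)
to=$(echo $port | cut -f2 -d_ | cut -f1 -d/)
proto=$(echo $port | cut -f2 -d/)
echo "fromPort=$from,protocol=$proto,toPort=$to"
# For a single-port entry like "443/TCP" there is no "_", so both cuts on "_"
# return the whole field and fromPort equals toPort.
```

This prints fromPort=10250,protocol=TCP,toPort=10259, matching the --port-info argument the script passes to the AWS CLI.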

Run the Ansible playbook after a few minutes (accept the host key by typing yes and hitting enter when prompted):

$ ansible-playbook -i ../hosts-aws openshift.yml

Note: Disregard the warning regarding a mismatched description on the Security Group. The group will not be modified, so the original description was not exported for use here.

Note2: If terraform apply is run again after the security group modification (the addition of rules for OpenShift), then those rules will be destroyed. In that case, please run a playbook destroy, then run the playbook again normally to reinitialize.

After a short while, log on to the instance:

$ ssh -i ~/.ssh/myweb ubuntu@<The value of static_public_ip that was reported.  One can also use 'terraform output static_public_ip' to print it again.>

To get an overview of the current project with any identified issues:

$ oc status --suggest

Log on as Admin via CMD Line and switch to the default project:

$ oc login -u system:admin -n default

Logout of the session:

$ oc logout

Please see the Command-Line Walkthrough (https://docs.openshift.com/enterprise/3.2/getting_started/developers_cli.html).

Logout from the host:

$ logout

Log on as Admin via Web Browser (replace <PUBLIC_IP>):

https://<PUBLIC_IP>:8443/console (You will get a Certificate/Site warning due to a mismatch).

Please see the Web Console Walkthrough (https://docs.openshift.com/enterprise/3.2/getting_started/developers_console.html).

To shut down the OpenShift Origin cluster, destroy the working folder and start anew (you can re-run the playbook normally to reinitialize):

$ ansible-playbook -i ../hosts-aws openshift.yml --tags "destroy"

Tear down what was created by first performing a dry-run to see what will occur:

$ cd ../../terraform/aws/myweb && terraform plan -var 'lightsail=false' -destroy 

Tear down the instance:

$ terraform destroy -var 'lightsail=false' -auto-approve

<–

References:
how-to-install-openshift-origin-on-ubuntu-18-04 (https://www.techrepublic.com/article/how-to-install-openshift-origin-on-ubuntu-18-04)

Source:
ansible_openshift (https://github.com/pershoot/ansible_openshift)