Friday, May 27, 2016

OpenShift Origin on Fedora 24 on AWS - Wow.

So, this all started because I was just doing a little Friday tinkering and wanted to see how easy it is to get OpenShift Origin installed on Fedora... on AWS.  Well, it turns out, it's really, really easy.  So easy, in fact, that I decided to write it down here and share it with you. This will be the first of a few blog posts about running OpenShift Origin on Fedora.  This post details how to get OpenShift Origin running on a single instance of Fedora 24. This is also a manual configuration.  In future blog posts, I'll talk about how to set up a highly available OpenShift Origin install on Fedora.  In addition, I'll talk about how to consume AWS resources like ELBs, IAM, S3, route53, ec2 instances, etc...  Just maybe, I'll go into how to automate the deployments with the AWS CLI.  Feel free to leave some comments on just how far you want to go here.  I promise, it will be fun.

I learned quite a bit during this process, namely:

  • You can easily find and use Fedora images in the AWS community AMIs.
  • OpenShift Origin has been packaged for Fedora 24 - who doesn't like new?
  • It's easy to install the OpenShift Origin PaaS and get started.
The goal was to get Origin running on AWS, launch an application, and hit that app from my browser.  There are no real prerequisites to get started here other than an AWS account with the proper permissions.  I do happen to have a DNS name managed by AWS route53, which helps a bit.  I also have some prior knowledge of how AWS works.

Let's chat a bit about what I'm using, what I had set up before this, and what I had to do to meet my goal.  I am using:

  • Fedora AMI with the ID of ami-0a09e667 (Fedora-Cloud-Base-24-20160512.n.0.x86_64-us-east-1-HVM-standard-0). 
  • For my testing, I'm using a m4.2xlarge instance of that AMI.
  • I had an existing VPC that I launched the Fedora 24 instance into.  The only thing to know about it is that I have DNS hostnames enabled on that VPC.
  • I have an existing subnet in that VPC that I launched this into.
  • I have an existing route table in that VPC with an internet gateway defined so my instance can get out.
  • I created a new security group on instance launch for testing this.

I do need to prep AWS a bit before moving on.  I'll use the AWS CLI to do this.  I do have an AWS CLI cheat sheet that may help if you have questions about querying resources, launching resources, describing, etc.  Have a look.  To move forward, I need to know what OpenShift Origin needs.  I found that the OpenShift Origin documentation is great; please have a look if you have any questions.  That's what I did: I went to the docs | installing | prerequisites and started there.  I'll just walk through the prerequisites here and share what I did.

For DNS, I met the requirements by creating a hosted zone in route53.  Then I created a record in route53 for my host which pointed to the public IP associated to it.  I also created a wildcard DNS record for the applications running on my host.  See below to create a zone, and the records that I need.
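To make that concrete, here's roughly what the route53 commands look like. The domain (example.com), record names, and IP address below are stand-ins; substitute your own domain and the public IP of your instance.

```shell
# Create a hosted zone for the domain (example.com is a stand-in).
aws route53 create-hosted-zone --name example.com. \
    --caller-reference origin-$(date +%s)

# Build a change batch with the host record and the wildcard app record.
# 54.0.0.1 stands in for the instance's public IP.
cat > change-batch.json <<'EOF'
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "origin.example.com.",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "54.0.0.1"}]
      }
    },
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "*.apps.example.com.",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "54.0.0.1"}]
      }
    }
  ]
}
EOF

# Apply the change batch to the hosted zone (zone ID is a stand-in).
aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE \
    --change-batch file://change-batch.json
```

The wildcard record is what lets any application route (like the Jenkins app later on) resolve to the single node without creating a record per app.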

For the ports, since this is a single node install, I just created a security group and opened up inbound 443, 22 and 80.
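Creating that security group from the CLI looks something like this (the VPC and group IDs are stand-ins; opening these ports to 0.0.0.0/0 is only reasonable for a short-lived test box like this one):

```shell
# Create a security group in the existing VPC (IDs are stand-ins).
aws ec2 create-security-group --group-name origin-sg \
    --description "OpenShift Origin single node test" --vpc-id vpc-xxxxxxxx

# Open SSH, HTTP, and HTTPS inbound from anywhere for this test.
for port in 22 80 443; do
  aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
      --protocol tcp --port $port --cidr 0.0.0.0/0
done
```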

For persistent storage, I just launched an instance with a large root device and I added another device of 50G for Docker storage.  In fact, let's get to that part, so we can continue with the install.

All we have to do now is launch a Fedora 24 instance.  Here's my command to launch the instance:

aws ec2 run-instances --image-id ami-0a09e667 --instance-type m4.2xlarge --subnet-id subnet-de0axxx --security-group-ids sg-9b7xxxx --block-device-mappings file://master/fedora-ebs-config.json --key-name scollier-test --iam-instance-profile Name=scollier-ebs-profile

Where the contents of the block device mappings file are:

    "DeviceName": "/dev/xvdb",
    "Ebs": {
    "DeleteOnTermination": true,
    "VolumeType": "gp2",
    "VolumeSize": 50

So, then the instance is launched.  The next thing I want to do is log into that instance and start configuring.  That's where the fun part starts.
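To find the public DNS name or IP to connect to, a describe-instances query along these lines works (the instance ID is a stand-in for the one returned by run-instances):

```shell
# Look up the public DNS name, public IP, and state of the new instance.
aws ec2 describe-instances --instance-ids i-xxxxxxxx \
    --query 'Reservations[].Instances[].[PublicDnsName,PublicIpAddress,State.Name]' \
    --output text
```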

After the instance initializes, you can connect to it.  In my case, I use:

ssh -i scollier-test.pem fedora@<instance-public-dns>

Now I'm in!  I can start my work.  I want to go ahead and update the instance, reboot, and install a few packages per the prereqs (and a few more for Fedora 24).

dnf -y update && dnf -y install wget git net-tools bind-utils iptables-services bridge-utils bash-completion ansible python-dnf dbus-python python3-dbus libsemanage-python3 libsemanage-python

I continued with the prerequisites and installed Docker.  I did skip setting up proper storage for Docker; I'm just poking around for now.  Once I finished with the prereqs, I had to pick my install option.  I chose advanced so I could see exactly what was going on under the hood.
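For reference, the Docker piece is just the stock Fedora package, along the lines of:

```shell
# Install Docker from the Fedora repos and start it at boot.
# (Proper Docker storage on the 50G /dev/xvdb volume is skipped here.)
dnf -y install docker
systemctl enable docker
systemctl start docker
```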

At this point, I'll hop off the AWS instance and go back to my local Fedora desktop to install. I cloned the openshift-ansible git repo and created the following ansible hosts file:

git clone https://github.com/openshift/openshift-ansible.git
cat /etc/ansible/hosts

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=fedora

# If ansible_ssh_user is not root, ansible_sudo must be set to true
ansible_sudo=true

deployment_type=origin

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
ip-10-30-1-231.ec2.internal

# host group for nodes, includes region info
[nodes]
ip-10-30-1-231.ec2.internal openshift_node_labels="{'region': 'infra', 'zone': 'default'}"

The important changes to note in the Ansible inventory file are that I set the SSH user (ansible_ssh_user=fedora) and enabled ansible_sudo=true.  This allows Ansible to connect to the instance as the default fedora user and complete the install. I also have the following in my ~/.ssh/config file:

     StrictHostKeyChecking no
     ProxyCommand               none
     CheckHostIP                no
     ForwardAgent               yes
     IdentityFile               /home/scollier/x/x/x/scollier-test.pem

After all that is set up, I can run the playbook:

ansible-playbook /home/x/x/openshift-ansible/playbooks/byo/config.yml

Now, we sit back and wait while the install completes.  After the install is complete, I need to deploy a router.

CA=/etc/origin/master
oadm ca create-server-cert --signer-cert=$CA/ca.crt --signer-key=$CA/ca.key --signer-serial=$CA/ca.serial.txt --hostnames='*' --cert=cloudapps.crt --key=cloudapps.key
cat cloudapps.crt cloudapps.key $CA/ca.crt > cloudapps.router.pem
oadm router router --replicas=1 --default-cert=cloudapps.router.pem --credentials='/etc/origin/master/openshift-router.kubeconfig' --service-account=router
oc get pods

I also need to mark the node schedulable.

oadm manage-node ip-10-30-1-231.ec2.internal --schedulable=true

Now I can create an application.  So I access the OpenShift management console.

By default, any username and password combination is accepted, so I log in and create a new project.  I decide to just launch an ephemeral Jenkins app.
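For reference, the same flow from the CLI might look something like this (the master URL, credentials, and project name are stand-ins):

```shell
# Log in to the master API (any user/password is accepted by default).
oc login https://origin.example.com:8443 -u demo -p demo

# Create a project and instantiate the ephemeral Jenkins template.
oc new-project jenkins-test
oc new-app jenkins-ephemeral

# Watch the deployment come up.
oc status
```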

After launching the app, I expose the service via a route.

oc expose svc/jenkins

I take the defaults and let it build.  Then I test hitting the interface.
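To sanity-check from the command line, I can look at the route that oc expose created and hit it with curl (the route hostname below is a stand-in; yours comes from the oc get route output):

```shell
# Show the route created for the Jenkins service, then hit it.
oc get route jenkins
curl -I http://jenkins-jenkins-test.apps.example.com
```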

Now, I click on the URL provided and log into my new Jenkins app.

That was easy.  Now I can continue evaluating OpenShift Origin.  Fun eh?  As mentioned before, I'll be diving a bit deeper in follow up posts.  Stay tuned.
