Wednesday, August 31, 2016

OpenShift Cluster Up on Fedora

Looking for a quick way to get an OpenShift Origin instance up and running on your local laptop?  Look no further: 'oc cluster up' is here.  Check out the documentation here, which points you here for the actual client bits.  Let's get started.

A quick scan of the environment before running 'oc cluster up' so I know what I'm getting.

$ cat /etc/fedora-release
Fedora release 24 (Twenty Four)

$ docker --version
Docker version 1.10.3, build 1ecb834/1.10.3

$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

$ docker images

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

Grab the latest client, untar it, change into the proper directory and get the version.

$ wget

$ tar xzvf openshift-origin-client-tools-v1.3.0-alpha.3-7998ae4-linux-64bit.tar.gz

$ cd openshift-origin-client-tools-v1.3.0-alpha.3-7998ae4-linux-64bit/

$ ./oc version
oc v1.3.0-alpha.3
kubernetes v1.3.0+507d3a7
features: Basic-Auth GSSAPI Kerberos SPNEGO

Start the cluster.
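Starting the local cluster is a single command. A minimal sketch of the session (the developer login is the default that 'oc cluster up' prints, but treat the exact flow as an assumption for this client version):

```shell
# Bring up an all-in-one OpenShift Origin cluster in Docker containers.
./oc cluster up

# Log in as the default developer user and see what's running.
./oc login -u developer -p developer
./oc status

# Tear everything down when finished.
./oc cluster down
```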

Friday, August 5, 2016

Fedora Flock - 2016 - Day 4 - Last Day

Day 4, the last day, only has two sessions that I was planning to attend.  I started out by going to an "Ansible best practices Working Session" by Michael Scherer.  The goal was to cover Ansible basics, best practices, and how they apply those to the Fedora Infrastructure.  One example he used was checking the checksums on files before you replace them and restart services with Ansible - in particular, ssh config files.  You can imagine if you restart ssh on your clusters across datacenters and you break ssh... no more Ansible.  Another best practice is to leverage the pkg module, which can determine which package manager is being used by the host and adjust accordingly.  The third best practice was to be careful about how you assign variables; try to use local scope when possible.  The Fedora Infrastructure team keeps their Ansible code here.  Michael spent quite a bit of time covering the organization of their playbooks, roles, groups, tasks, and handlers, and mentioned a couple of tools that might be helpful.
Michael also walked through a playbook in detail with the upstream maintainer of the mailman package.  There were quite a few best practices thrown around for that one.
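The ssh-config example above boils down to "validate before you replace and restart". A minimal shell sketch of that idea (paths are illustrative; Ansible's copy and template modules express the same thing via their validate parameter):

```shell
# Candidate config staged somewhere safe first; path is illustrative.
NEW_CONF=/tmp/sshd_config.new

# Let sshd itself validate the candidate file before it goes live.
if sshd -t -f "$NEW_CONF"; then
    install -m 600 "$NEW_CONF" /etc/ssh/sshd_config
    systemctl restart sshd
else
    echo "validation failed; keeping the current sshd_config" >&2
fi
```

If validation fails, the running config is never touched, so ssh (and therefore Ansible) keeps working.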

After the session Michael and I had a chance to sit down and talk about the OpenShift Origin deployment for the cloud working group.   Some decisions need to be made:
  • Deploy Origin containers on Fedora Atomic or Fedora 24
  • Set expectations that this may be redeployed while we are learning
  • Bare metal or OpenStack?
  • Storage for the registry?  Swift (not in Fedora?) or NFS (hope not)
  • Architecture: What do we want to deploy?  Doesn't have to be production quality.
So that's the great thing about coming to a conference like this: you get a chance to put some faces to names and talk about fun, important projects.  Flock is now over, and my overall impression is that this conference was run very well.  Lots of activities, food was great, people were great, sessions were great.  I'm looking forward to my next Flock.

Thursday, August 4, 2016

Fedora Flock - 2016 - Day 3

Third day!  Before I get started on my session logging today, check out the picture of all the attendees at flock that we took last night before the cruise of Krakow.

Today we started with lightning talks for an hour.  I was second up and presented +OpenShift on +Fedora Project.  That was my first time presenting a lightning talk and my first time attending other lightning talks.  I really like the format for both.  You'd be surprised at how much material you can cover in 5 minutes.

Today is also hack session day.  The sessions I am attending are "Building a Fedora Containers Library", "OpenShift on Fedora", and "Fedora PRD Workshop".  The sessions are two hours each.

+Josh Berkus kicked off "Building a Fedora Containers Library" with a slide that had instructions to git clone the lab material.  That's the proper way to start a workshop :).  Josh walked us through building a +PostgreSQL image step by step with lots of best practices discussed along the way.  This session was particularly insightful because Josh is, well, extremely knowledgeable on PostgreSQL.  That knowledge coupled with his Docker chops translated into an outstanding session.  Great hack session.

Next was "OpenShift on Fedora", a hack session led by Maciej Szulik.  The material for the lab is located here.  We started out by leveraging Vagrant to spin up an environment that we could issue an "oc cluster up" in, which spins up everything you need to get started.  The lab consisted of deploying pods and exploring pods, services, replication controllers, etc.  Maciej did a great job explaining some concepts in OpenShift that I wasn't really getting, such as deployment configs, image streams, and horizontal scaling.  I didn't quite finish the lab, but the good thing is you can take it with you in the Vagrant box and the material on GitHub.  Great hack session.
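A taste of the lab flow described above, assuming an environment where 'oc cluster up' has already succeeded (the application name and image are illustrative, not the lab's exact material):

```shell
# Deploy a sample application; image and resulting names are illustrative.
oc new-app openshift/hello-openshift

# Explore the objects the lab walks through.
oc get pods
oc get services
oc get rc            # replication controllers
oc get dc            # deployment configs
oc get imagestreams

# Scale the deployment horizontally.
oc scale dc/hello-openshift --replicas=3
```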

Wednesday, August 3, 2016

Fedora Flock - 2016 - Day 2

Day 2 starts soon.  Again, this will be high level notes from each session that I attend.  I'm quite sure that I won't capture everything.  Head here for my Day 1 notes.

Today started out with "Continuous Integration and the Glorious Future".  Tim kicked it off with some CI history - dev, dev, dev, then integration.  That didn't work too well.  He provided some nice perspective that I hadn't had before.  Tim also provided a current state of the union on Fedora automation and items that are in progress, including build automations, build self-tests, and automated deployments.  Some of the items that need work are presentation of data and results, and keeping the builds fast.  More great perspective on the feedback loop and what he wants out of it: how long after a package is updated can a new compose be generated, how long after a compose is built until the tests are run, and how long after the tests are run until the developer is notified of success or failure.  The QA team is also evaluating how to enable contributors to write their own automated tests.  Nonstop Fedora.  Tim covered quite a bit more on the Why and How during his presentation.  Great presentation.

Next up was "Modularity: Why, where we are, and how to get involved" by +Langdon White.  Langdon kicked off by covering some history which dated back to the "Rings Proposal", starting from "JeOS", which would be highly curated, out to the outer rings, which are not so curated.  He provided some great analogies about how one size doesn't fit all - comparing the lifecycles of packages and how they don't align with other packages.  Then he moved into modules:

  • A module is a thing that's managed as a logical unit.
  • A module is a thing that promises an external, unchanging API
  • A module is a thing that may have many, unexposed binary artifacts to support the external API
  • A module may "contain" other modules, in which case it is referred to as a "module stack"

The process: inputs -> activities -> outputs -> outcomes -> impact.

We saw an example of a module input file which explained references, profiles, components, and filters.

Progress thus far: the Modularity WG has been established, a dnf plugin has been implemented, an alpha version of the module build pipeline is in place, modules can be coalesced for testing, and work on a base-runtime has been kicked off.

Tuesday, August 2, 2016

Fedora Flock - 2016 - Day 1

So this is my first +Fedora Project Flock conference.  I arrived in Krakow yesterday from Austin, Texas.  The folks who put Flock together did a great job with this event.  I have never been to Krakow before, and they clearly communicated how you get around, which buses / trains to take, how to buy tickets, everything.  Kudos to that team.  I had a few reasons to come to Flock: I wanted to put some faces to names that I have been working with over the years, I wanted to meet the members of the Fedora Cloud group that I have been participating in, and I wanted to attend technical sessions and see what's coming up in the distro.

My schedule is listed here.  I'll blog each day that I'm here to share the experience.  Hopefully you will find it interesting enough to attend the next one if you didn't get a chance to come to this Flock event.  I'll give an overview of each session that I attend.  I know I won't capture all the details from each session, but it's a taste.  The sessions are recorded and will be posted to the Fedora YouTube channel.

Day 1. 

Introduction from Joe B. to thank sponsors: Red Hat, Unix Stickers, SuSE, The Linux Foundation, and stickermule.  Thanks sponsors!  Keep in mind, though, Flock is a conference that is run and led by contributors - for contributors.  I can tell there was a ton of work done behind the scenes to make this event happen.

Then the keynote by +Matthew Miller.  Matt covered some of the numbers that show Fedora is gaining steam in the cloud and developer space, among many others.  He also talked about a few of the major goals for 2016.  It's cool to see that the +Fedora Project has some big plans to continue moving forward in the cloud space.  Think items like Fedora Atomic, OpenShift and Flatpak.

Friday, May 27, 2016

OpenShift Origin on Fedora 24 on AWS - Wow.

So, this all started because I was just doing a little Friday tinkering and wanted to see how easy it is to get OpenShift Origin installed on Fedora... on AWS.  Well, it turns out, it's really, really easy.  So easy, in fact, that I decided to write it down here and share it with you. This will be the first of a few blog posts about running OpenShift Origin on Fedora.  This post details how to get OpenShift Origin running on a single instance of Fedora 24. This is also a manual configuration.  In future blog posts, I'll talk about how to set up a highly available OpenShift Origin install on Fedora.  In addition, I'll talk about how to consume AWS resources like ELBs, IAM, S3, route53, ec2 instances, etc...  Just maybe, I'll go into how to automate the deployments with the AWS CLI.  Feel free to leave some comments on just how far you want to go here.  I promise, it will be fun.

 I learned quite a bit during this process, namely:

  • You can easily find and use Fedora images in the AWS community AMIs.
  • OpenShift Origin has been packaged for Fedora 24 - who doesn't like new?
  • It's easy to install the OpenShift Origin PaaS and get started.
The goal was to get Origin running on AWS, launch an application, and hit that app from my browser.  There are no real prerequisites to get started here other than an AWS account with the proper permissions.  I do happen to have a DNS name managed by AWS Route53, which helps a bit.  I also have some prior knowledge of how AWS works.

Let's chat a bit about what I'm using, what I had set up before this, and what I had to do to meet my goal.  I am using:

  • Fedora AMI with the ID of ami-0a09e667 (Fedora-Cloud-Base-24-20160512.n.0.x86_64-us-east-1-HVM-standard-0). 
  • For my testing, I'm using a m4.2xlarge instance of that AMI.
  • I had an existing VPC that I launched the Fedora 24 instance into.  The only thing to note is that I have DNS hostnames enabled on that VPC.
  • I have an existing subnet in that VPC that I launched this into.
  • I have an existing route table in that VPC with an internet gateway defined so my instance can get out.
  • I created a new security group on instance launch for testing this.

I do need to prep AWS a bit before moving on.  I'll use the AWS CLI to do this.  I do have an AWS CLI cheat sheet that may help if you have questions about querying resources, launching resources, describing, etc.  Have a look.  To move forward, I need to know what OpenShift Origin needs.  I found that the OpenShift Origin documentation is great; please have a look if you have any questions.  That's what I did: I went to the docs | installing | prerequisites and started there.  I'll just walk through the prerequisites here and share what I did.
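As a sketch of the kind of CLI prep involved (the resource IDs, key name, and security-group rules below are illustrative; Origin's prerequisites docs list the exact ports you need open):

```shell
# Confirm the VPC has DNS hostnames enabled (VPC ID is illustrative).
aws ec2 describe-vpc-attribute --vpc-id vpc-0123abcd --attribute enableDnsHostnames

# Open SSH plus the OpenShift web console / API port in a test security group.
aws ec2 authorize-security-group-ingress --group-id sg-0123abcd \
    --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123abcd \
    --protocol tcp --port 8443 --cidr 0.0.0.0/0

# Launch the Fedora 24 AMI into the existing subnet.
aws ec2 run-instances --image-id ami-0a09e667 --instance-type m4.2xlarge \
    --key-name mykey --security-group-ids sg-0123abcd --subnet-id subnet-0123abcd
```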

Thursday, May 26, 2016

Testing out AWS ssm

I was poking around the AWS CLI and testing out different features and functionality.  Amazon's SSM caught my eye.  I decided to have a look at the remote functionality offered by this tool.  I'm consolidating all the notes I found in different resources here to do a simple test.  Here's a high-level overview of what it took me to get this configured and working properly:

1. Create a role and policy and assign that to an EC2 instance at launch time. You can't assign it to a running instance. The policy I assigned to the role that I attached to the instance is called: AmazonEC2RoleforSSM

2. Assign permissions to the user that will be executing the commands. The name of the policy is: AmazonSSMFullAccess

Of course, for your environment, make sure you adhere to your security requirements.  There are better ways to restrict this.
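Steps 1 and 2 look roughly like this with the CLI (the role, profile, and user names are illustrative; the policy ARNs are the standard AWS-managed ones, but double-check them in your account):

```shell
# Step 1: create a role EC2 instances can assume, and attach the SSM policy.
aws iam create-role --role-name ssm-instance-role \
    --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
aws iam attach-role-policy --role-name ssm-instance-role \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM

# The role is attached to instances via an instance profile.
aws iam create-instance-profile --instance-profile-name ssm-instance-profile
aws iam add-role-to-instance-profile --instance-profile-name ssm-instance-profile \
    --role-name ssm-instance-role

# Step 2: let the operating user run SSM commands.
aws iam attach-user-policy --user-name ops-user \
    --policy-arn arn:aws:iam::aws:policy/AmazonSSMFullAccess
```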

3. Deploy the instance and install the SSM agent.  You can either install the agent by passing user-data or manually afterwards.  It's a simple rpm package.

4. Create a policy document, mine was:

     {
       "schemaVersion": "1.2",
       "description": "Check ip configuration of a Linux instance.",
       "parameters": {},
       "runtimeConfig": {
         "aws:runShellScript": {
           "properties": [
             {
               "id": "0.aws:runShellScript",
               "runCommand": ["ip addr show"]
             }
           ]
         }
       }
     }
From the examples here:
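Once the policy document exists, running it looks something like this (the document name, file name, and instance ID are illustrative):

```shell
# Register the policy document under a name of your choosing.
aws ssm create-document --name "check-ip-linux" --content file://check-ip.json

# Run it against the instance that has the agent and role attached.
aws ssm send-command --document-name "check-ip-linux" \
    --instance-ids i-0123456789abcdef0

# Fetch the results once the invocation completes.
aws ssm list-command-invocations --details
```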

Sunday, May 22, 2016

Amazon Web Services Command Line Interface (AWS CLI) - Cheat Sheet

I have been standing up quite a bit of infrastructure in AWS lately using the AWS CLI.  Here are some commands that I found helpful in a cheat sheet format. I'll show you how to create resources, query resources for information and how to update resources. Hopefully this will get you started quickly. The cheat sheet covers the following topics:

  • Setting up your environment.
  • Working with Virtual Private Clouds (VPC).
  • Working with Identity and Access Management (IAM).
  • Working with Route53.
  • Working with Elastic Load Balancers (ELB).
  • Working with SSH.
  • Working with DHCP.
  • Working with Elastic Compute Cloud (EC2).
  • Utilizing queries to gather information.
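To give a flavor of the last bullet, a couple of query-style commands (filter and tag names are illustrative; --query takes a JMESPath expression):

```shell
# List VPC IDs and their CIDR blocks in two text columns.
aws ec2 describe-vpcs --query 'Vpcs[].[VpcId,CidrBlock]' --output text

# List running instances by ID and Name tag.
aws ec2 describe-instances \
    --filters "Name=instance-state-name,Values=running" \
    --query 'Reservations[].Instances[].[InstanceId,Tags[?Key==`Name`].Value|[0]]' \
    --output text
```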

You can preview the AWS CLI cheat sheet by clicking below:

You can test all these commands with Fedora images which can be launched here:

If you have any questions about any of the commands in particular, please drop a comment below and I'll try to help.  Much credit goes to Ryan Cook for frontloading a lot of this.

Wednesday, April 6, 2016

Grabbing a list of VMs from RHEV and Sorting

Simple post, but I thought it'd be worth sharing since I burned a day on it.  The goal was to find out which VMs on our RHEV environment were old and unused.  So I decided to use the RHEV-M API to grab the list, and sort it.  The only thing you need is the CA Cert for your RHEV-M environment.

Script here:


# Set the variables for date and argument
DATE=$(date +"%m_%d_%Y-%M")
INPUT=$1

# Grab the password for RHEV-M, don't report it to std out.
echo "Please provide the RHEVM password, password is not echoed out to stdout, enter password and press Enter."
read -p "Enter Password:" -s RHEVM_PASSWORD

# Grab the xml report of all the VMs
curl -s -X GET -H "Accept: application/xml" -u "admin@internal:$RHEVM_PASSWORD" --cacert rhevm.cer > vm-output-$DATE.xml

# Parse the xml output and look for the name of the VM, and the stop time of the VM, put it in a separate file.
xpath vm-output-$DATE.xml '/vms/vm/name | /vms/vm/stop_time' > vm-output-$DATE-formatted.xml 2> /dev/null

# Clean up the file here.  joherr helped out with this.  Place line breaks after each </stop_time> xml tag, and format it so it's readable in two columns.
sed -e 's/<\/name><stop_time>/ /g' \
-e 's/<\/stop_time><name>/\n/g' \
-e 's/<name>//g' \
-e 's/<\/stop_time>//g' vm-output-$DATE-formatted.xml | \
    sort -k 2 | \
    awk 'BEGIN { format = "%-60s %s\n"
            printf format, "VMs", "Date Stopped"
            printf format, "----------", "----------" }
        { printf format, $1, $2 }' > rhevm-vms-$DATE

# By default, output the number of VMs that are listed (minus the two header lines).
echo "There are $(( $(wc -l < rhevm-vms-$DATE) - 2 )) VMs now."

# If it's run with a -p, output the entire list, sorted oldest first.
case $INPUT in
    -p)
        cat rhevm-vms-$DATE
        ;;
esac

Output here:
VMs                                                          Date Stopped
----------                                                   ----------
dh-ose-node2                                                 2014-10-23T16:42:27.045-05:00
ospceph-sft                                                  2014-11-10T21:01:23.524-06:00
dh-ose-broker                                                2014-11-11T16:32:59.985-06:00
ks-sft-test1                                                 2014-11-13T21:00:02.828-06:00
dh-ose-node1                                                 2014-11-24T19:02:53.995-06:00
collier-atomic-pxe                                           2014-12-18T15:01:45.325-06:00
sat6-pxe-rhel7                                               2015-03-05T10:54:13.907-06:00
sat6-pxe-rhel6                                               2015-03-05T10:54:14.401-06:00
rhel-atomic-7.1-GA-mjenner                                   2015-03-05T10:54:14.489-06:00
workstation-goern-1                                          2015-04-16T07:59:47.704-05:00
RHEL-Atomic-Test-Sat6                                        2015-05-29T11:42:58.093-05:00
hk-nfv                                                       2015-09-29T16:36:00.975-05:00
ks-back                                                      2015-09-29T16:36:01.851-05:00
rhel-atomic-mjenner                                          2015-09-29T16:36:02.026-05:00
dellaccess                                                   2015-09-29T16:36:02.785-05:00
collier-atomic-pxe-1                                         2015-09-29T16:36:03.663-05:00

Now I have a decent idea of what VMs are out there, which ones haven't been powered on for months, and are candidates for deletion. Hope this helps.

Wednesday, July 8, 2015

Configure a Highly Available Kubernetes / etcd Cluster with Pacemaker on Fedora

I'm going to share some of the great work that Matt Farrellee, Rob Rati and Tim St. Clair have done with regard to figuring out $TOPIC - they get full credit for the technical details here.  It's really interesting work and I thought I'd share it with the upstream community.  Not to mention it gives me an opportunity to learn how this is all set up and configured.

In this configuration I will set up 5 virtual machines and one VIP:

If you are wondering how I set up this environment quickly and repeatably, check out omv from Purpleidea.  He's a clever guy with a great dev workflow.  In particular, have a look at the work he has done to put his great code into a package to make distribution easier.

In summary, I used Vagrant, KVM and omv to build and destroy this environment.  I won't go into too many details about how that all works, but feel free to ask questions in the comments if needed.  My omv.yaml file is located here; it might help you get up and running quickly.  Just make sure you have a Fedora 22 Vagrant box that matches the name in the file.  Yup, I run it all on my laptop.

Global configuration:

  • Configure /etc/hosts on all nodes so that name resolution works (omv can help here)
  • Share SSH key from master to all other nodes
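The second bullet can be done with ssh-copy-id from the master (hostnames are illustrative and assume the /etc/hosts entries from the first bullet are in place):

```shell
# Generate a key on the master if one doesn't exist yet.
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa

# Push the master's public key to every other node.
for node in kube-node-01 kube-node-02 kube-node-03 kube-node-04; do
    ssh-copy-id "$node"
done
```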