Wednesday, July 8, 2015

Configure a Highly Available Kubernetes / etcd Cluster with Pacemaker on Fedora

I'm going to share some of the great work that Matt Farrellee, Rob Rati and Tim St. Clair have done with regard to figuring out $TOPIC - they get full credit for the technical details here.  It's really interesting work and I thought I'd share it with the upstream community.  Not to mention, it gives me an opportunity to learn how this is all set up and configured.

In this configuration I will set up 5 virtual machines and one VIP:

fed-master1.example.com 192.168.123.100
fed-master2.example.com 192.168.123.101
fed-master3.example.com 192.168.123.102
fed-node1.example.com 192.168.123.103
fed-node2.example.com 192.168.123.104
fed-vip.example.com 192.168.123.105

If you are wondering how I set up this environment quickly and repeatably, check out omv from Purpleidea.  He's a clever guy with a great dev workflow.  In particular, have a look at the work he has done to put his code into a package to make distribution easier.

In summary here, I used Vagrant, KVM and omv to build and destroy this environment.  I won't go into too many details about how that all works, but feel free to ask questions in the comments if needed.  My omv.yaml file is located here; this might help you get up and running quickly.  Just make sure you have a Fedora 22 Vagrant box that matches the name in the file.  Yup, I run it all on my laptop.

Global configuration:

  • Configure /etc/hosts on all nodes so that name resolution works (omv can help here)
  • Share SSH key from master to all other nodes
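A minimal sketch of those two steps, using the hostnames and addresses from the table above (run the /etc/hosts append on every node, and the key copy from the master you'll drive the setup from):

```shell
# Append cluster name resolution to /etc/hosts (on all nodes)
cat >> /etc/hosts <<'EOF'
192.168.123.100 fed-master1.example.com fed-master1
192.168.123.101 fed-master2.example.com fed-master2
192.168.123.102 fed-master3.example.com fed-master3
192.168.123.103 fed-node1.example.com fed-node1
192.168.123.104 fed-node2.example.com fed-node2
192.168.123.105 fed-vip.example.com fed-vip
EOF

# Share the master's SSH key with the other nodes
for h in fed-master2 fed-master3 fed-node1 fed-node2; do
    ssh-copy-id root@$h
done
```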

Tuesday, June 30, 2015

Running Kubernetes in Offline Mode

Here I'll talk about how to run Kubernetes on a flight that doesn't have wifi... or in a Red Hat Summit hands-on lab that is completely disconnected.  To set some context: this is useful when I'm running a single-host Kubernetes configuration for a lab, or for development where network access is limited or non-existent.

The issue is that Kubernetes tries to pull the pause container whenever it launches a pod, so it connects to gcr.io (the Google Container Registry) to download the pause image.  In a disconnected environment this causes the pod to sit in a Pending state until it can pull down the pause container.

Here's what you can do to bypass that - at least the only workaround I know of: pull the pause container ahead of time, while you still have network access.

# docker pull gcr.io/google_containers/pause
Trying to pull repository gcr.io/google_containers/pause ...
6c4579af347b: Download complete 
511136ea3c5a: Download complete 
e244e638e26e: Download complete 
Status: Downloaded newer image for gcr.io/google_containers/pause:latest

# docker images
REPOSITORY                       TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
fedora/apache                    latest              1eff270e703a        7 days ago          649.7 MB
gcr.io/google_containers/pause   1.0                 6c4579af347b        11 months ago       239.8 kB
gcr.io/google_containers/pause   go                  6c4579af347b        11 months ago       239.8 kB
gcr.io/google_containers/pause   latest              6c4579af347b        11 months ago       239.8 kB
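As a side note (my addition, not part of the original workflow): if the disconnected machine never gets network access at all, the image can be pulled on a connected box and carried over as a tarball:

```shell
# On a connected machine: pull the image and export it to a tarball
docker pull gcr.io/google_containers/pause
docker save -o pause.tar gcr.io/google_containers/pause

# On the disconnected host: load the image back into docker
docker load -i pause.tar
```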


Saturday, June 27, 2015

Extending Storage on a Fedora Atomic Host

I had to spend some time understanding how to use docker-storage-setup on an Atomic host.  The docker-storage-setup tool comes installed by default and makes configuring storage on your Atomic host easier.  I didn't read any of the provided documentation (although that probably would have helped) other than the script itself, so pardon me if this is a duplicate of other info out there.  It was a great way to learn more about it.

The goal here is to add more disk space to an Atomic host.  By default, the cloud image that you download has one device (vda) that is 6GB in size.  When I'm testing many, many docker builds and iterating through the Fedora-Dockerfiles repo, that's just not enough space.  So, I need to know how to expand it.

To provide some context about my environment, I'm using a local KVM environment to hack around in.  The first thing I'll do is go ahead and add a few extra disks to my environment so I can do some testing of docker-storage-setup.  Here is what we will be modifying on our running Atomic VM:

My VM is called: atomic1
New disk 1: vdb (logical name presented to VM)
New disk 2: vdc (logical name presented to VM)
New disk 3: vdd (logical name presented to VM)
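
A hedged sketch of how those three disks might be created and attached to the running atomic1 guest with virsh - the image paths and the 10G size are arbitrary choices for illustration:

```shell
# Create a qcow2 image for each new disk and attach it to the VM
for d in b c d; do
    qemu-img create -f qcow2 /var/lib/libvirt/images/atomic1-vd$d.qcow2 10G
    virsh attach-disk atomic1 /var/lib/libvirt/images/atomic1-vd$d.qcow2 vd$d \
        --subdriver qcow2 --persistent
done
```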

As with anything you do regarding storage, make sure you have a backup.

Here is what it looks like on the Atomic VM before I add my disks:


# atomic host status
  TIMESTAMP (UTC)         VERSION   ID             OSNAME            REFSPEC                                                
* 2015-06-27 20:22:47     22.50     0eca6e0777     fedora-atomic     fedora-atomic:fedora-atomic/f22/x86_64/docker-host     
  2015-05-21 19:01:46     22.17     06a63ecfcf     fedora-atomic     fedora-atomic:fedora-atomic/f22/x86_64/docker-host     


# fdisk -l | grep vd
Disk /dev/vda: 6 GiB, 6442450944 bytes, 12582912 sectors
/dev/vda1  *      2048   616447   614400  300M 83 Linux
/dev/vda2       616448 12582911 11966464  5.7G 8e Linux LVM
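
Once a new device shows up in the guest, docker-storage-setup can be told to consume it.  A minimal sketch, assuming the first added disk appears as /dev/vdb:

```shell
# Point docker-storage-setup at the new device
echo 'DEVS="/dev/vdb"' >> /etc/sysconfig/docker-storage-setup

# Extend the volume group and grow the docker thin pool
docker-storage-setup
```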

Wednesday, May 6, 2015

How to Contribute to the "Container Best Practices Guide"

Hey there.  We are starting a new best practices guide for containers!  We'll cover tips and tricks for running containers on Fedora (rkt or Docker), CentOS, Red Hat Enterprise Linux and Atomic.  Topics will range from building single-app containers that run on a single host to building containers intended to be orchestrated across multiple hosts with a higher-level tool like OpenShift or Kubernetes.

Right now we are just getting started with this consolidation of container knowledge effort.  Please feel free to have a look at the Github repo and contribute by submitting a pull request.  The guide will be written in asciidoc so it's going to be very easy to contribute to.  There are three ways to render the asciidoc files into PDF or HTML format:

  • Install the appropriate packages (git asciidoc docbook-xsl fop make) on your Fedora host
  • Build your own container-best-practices (click the link to get the Dockerfile) image and do the processing inside the container
  • Pull the trusted image from the Fedora account on the Docker registry by issuing a "docker pull fedora/container-best-practices"
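For example, with the packages from the first option installed, rendering might look like this (the filename here is a placeholder):

```shell
# HTML output, written next to the source file
asciidoc container-best-practices.adoc

# PDF output via DocBook and FOP
a2x --fop -f pdf container-best-practices.adoc
```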

Wednesday, March 18, 2015

Syntax highlighting for asciidoc

Cool tip to track here.

http://www.methods.co.nz/asciidoc/userguide.html#_vim_syntax_highlighter

To enable syntax highlighting:
  • Put a Vim autocmd in your Vim configuration file (see the example vimrc file).
  • or execute the Vim command :set syntax=asciidoc.
  • or add the following line to the end of your AsciiDoc source files:
    // vim: set syntax=asciidoc:

Tuesday, January 13, 2015

Flannel and Docker on Fedora - Getting Started

Let's set up 3 Fedora servers for the purposes of testing flannel on Fedora.  These can be bare metal or VMs (on KVM, VMware, RHEV, etc.).  Why do we want to test this?  This is to demonstrate setting up the flannel overlay network and confirming connectivity.  Specifically, I want to test container connectivity across hosts: I'd like to make sure that container A on host A can talk to container B on host B.  I received quite a bit of guidance from Jeremy Eder of breakage.org - thanks for the tips!

Our 3 Flannel hosts:

fed-master 192.168.121.105
fed-minion1 192.168.121.166
fed-minion2 192.168.121.108

A few setup notes: I haven't looked at this on GCE or AWS.  It helps to add the hosts to /etc/hosts, or have some other DNS solution.  In my case, I set up these VMs with Vagrant on my laptop and modified /etc/hosts.

Software used on these Fedora hosts:
# rpm -qa | egrep "etc|docker|flannel"
flannel-0.2.0-1.fc21.x86_64
docker-io-1.4.0-1.fc21.x86_64
etcd-0.4.6-6.fc21.x86_64


On fed-master:
Look at networking before flannel configuration.
# ip a


Start etcd on fed-master.
# systemctl start etcd; systemctl status etcd
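
With etcd up, flannel of this vintage reads its overlay network definition from etcd before it will start.  A hedged sketch of publishing that config - the key is flannel's default, and the 10.20.0.0/16 range is an arbitrary choice for this example:

```shell
# Publish the flannel network configuration under its default etcd key
etcdctl set /coreos.com/network/config '{ "Network": "10.20.0.0/16" }'
```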