Notes on tech, mostly...

<b>DNS on AWS / GCP</b> (2020-04-01)

I have a zone that's hosted by AWS route53 called sysdeseng.com. My goal was to create a few machines on GCP and have them resolve to a delegated subdomain. For example:<div>
<br /></div>
<div>
test.scollier-gcp.sysdeseng.com</div>
<div>
<br /></div>
<div>
These are the steps required to do this:</div>
<div>
<br /></div>
<div>
1. Create the zone under GCP's "Network Services", "Cloud DNS" (a gcloud equivalent is sketched after this list)</div>
<div>
<ul>
<li>Give it a Zone name</li>
<li>Give it a DNS name: scollier-gcp.sysdeseng.com</li>
<li>Provide a description</li>
<li>Click Create</li>
<li>Note the name records, for example:</li>
<ul>
<li>ns-cloud-a1.googledomains.com.</li>
<li>ns-cloud-a2.googledomains.com.</li>
<li>ns-cloud-a3.googledomains.com.</li>
<li>ns-cloud-a4.googledomains.com.</li>
</ul>
</ul>
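<div>
<br /></div>
<div>
If you prefer the CLI, the same zone can be created with gcloud. This is a minimal sketch, assuming the Cloud SDK is installed and pointed at the right project (the zone name "scollier-gcp" is just a label I chose):</div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;"># Create the managed zone, then list the name servers GCP assigned to it
$ gcloud dns managed-zones create scollier-gcp \
    --dns-name=scollier-gcp.sysdeseng.com. \
    --description="Delegated subdomain for GCP hosts"
$ gcloud dns managed-zones describe scollier-gcp --format="value(nameServers)"</code></pre>
</div>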
<div>
2. Go to AWS route53 and create an NS record for this zone under the sysdeseng.com domain (a CLI sketch follows this list).</div>
</div>
<div>
<ul>
<li>Click on the sysdeseng.com zone in route53</li>
<li>Create a record set</li>
<ul>
<li>On the right hand side, provide the name: scollier-gcp</li>
<li>Change the type to NS</li>
<li>Copy the nameservers from GCP and paste into the NS record.</li>
<li>Click Create</li>
</ul>
</ul>
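<div>
<br /></div>
<div>
The delegation can also be done with the AWS CLI. A sketch, assuming your credentials are set up; the hosted zone ID is a placeholder you would look up with "aws route53 list-hosted-zones":</div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">$ cat delegate-gcp.json
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "scollier-gcp.sysdeseng.com",
        "Type": "NS",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "ns-cloud-a1.googledomains.com." },
          { "Value": "ns-cloud-a2.googledomains.com." },
          { "Value": "ns-cloud-a3.googledomains.com." },
          { "Value": "ns-cloud-a4.googledomains.com." }
        ]
      }
    }
  ]
}
$ aws route53 change-resource-record-sets --hosted-zone-id ZEXAMPLE12345 \
    --change-batch file://delegate-gcp.json</code></pre>
</div>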
<div>
3. Create the A record on GCP (a CLI sketch follows this list)</div>
</div>
<div>
<ul>
<li>Return to GCP</li>
<ul>
<li>Go to "VPC Network", then "External IP Addresses"</li>
<ul>
<li>Create an external IP address, note it</li>
</ul>
<li>Go back to GCP's "Network Services", "Cloud DNS" and click the zone</li>
<li>Add a record set</li>
<ul>
<li>Give it a DNS Name</li>
<li>Provide the external IP address </li>
<li>Click Create</li>
</ul>
</ul>
</ul>
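<div>
<br /></div>
<div>
Again, roughly the same steps from the CLI. A sketch, with the region, address name, and TTL as assumptions:</div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;"># Reserve a static external IP and capture it
$ gcloud compute addresses create scollier-gcp-test --region=us-central1
$ IP=$(gcloud compute addresses describe scollier-gcp-test \
      --region=us-central1 --format="value(address)")
# Point an A record in the delegated zone at that IP
$ gcloud dns record-sets transaction start --zone=scollier-gcp
$ gcloud dns record-sets transaction add "$IP" \
    --name=test.scollier-gcp.sysdeseng.com. --type=A --ttl=300 --zone=scollier-gcp
$ gcloud dns record-sets transaction execute --zone=scollier-gcp</code></pre>
</div>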
<div>
4. Test that it works</div>
</div>
<div>
<ul>
<li>Go to Linux terminal</li>
</ul>
$ dig +short testing.scollier-gcp.sysdeseng.com.</div>
<div>
34.67.155.244</div>
<div>
<br /></div>
<div>
<div>
$ dig +short SOA scollier-gcp.sysdeseng.com</div>
<div>
ns-cloud-a1.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 259200 300</div>
</div>
<div>
<br /></div>
<div>
<div>
$ dig +short SOA sysdeseng.com</div>
<div>
ns-679.awsdns-20.net. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400</div>
</div>
<div>
<br /></div>
<b>Kicking the tires of Prometheus using Docker on Fedora</b> (2017-06-01)

Straight from the <a href="https://prometheus.io/" target="_blank">Prometheus</a> documentation: "Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud."<br />
<div>
<br /></div>
<div>
I hadn't had a chance to take a look at Prometheus before this. Here I'll go over the steps I followed to get a working local Prometheus install monitoring my local Docker daemon, so I could see metrics through the Prometheus dashboard.</div>
<div>
<br /></div>
<div>
First things first, here are the versions of what I started with (at least until we hit the problem described below):</div>
<div>
<ul>
<li>Fedora 25</li>
<li>Docker</li>
<ul>
<li>docker-1.12.6-6.gitae7d637.fc25.x86_64</li>
<li>docker-common-1.12.6-6.gitae7d637.fc25.x86_64</li>
<li>docker-latest-1.12.6-2.git51ef5a8.fc25.x86_64</li>
</ul>
<ul></ul>
<li>Prometheus</li>
<ul>
<li>prom/prometheus b0195cb1a666</li>
</ul>
</ul>
<div>
So, there were a couple of places I went for documentation to get started:</div>
</div>
<div>
<br /></div>
<div>
Prometheus</div>
<div>
<a href="https://prometheus.io/docs/introduction/getting_started/" target="_blank">https://prometheus.io/docs/introduction/getting_started/</a></div>
<div>
<br /></div>
<div>
Docker</div>
<div>
<a href="https://docs.docker.com/engine/admin/prometheus/" target="_blank">https://docs.docker.com/engine/admin/prometheus/</a></div>
<div>
<br /></div>
<div>
So, following those docs, I tried to use the default Fedora Docker configuration. That did not work: the Docker documentation was off, at least for the version of Docker I was using. By default, Fedora ships a Docker package that is a bit out of date. Here are the steps I took and what I had to do as a workaround.<br />
<br />
<a name='more'></a></div>
<div>
<br /></div>
<div>
First, grab the latest version of Prometheus from <a href="https://hub.docker.com/r/prom/prometheus/" target="_blank">Docker hub</a>.</div>
<div>
<br /></div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">$ sudo docker pull prom/prometheus</code></pre>
</div>
<div>
<br /></div>
<div>
Start the Prometheus container and test dashboard access.</div>
<div>
<br /></div>
<div>
Create a /tmp/prometheus.yml file with the following contents:</div>
<div>
<br /></div>
<div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;"># my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first.rules"
  # - "second.rules"

# The scrape configurations: Prometheus itself, plus the Docker daemon.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'docker'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:9323']</code></pre>
</div>
</div>
<div>
<br /></div>
<div>
Run the Docker container</div>
<div>
<br /></div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">$ sudo docker run -dt -p 9090:9090 -v /tmp/prometheus.yml:/etc/prometheus/prometheus.yml docker.io/prom/prometheus</code></pre>
</div>
<div>
<br /></div>
<div>
The next thing to do is enable the Docker daemon to expose an endpoint that Prometheus can monitor. This is where the Docker docs are off a bit. They suggest that you create an /etc/docker/daemon.json with the following configuration.<br />
<br /></div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">{
  "metrics-addr" : "127.0.0.1:9323",
  "experimental" : true
}
</code></pre>
<div>
<br />
The issue is that this older Docker version doesn't respect that configuration. So, instead, I chose to use the latest version of Docker available for Fedora. To do this, I went to the <a href="https://docs.docker.com/engine/installation/linux/fedora/#install-using-the-repository" target="_blank">Docker website for instructions</a>. Following those got me to these versions:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">$ dnf list docker-ce.x86_64 --showduplicates | sort -r
Last metadata expiration check: 0:35:28 ago on Thu Jun 1 16:20:35 2017.
Installed Packages
docker-ce.x86_64 17.05.0.ce-1.fc25 docker-ce-edge
docker-ce.x86_64 17.05.0.ce-1.fc25 @docker-ce-edge
docker-ce.x86_64 17.05.0.ce-1.fc25 @docker-ce-edge
docker-ce.x86_64 17.04.0.ce-1.fc25 docker-ce-edge
docker-ce.x86_64 17.03.1.ce-1.fc25 docker-ce-stable
docker-ce.x86_64 17.03.0.ce-1.fc25 docker-ce-stable
Available Packages</code></pre>
</div>
</div>
<div>
<br /></div>
<div>
Now that I had those, I could go back to the previous Docker documentation to get started with Prometheus. That link is here: <a href="https://docs.docker.com/engine/admin/prometheus/" target="_blank">https://docs.docker.com/engine/admin/prometheus/</a>, and for the most part it worked. The container started, but when I tried to hit the Prometheus targets page at http://localhost:9090/targets, I saw the following error:</div>
<div>
<br /></div>
<div>
Get http://localhost:9323/metrics: dial tcp 127.0.0.1:9323: getsockopt: connection refused</div>
<div>
<br />
The "Docker" target was in a "Down" state. Through a bit more investigation, I found two changes that brought it up:</div>
<div>
<br /></div>
<div>
1. Change the metrics-addr in /etc/docker/daemon.json from 127.0.0.1:9323 to 0.0.0.0:9323</div>
<div>
2. Restart the Prometheus container with --network=host (both changes are sketched below)</div>
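<div>
<br /></div>
<div>
Concretely, the two changes look roughly like this (a sketch; CONTAINER_ID is a placeholder from 'docker ps', and with --network=host the -p 9090:9090 mapping is no longer needed):</div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">$ cat /etc/docker/daemon.json
{
  "metrics-addr" : "0.0.0.0:9323",
  "experimental" : true
}
$ sudo systemctl restart docker
# Replace the old Prometheus container with one on the host network
$ sudo docker rm -f CONTAINER_ID
$ sudo docker run -dt --network=host \
    -v /tmp/prometheus.yml:/etc/prometheus/prometheus.yml \
    docker.io/prom/prometheus</code></pre>
</div>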
<div>
<br /></div>
<div>
Now I have the target as up, and a graph with Docker attributes that I can start monitoring. Check out the graph below.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-cc3Ea7SbJSE/WTCO2ijiCmI/AAAAAAAAEF8/VplSjC6WiuYX9Idsrd2hJSZDCNf2fMnxwCLcB/s1600/Screenshot%2Bfrom%2B2017-06-01%2B17-01-41.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="850" data-original-width="1600" height="170" src="https://4.bp.blogspot.com/-cc3Ea7SbJSE/WTCO2ijiCmI/AAAAAAAAEF8/VplSjC6WiuYX9Idsrd2hJSZDCNf2fMnxwCLcB/s320/Screenshot%2Bfrom%2B2017-06-01%2B17-01-41.png" width="320" /></a></div>
<div>
<br /></div>
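<div>
As a quick sanity check outside the dashboard, you can also hit the daemon's metrics endpoint directly. A sketch (the exact metric names returned depend on the Docker version):</div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">$ curl -s http://localhost:9323/metrics | head</code></pre>
</div>
<div>
<br /></div>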
<div>
At this point, I can start evaluating Prometheus and learning more about it.</div>
<div>
<br /></div>
<div>
Thanks for hanging in there. If you have any questions or comments, please do engage below.</div>
<b>Testing OpenShift on OpenStack using Snapshots</b> (2017-02-04)<br />
The goal here is to allow me to test out OpenShift Container Platform on top of Red Hat OpenStack Platform. I want to be able to build and tear down the environment quickly so I can check out different configurations. OpenStack provides a way for me to do this via snapshots.<br />
<br />
The first thing I did was upload a RHEL 7 image. Then I booted and configured two servers from that image:<br />
<ul>
<li>Bastion Host</li>
<li>Master-Infra-AppNode</li>
</ul>
<div>
To configure these servers, I followed the <a href="https://access.redhat.com/articles/2743631" target="_blank">Red Hat Reference Architecture for Red Hat OpenShift Container Platform 3 on Red Hat OpenStack Platform 8</a> up to page 47, right before deploying OpenShift Container Platform. This allowed me to update the servers and configure the interfaces, sudo access, etc. Here is what my servers look like:</div>
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">
$ nova list
+--------------------------------------+---------------------------+---------+------------+-------------+------------------------------------------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------------------------+---------+------------+-------------+------------------------------------------------------------------------+
| 82a42602-030f-4137-94bb-bac5f275dc1b | bastion-gold | SHUTOFF | - | Shutdown | tenant-network=172.18.20.13; control-network=192.168.x.6, 10.19.x.80 |
| 17a505d0-9252-4a65-a0c8-196f6f25e605 | master-infra-appnode-gold | SHUTOFF | - | Shutdown | tenant-network=172.18.20.4; control-network=192.168.x.5, 10.19.x.53 |
+--------------------------------------+---------------------------+---------+------------+-------------+------------------------------------------------------------------------+
</code></pre>
<div>
<br /></div>
<div>
After the servers were configured, I shut them down and created an image from each: "bastion-gold" and "master-infra-appnode-gold". This allows me to create my OpenShift Container Platform environment from these images. The steps I followed to create the snapshots are:</div>
<div>
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">$ openstack server list
$ nova image-create --poll master-infra-appnode-gold sc-master-0.rhops.eng.x.x.redhat.com-snap
$ nova image-create --poll master-infra-appnode-gold sc-node-0.rhops.eng.x.x.redhat.com-snap
$ nova image-create --poll master-infra-appnode-gold sc-node-1.rhops.eng.x.x.redhat.com-snap
$ nova image-create --poll bastion-gold sc-bastion.rhops.eng.x.x.redhat.com-snap
</code></pre>
<br /></div>
<div>
<a name='more'></a>This gives me a snapshot for my OpenShift Container Platform Master, Application and Bastion hosts. My new list of images looks like this:</div>
<div>
<br /></div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">
$ openstack image list
+--------------------------------------+-------------------------------------------+--------+
| ID | Name | Status |
+--------------------------------------+-------------------------------------------+--------+
| fa45f2e3-14bc-4592-8871-c555ea1f5ced | sc-bastion.rhops.eng.x.x.redhat.com-snap | active |
| 8c14bf10-eba5-4968-bbf3-1f327461f8cd | sc-node-1.rhops.eng.x.x.redhat.com-snap | active |
| 128fa2b3-ac04-4465-b9b0-dcfbfbfb77d7 | sc-node-0.rhops.eng.x.x.redhat.com-snap | active |
| 1a8e5507-665e-4657-a09e-f470fa5a5243 | sc-master-0.rhops.eng.x.x.redhat.com-snap | active |
| 86182aaa-dc67-479c-897a-7d6dc3388bb4 | rhel7 | active |
+--------------------------------------+-------------------------------------------+--------+
</code></pre>
</div>
<div>
<br /></div>
<div>
Now that I have all my snapshots created, I can boot my servers. I am also using cinder volumes for the Docker storage, so I need to create those and attach to the servers. Here's the simple script to create the cinder volumes:</div>
<div>
<br /></div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">
#!/bin/bash
source ./vars.sh

VOLUME_SIZE=${VOLUME_SIZE:-15}

for NODE in $MASTERSAPPS; do
  cinder create --name ${NODE}-docker ${VOLUME_SIZE}
done
</code></pre>
</div>
<div>
<br /></div>
<div>
After the cinder volumes are created, I just need to boot the new servers from the snapshots and attach the cinder volumes. Simple script here:</div>
<div>
<br /></div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">#!/bin/bash
source ./vars.sh

# Deploy bastion node
nova boot --flavor m1.small --image sc-bastion.rhops.eng.x.x.redhat.com-snap \
  --key-name ocp3_rsa \
  --nic net-name=control-network --nic net-name=tenant-network \
  --user-data=user-data/sc-bastion.yaml \
  sc-bastion.${DOMAIN}

# Deploy master node
for HOSTNAME in "sc-master-0"; do
  VOLUMEIDMASTER=$(cinder show ${HOSTNAME}.${DOMAIN}-docker | grep ' id ' | awk '{print $4}')
  nova boot --flavor m1.large --image sc-master-0.rhops.eng.x.x.redhat.com-snap \
    --key-name ocp3_rsa \
    --nic net-name=control-network --nic net-name=tenant-network \
    --block-device source=volume,dest=volume,device=vdb,id=${VOLUMEIDMASTER} \
    --user-data=user-data/${HOSTNAME}.yaml \
    ${HOSTNAME}.${DOMAIN}
done

# Deploy application node 0
for HOSTNAME in "sc-node-0"; do
  VOLUMEIDNODE0=$(cinder show ${HOSTNAME}.${DOMAIN}-docker | grep ' id ' | awk '{print $4}')
  nova boot --flavor m1.large --image sc-node-0.rhops.eng.x.x.redhat.com-snap \
    --key-name ocp3_rsa \
    --nic net-name=control-network --nic net-name=tenant-network \
    --block-device source=volume,dest=volume,device=vdb,id=${VOLUMEIDNODE0} \
    --user-data=user-data/${HOSTNAME}.yaml \
    ${HOSTNAME}.${DOMAIN}
done

# Deploy application node 1
for HOSTNAME in "sc-node-1"; do
  VOLUMEIDNODE1=$(cinder show ${HOSTNAME}.${DOMAIN}-docker | grep ' id ' | awk '{print $4}')
  nova boot --flavor m1.large --image sc-node-1.rhops.eng.x.x.redhat.com-snap \
    --key-name ocp3_rsa \
    --nic net-name=control-network --nic net-name=tenant-network \
    --block-device source=volume,dest=volume,device=vdb,id=${VOLUMEIDNODE1} \
    --user-data=user-data/${HOSTNAME}.yaml \
    ${HOSTNAME}.${DOMAIN}
done
</code></pre>
<div>
Now I'll attach my floating IPs to my Bastion and Master nodes:</div>
<div>
<br /></div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">openstack server add floating ip sc-bastion.rhops.eng.x.x.redhat.com 10.19.x.80
openstack server add floating ip sc-master-0.rhops.eng.x.x.redhat.com 10.19.x.53</code></pre>
</div>
<div>
</div>
<br />
<div>
Now, remember, after booting these new servers, you'll need to update DNS with the new IP addresses on your control_network per the reference architecture. You'll also need to adjust your Ansible inventory file to reflect the new IPs. Otherwise, you are now good to go.</div>
<div>
<br /></div>
<div>
To reset my environment, all I have to do is "nova delete" the servers that were booted from the snapshots, then run the above scripts to get right back to a state where I can re-install OpenShift Container Platform (see the sketch below).</div>
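<div>
<br /></div>
<div>
A minimal teardown sketch, assuming the same vars.sh and host names as above (the gold images and snapshots themselves are left untouched):</div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">#!/bin/bash
source ./vars.sh

# Delete only the running servers; the gold images and snapshots remain
for SERVER in sc-bastion sc-master-0 sc-node-0 sc-node-1; do
  nova delete ${SERVER}.${DOMAIN}
done

# Drop the Docker volumes so the volume-create script can re-make them
for NODE in $MASTERSAPPS; do
  cinder delete ${NODE}-docker
done</code></pre>
</div>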
<div>
<br /></div>
<div>
Thanks for reading!</div>
<div>
<br /></div>
<div>
P.S. Credit to Mark Lamourine for writing that refarch and letting me "borrow" some of his scripts.</div>
<div>
<br /></div>
<b>OpenShift Cluster Up on Fedora</b> (2016-08-31)<br />
Looking for a way to get an OpenShift Origin instance up and running quickly on your local laptop? Look no further: 'oc cluster up' is here. Check out the documentation <a href="https://github.com/openshift/origin/blob/master/docs/cluster_up_down.md" target="_blank">here</a>, which points you <a href="https://github.com/openshift/origin/releases/tag/v1.3.0-alpha.3" target="_blank">here</a> for the actual client bits. Let's get started.<br />
<br />
A quick scan of the environment before running 'oc cluster up' so I know what I'm getting.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">$ cat /etc/fedora-release
Fedora release 24 (Twenty Four)
$ docker --version
Docker version 1.10.3, build 1ecb834/1.10.3
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
</code></pre>
<br />
Grab the latest client, untar it, change into the proper directory and get the version.<br />
<br />
<div>
</div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">$ wget https://github.com/openshift/origin/releases/download/v1.3.0-alpha.3/openshift-origin-client-tools-v1.3.0-alpha.3-7998ae4-linux-64bit.tar.gz
$ tar xzvf openshift-origin-client-tools-v1.3.0-alpha.3-7998ae4-linux-64bit.tar.gz
$ cd openshift-origin-client-tools-v1.3.0-alpha.3-7998ae4-linux-64bit/
$ ./oc version
oc v1.3.0-alpha.3
kubernetes v1.3.0+507d3a7
features: Basic-Auth GSSAPI Kerberos SPNEGO
</code></pre>
<br />
<div>
</div>
<div>
Start the cluster.<br />
<br />
<a name='more'></a></div>
<div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
$ ./oc cluster up
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v1.3.0-alpha.3 image ...
Pulling image openshift/origin:v1.3.0-alpha.3
Pulled 0/3 layers, 3% complete
Pulled 1/3 layers, 57% complete
Pulled 2/3 layers, 93% complete
Pulled 3/3 layers, 100% complete
Extracting
Image pull complete
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ...
WARNING: Binding DNS on port 8053 instead of 53, which may be not be resolvable from all clients.
-- Checking type of volume mount ...
Using nsenter mounter for OpenShift volumes
-- Checking Docker version ... OK
-- Creating host directories ... OK
-- Finding server IP ...
Using 192.168.0.102 as the server IP
-- Starting OpenShift container ...
Creating initial OpenShift configuration
Starting OpenShift using container 'origin'
Waiting for API server to start listening
OpenShift server started
-- Installing registry ... OK
-- Installing router ... OK
-- Importing image streams ... OK
-- Importing templates ... OK
-- Login to server ... OK
-- Creating initial project "myproject" ... OK
-- Server Information ...
OpenShift server started.
The server is accessible via web console at:
https://192.168.0.102:8443
You are logged in as:
User: developer
Password: developer
To login as administrator:
oc login -u system:admin
</code></pre>
<br />
Have a look at the environment again.
<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/openshift/origin-deployer v1.3.0-alpha.3 d4a68c00e564 3 weeks ago 483.5 MB
docker.io/openshift/origin-docker-registry v1.3.0-alpha.3 98e3a96eb8f8 3 weeks ago 348.9 MB
docker.io/openshift/origin-haproxy-router v1.3.0-alpha.3 15baa67d10d5 3 weeks ago 502.7 MB
docker.io/openshift/origin v1.3.0-alpha.3 93fd7655df0d 3 weeks ago 483.5 MB
docker.io/openshift/origin-pod v1.3.0-alpha.3 1b4bb3233091 3 weeks ago 1.591 MB
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c3efa00e9e08 openshift/origin-docker-registry:v1.3.0-alpha.3 "/bin/sh -c 'DOCKER_R" 29 seconds ago Up 26 seconds k8s_registry.493e070d_docker-registry-1-e9ero_default_82f06d76-6fc1-11e6-80ab-3c970ee91ed3_c8d13209
7b3d1ae9364a openshift/origin-haproxy-router:v1.3.0-alpha.3 "/usr/bin/openshift-r" 32 seconds ago Up 28 seconds k8s_router.ffbb3abd_router-1-szsrv_default_82940708-6fc1-11e6-80ab-3c970ee91ed3_1e53f729
54f287d3ac8c openshift/origin-pod:v1.3.0-alpha.3 "/pod" 41 seconds ago Up 39 seconds k8s_POD.e4a40125_docker-registry-1-e9ero_default_82f06d76-6fc1-11e6-80ab-3c970ee91ed3_128e75f1
ca3aa55f8db8 openshift/origin-pod:v1.3.0-alpha.3 "/pod" 42 seconds ago Up 40 seconds k8s_POD.9039df33_router-1-szsrv_default_82940708-6fc1-11e6-80ab-3c970ee91ed3_9992b574
1326f81e48ee openshift/origin:v1.3.0-alpha.3 "/usr/bin/openshift s" About a minute ago Up About a minute origin
</code></pre>
</div>
<div>
</div>
<div>
<br />
We have a new interface.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
$ ip a s veth1a784c0
27: veth1a784c0@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 5e:f6:3a:9d:35:cb brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::5cf6:3aff:fe9d:35cb/64 scope link
valid_lft forever preferred_lft forever
$ brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.0242c270a05e       no              veth1a784c0
virbr0          8000.000000000000       yes
</code></pre>
<br />
Now just use the links provided at the end of the 'oc cluster up' output to access the OpenShift environment.
<br />
<div>
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-eAZW4VUZYgI/V8dMQwW839I/AAAAAAAAD18/0YLr-eP_1mMB62WyAf337EH07BCBCyUEwCLcB/s1600/Screenshot%2Bfrom%2B2016-08-31%2B16-29-24.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="170" src="https://3.bp.blogspot.com/-eAZW4VUZYgI/V8dMQwW839I/AAAAAAAAD18/0YLr-eP_1mMB62WyAf337EH07BCBCyUEwCLcB/s320/Screenshot%2Bfrom%2B2016-08-31%2B16-29-24.png" width="320" /></a></div>
<div>
</div>
<div>
</div>
<div>
</div>
<div>
</div>
<div>
<br />
<br />
Accept the certificate and log in with the credentials provided.<br />
<br /></div>
<div>
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-UJKM4JL_BEE/V8dNGdk8ndI/AAAAAAAAD2A/ZcIndUkWhUkRHKAOXfqmkpwlqgtBL_0jgCLcB/s1600/Screenshot%2Bfrom%2B2016-08-31%2B16-30-34.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="170" src="https://2.bp.blogspot.com/-UJKM4JL_BEE/V8dNGdk8ndI/AAAAAAAAD2A/ZcIndUkWhUkRHKAOXfqmkpwlqgtBL_0jgCLcB/s320/Screenshot%2Bfrom%2B2016-08-31%2B16-30-34.png" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: left;">
</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Now you can start to create projects via the GUI.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-1-2SwAqldVI/V8dNHpRk1sI/AAAAAAAAD2E/3xgrIlm63kY1RbBTh5McIBkGtZUyEKtlwCLcB/s1600/Screenshot%2Bfrom%2B2016-08-31%2B16-32-57.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="170" src="https://4.bp.blogspot.com/-1-2SwAqldVI/V8dNHpRk1sI/AAAAAAAAD2E/3xgrIlm63kY1RbBTh5McIBkGtZUyEKtlwCLcB/s320/Screenshot%2Bfrom%2B2016-08-31%2B16-32-57.png" width="320" /></a></div>
<div>
</div>
<div>
</div>
<div>
</div>
<div>
<br />
Log into the CLI as administrator and see what's there.<br />
<br /></div>
<div>
</div>
<div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">
$ oc get nodes
NAME STATUS AGE
192.168.0.102 Ready 8m
$ oc get projects
NAME DISPLAY NAME STATUS
default Active
kube-system Active
myproject My Project Active
oc-cluster-up-test oc-cluster-up-test Active
openshift Active
openshift-infra Active
</code></pre>
<br />
I was able to deploy an ephemeral Jenkins application (in the oc-cluster-up-test project above) to do some further testing; roughly the commands sketched below. It actually uses xip.io for wildcard DNS. Slick.<br />
<br /></div>
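<div>
A sketch of that deployment, assuming the stock jenkins-ephemeral template is available in the cluster's openshift namespace:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">$ ./oc login -u developer -p developer
$ ./oc new-project oc-cluster-up-test
$ ./oc new-app jenkins-ephemeral
$ ./oc get pods</code></pre>
</div>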
This is nuts. How much easier can this get? By the way, this took around 2 minutes to set up, and when I'm finished with it, it's an 'oc cluster down' to stop the cluster! I work on OpenShift and I hadn't tried this before. Shame on me. Nice work, OpenShift team.
</div>
</div>
</div>
</div>
<b>Fedora Flock - 2016 - Day 4 - Last Day</b> (2016-08-05)

Day 4, the last day, had only two sessions that I planned to attend. I started out by going to an "Ansible best practices Working Session" by Michael Scherer. The goal was to cover Ansible basics, best practices, and how the Fedora Infrastructure team applies them. One example he used was checking the checksums on files before you replace them and restart services with Ansible - in particular, ssh config files. You can imagine what happens if you restart ssh on your clusters across datacenters and you break ssh... no more Ansible. Another best practice is to leverage the pkg module, which can determine which package manager is being used by the host and adjust accordingly. The third best practice was to be careful about how you assign variables; try to use locally scoped variables when possible. The Fedora Infrastructure team keeps their Ansible code <a href="https://infrastructure.fedoraproject.org/cgit/ansible.git" target="_blank">here</a>. Michael spent quite a bit of time covering the organization of their playbooks, roles, groups, tasks, and handlers. A couple of tools that might be helpful:<br />
<ul>
<li><a href="https://github.com/willthames/ansible-lint" target="_blank">Ansible Lint</a></li>
<li><a href="https://github.com/willthames/ansible-review" target="_blank">Ansible Review</a></li>
</ul>
Michael also walked through a playbook in detail with the upstream maintainer of the mailman package. There were quite a few best practices thrown around for that one.<br />
<br />
After the session Michael and I had a chance to sit down and talk about the OpenShift Origin deployment for the cloud working group. Some decisions need to be made:<br />
<ul>
<li>Deploy Origin containers on Fedora Atomic or Fedora 24</li>
<li>Set some expectations that this may be redeployed while we are learning?</li>
<li>Bare metal or OpenStack?</li>
<li>Storage for the registry? Swift (not in Fedora?), NFS (hope not)</li>
<li>Architecture: What do we want to deploy? Doesn't have to be production quality.</li>
</ul>
<div>
So that's the great thing about coming to a conference like this: you get a chance to put some faces to names and talk about fun, important projects. Flock is now over, and my overall impression is that this conference was run very well. Lots of activities, the food was great, the people were great, the sessions were great. I'm looking forward to my next Flock.</div>
<br />
<br />
<b>Fedora Flock - 2016 - Day 3</b> (2016-08-04)

Third day! Before I get started on my session logging today, check out the picture of all the attendees at Flock that we took last night before the cruise in Krakow.<br />
<br />
<blockquote class="twitter-tweet" data-lang="en">
<div dir="ltr" lang="en">
The Flock 2016 group photo. <a href="https://twitter.com/hashtag/FlocktoFedora?src=hash">#FlocktoFedora</a> <a href="https://t.co/ZlAfGXcRhW">pic.twitter.com/ZlAfGXcRhW</a></div>
— Fedora Project (@fedora) <a href="https://twitter.com/fedora/status/760940461091586048">August 3, 2016</a></blockquote>
<br />
Today we started with lightning talks for an hour. I was <a href="https://fedoraproject.org/wiki/Flock/Lightning_Talks_2016#Lightning_Talks_2016_-_4_August_2016_.40_09:00_Local_Time_in_Krakow.2C_Poland" target="_blank">second up</a> and presented <a class="g-profile" href="https://plus.google.com/108052331678796731786" target="_blank">+OpenShift</a> on <a class="g-profile" href="https://plus.google.com/112917221531140868607" target="_blank">+Fedora Project</a>. That was my first time presenting a lightning talk and my first time attending other lightning talks. I really like the format for both. You'd be surprised at how much material you can cover in 5 minutes.<br />
<br />
Today is also hack session day. The sessions I am attending are "<a href="https://flock2016.sched.org/event/f507de787262c2bc878ac5bf064d70fc" target="_blank">Building a Fedora Containers Library</a>", "<a href="https://flock2016.sched.org/event/9b852803e6bc0f9287143b44dd446dd1" target="_blank">OpenShift on Fedora</a>", and "<a href="https://flock2016.sched.org/event/885db6dea7b5538bdbe898a33eb390f8" target="_blank">Fedora PRD Workshop</a>". The sessions are two hours each.<br />
<br />
<a class="g-profile" href="https://plus.google.com/116763716624645578717" target="_blank">+Josh Berkus</a> kicked off "Building a Fedora Containers Library" with a slide that had instructions to git clone the lab material. That's the proper way to start a workshop :). Josh walked us through building a <a class="g-profile" href="https://plus.google.com/114169631260847972794" target="_blank">+PostgreSQL</a> image step by step, with lots of best practices discussed along the way. This session was particularly insightful because Josh is, well... extremely knowledgeable on PostgreSQL. That knowledge coupled with his Docker chops translated into an outstanding session. Great hack session.<br />
<br />
Next was "OpenShift on Fedora", a hack session led by <a href="https://github.com/soltysh" target="_blank">Maciej Szulik</a>. The material for the lab is located <a href="https://github.com/soltysh/talks/blob/master/2016/flock/scenario.md" target="_blank">here</a>. We started out by leveraging Vagrant to spin up an environment in which we could issue an "oc cluster up", which spins up everything you need to get started. The lab consisted of deploying pods and exploring pods, services, replication controllers, etc. Maciej did a great job explaining some concepts in OpenShift that I wasn't really getting, such as deployment configs, image streams, and horizontal scaling. I didn't quite finish the lab, but the good thing is you can take it with you in the Vagrant box and the material on GitHub. Great hack session.<br />
<br />
<a name='more'></a><br />
The final session of the day was the PRD session. This is where different members of the Fedora workgroups came together to discuss strategy and how to move forward. Very interesting session to see how the sausage is made. We got through the Server and Workstation groups and then transitioned into Cloud / Atomic. Unfortunately we didn't have too much time to get into the details - we spent the time focusing on vision and mission statement. I'll record here the tactical things I'd like to see done that I think we can actually accomplish:<br />
<br />
<ul>
<li>Drive Fedora Atomic into cloud providers: GCE, Azure, AWS</li>
<ul>
<li>Think marketplace quickstarts here</li>
<li>Think ansible playbook driven deploys here</li>
</ul>
<li>Two options for Cloud providers</li>
<ul>
<li>All in one for developers / ops to kick the tires on</li>
<li>Production quality one for ops to kick the tires on </li>
</ul>
<li>Fedora based CDK</li>
<li>Perhaps start work on documentation / demos for ^</li>
</ul>
<div>
This was another useful session. I'd like to see the Cloud workgroup dedicate some time to ironing out the rest of the PRD.</div>
<div>
<br /></div>
<div>
Today ends with a stint at a local brewery. Another great day of Flock in the books.</div>
<b>Fedora Flock - 2016 - Day 2</b> (2016-08-03)

Day 2 starts soon. Again, this will be high level notes from each session that I attend. I'm quite sure that I won't capture everything. Head here for my <a href="http://www.colliernotes.com/2016/08/fedora-flock-2016-day-1.html" target="_blank">Day 1 notes</a>.<br />
<br />
Today started out with "Continuous Integration and the Glorious Future". Tim kicked it off with some CI history - dev, dev, dev, then integration. That didn't work too well. He provided some nice perspective that I hadn't had before. Tim also provided a current state of the union on Fedora automation and items that are in progress, including build automations, build self-tests, and automated deployments. Some of the items that need work are presentation of data and results, and keeping the builds fast. More great perspective on the feedback loop and what he wants out of it: how long after a package is updated can a new compose be generated, how long after the compose is built until the tests are run, and how long after the tests are run until the developer is notified of success or failure. The QA team is also evaluating how to enable contributors to write their own automated tests. Nonstop Fedora. Tim covered quite a bit more on the why and how during his presentation. Great presentation.<br />
<br />
Next up was "Modularity: Why, where we are, and how to get involved" by <a class="g-profile" href="https://plus.google.com/104114458047447044962" target="_blank">+Langdon White</a>. Langdon kicked off by covering some history, which dated back to the "Rings Proposal": starting from "JeOS", which would be highly curated, out to the outer rings, which are not so curated. He provided some great analogies about how one size doesn't fit all - comparing package lifecycles and how they don't align across packages. Then he moved into modules:<br />
<br />
<ul>
<li>A module is a thing that's managed as a logical unit.</li>
<li>A module is a thing that promises an external, unchanging API</li>
<li>A module is a thing that may have many, unexposed binary artifacts to support the external API</li>
<li>A module may "contain" other modules and is referred to as a "module stack"</li>
</ul>
<br />
The process: inputs -> activities -> outputs -> outcomes -> impact.<br />
<br />
We saw an example of a module input file which explained references, profiles, components, and filters.<br />
<br />
Progress thus far is an established Modularity WG, implemented a dnf plugin, implemented an alpha version of module build pipeline, ability to coalesce modules for testing, and kicked off a base-runtime.<br />
<a name='more'></a>Langdon then did successful demos of:<br />
<br />
<ul>
<li>Searching for kernel "Fedora modules" and installing that module. </li>
<li>Web server demo which really focused on how profiles are used.</li>
<li>LAMP stack demo which showcased deploying php 5.6 and then using modules to move to a newer PHP stream.</li>
</ul>
<br />
My takeaway from this is that it's promising, new, and raw. I would encourage people who are interested to join the weekly Modularity WG meetings to keep up to speed on this fast-developing tech. There were a ton of items I couldn't capture here because I was busy listening...<br />
<br />
After Langdon's talk, I attended "<a href="https://github.com/projectatomic/nulecule" target="_blank">Nulecule</a> - Packaging multi container applications". Ratnadeep talked about the issues of "legacy" container creation / configuration / distribution and how the Nulecule specification helps solve them. Ratnadeep walked through a GitLab example, which has many distributed services that are needed to stand it up. GitLab should be decomposed and placed into multiple containers, one per service. The solution that the Nulecule specification provides is the distribution of metadata that can describe this decomposed service and make it available to multiple backends. The implementation of the Nulecule specification is <a href="https://github.com/projectatomic/atomicapp" target="_blank">Atomic App</a>. The application images that Atomic App generates are artifacts of the input / answer files that you pass to Atomic App. Aside from Docker, you don't need anything else on your host in order to get started with Nulecule / Atomic App. Ratnadeep closed out with live demos of WordPress on Docker, then WordPress on Marathon, and WordPress on <a class="g-profile" href="https://plus.google.com/116512812300813784482" target="_blank">+Kubernetes</a>. Finally, Ratnadeep demo'd a new feature of Atomic App called "index". He showed how to query the existing index on GitHub and also how to generate your own local index. This was a great presentation.<br />
<br />
Then Patrick Uiterwijk presented "Using Fedora Atomic as Workstation". He kicked off by showing that his workstation is, and has been, running Atomic since January. Some of the limitations are that there are no workstation trees and adding packages can be tricky.<br />
<br />
Patrick got started by creating a custom tree, deploying that tree, and provisioning it. There are decisions behind each step: What does the initial package set look like? OS version? Delivery mechanism? Where does the compose machine live? It's really cool to sit back and see what clever guys like Patrick are doing and the problems they are solving. This was initially a pet project of his to do nothing more than satisfy his curiosity - now he's presenting the work at Flock!<br />
<br />
"Testing Bleeding edge Kernels" by Paul Moore. Paul started the talk by discussing the the kernel development life cycle. He also did great job keeping the presentation to the process and sharing his direct experiences that he's had. You can apply the same concepts that Paul mentioned for kernel development to any software project. I found it particularly interesting to see how he solved some of his challenges for simplified offline capable CI. Which of course is all on github: https://github.com/pcmoore/copr-pkg_scripts. Very insightful presentation.<br />
<br />
The last session of the day was "Continuous security management via OpenSCAP Daemon" by Jan Cerny. I'm interested in this one as it's an extremely important topic that affects the entire lifecycle of an image. Jan kicked off by discussing what makes systems secure, vulnerability assessments, and known / unknown vulnerabilities. Security compliance and guidelines vary from organization to organization. Some common weaknesses include enabling telnet or ftp, disabling SELinux, and an open firewall. The thing is, there are several things to check - and it's not reasonable to perform these tasks manually. That's where OpenSCAP comes in.<br />
<br />
SCAP = Security Content Automation Protocol<br />
<br />
OpenSCAP can translate common security guidelines, like STIGs, into OpenSCAP documents. OpenSCAP also has a <a href="https://github.com/OpenSCAP/scap-security-guide" target="_blank">security guide</a> that provides baseline guidance.<br />
<br />
There are a few different ways to run OpenSCAP. Jan recommends using scap-workbench for noobs like me. So, I installed it during his talk. It's easy to use, and it writes out a log file in HTML format that you can review when it's finished. I had a couple of failures :( Will have to follow up on that. By default it ran 74 rules. Looks comprehensive to me. The way I ran it was manual; they now have OpenSCAP Daemon, which can do continuous security management. Other capabilities: offline scanning of VMs, remote machines over SSH, <a class="g-profile" href="https://plus.google.com/100381662757235514581" target="_blank">+Docker</a> images and containers, and local scans. Jan also demoed creating a task with oscapd-cli in interactive mode.<br />
<br />
Jan then demo'd running a scan on a container by using "atomic scan" (a sketch below). This kicked off an OpenSCAP container and scanned the container ID that was passed to it. OpenSCAP leverages the offline scanning capability and can detect the OS that's in the container. You can also scan Docker images. Another great presentation.<br />
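<br />
For reference, the invocation is roughly this (a sketch; it assumes the atomic CLI is installed, and CONTAINER_ID is a placeholder from 'docker ps'):<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">$ sudo atomic scan CONTAINER_ID</code></pre>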
<br />
Closing out the day by taking a cruise here in Krakow. They keep us busy....<br />
<br />
<br />
<br />
<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2177586379122489026.post-2412964673825938732016-08-02T10:01:00.001-05:002016-08-02T10:15:07.793-05:00Fedora Flock - 2016 - Day 1So this is my first <a class="g-profile" href="https://plus.google.com/112917221531140868607" target="_blank">+Fedora Project</a> <a href="https://flocktofedora.org/" target="_blank">Flock</a> conference. I arrived in Krakow yesterday from Austin Texas. The folks who put Flock together did a great job with this event. I have never been to Krakow before, and they clearly communicated how you get around, which buses / trains to take, how to buy tickets, everything. Kudos to that team. I had a few reasons to come to Flock, I wanted to put some faces to names that I have been working with over the years. I wanted to meet with the members of the Fedora Cloud group that I have been participating in, and I wanted to attend technical sessions and see what's coming up in the distro.<br />
<br />
My schedule is listed <a href="https://flock2016.sched.org/#.V6BkhhEaOJM.email" target="_blank">here</a>. I'll blog each day that I'm here to share the experience. Hopefully you will find it interesting enough to attend the next one if you didn't get a chance to come to this Flock event. I'll give an overview of each session that I attend. I know I won't capture all the details from each session, but it's a taste. The sessions are recorded and will be posted to the Fedora YouTube channel.<br />
<br />
<b>Day 1. </b><br />
<br />
An introduction from Joe B. thanked the sponsors: Red Hat, Unix Stickers, SuSE, The Linux Foundation, and Stickermule. Thanks, sponsors! Keep in mind, though, that Flock is a conference run and led by contributors - for contributors. I can tell there was a ton of work done behind the scenes to make this event happen.<br />
<br />
Then the keynote by <a class="g-profile" href="https://plus.google.com/103763147500173410163" target="_blank">+Matthew Miller</a>. Matt covered some of the numbers that show Fedora is gaining steam in the cloud and developer space, among many others. He also talked about a few of the major goals for 2016. It's cool to see that the <a class="g-profile" href="https://plus.google.com/112917221531140868607" target="_blank">+Fedora Project</a> has some big plans to continue moving forward in the cloud space. Think items like Fedora Atomic, OpenShift and Flatpak.<br />
<br />
<a name='more'></a><br />
My first session was Fedora with "Amazon EC2 Container Service", presented by <a class="g-profile" href="https://plus.google.com/113013533075549646416" target="_blank">+David Duncan</a>. He did a great job showing how you can leverage <a class="g-profile" href="https://plus.google.com/108727025270662383247" target="_blank">+Project Atomic</a> Fedora Atomic hosts to run containers on AWS. Big takeaways are that <a class="g-profile" href="https://plus.google.com/110356773655474889799" target="_blank">+Amazon.com</a> has containerized the ECS agent, which is the gateway to integration, and that there are different ways to configure a Fedora Atomic host to use the ECS agent. We need to follow up on this to make sure we optimize the process and make it easier for users to start running containers on a Fedora Atomic host on AWS. David mentioned many things during his presentation - and as an ops guy, I found some of them very interesting, particularly the autoscaling, storage attachment, scheduling, and integration with Fedora Atomic hosts. Great session.<br />
<br />
The second session was <a class="g-profile" href="https://plus.google.com/107050082783087737978" target="_blank">+Thomas Cameron</a> presenting an "Introduction to Container Security". Thomas set the context by talking about how Red Hat got involved with containers. Then Thomas moved into the meat of the presentation, talking about everything from kernel namespaces to SELinux, cgroups, tips and tricks, etc. Takeaways: image provenance matters, so don't just download any image from anywhere and expect good things. Keep SELinux enabled. Production containers matter: run them in a supported fashion on production-supported hosts. Don't run with root privileges. Image and container lifecycle matters - come up with a way to manage your container ecosystem. Great session.<br />
<br />
The third session, "Containers in Production", was presented by <a class="g-profile" href="https://plus.google.com/113704149836818521978" target="_blank">+Daniel Walsh</a>. Need I say more? Dan discussed COW filesystems and some of the options here: DeviceMapper, BTRFS, etc. He also showed a cool demo of a Docker registry that could share images via NFS or other shared filesystems, so you could access your container content without having to do a "docker pull". That's pretty cool. Next up was "System Containers". Dan talked about the lack of container priority, how systemd handles this, and how it's a natural fit - hence: system containers. Keep your eye out for <a href="https://github.com/projectatomic/skopeo" target="_blank">skopeo</a>, which is a container management tool. Simple signing came up next - think signing for rpm. It's all about image provenance: the signature is separated from the image, and you can cryptographically prove that an image was signed by X person / company. Dan then discussed OCID (a standards-based alternative to the Docker and rkt runtimes) - a component needed by OpenShift to run the <a class="g-profile" href="https://plus.google.com/116512812300813784482" target="_blank">+Kubernetes</a> workflow. It leverages skopeo, which provides image transport, atomic mount for storing images, an <a href="https://www.opencontainers.org/" target="_blank">OCI runtime</a> (think runc), and a container management API via OCID, which is the "Open Container Initiative Daemon". Great session.<br />
<br />
"Application Containers and System Services" by <a class="g-profile" href="https://plus.google.com/111517595388403396044" target="_blank">+Honza Horak</a> was up next. Honza told a story about getting started with $SUBJECT by discussing and walking through container basics, PostgreSQL containers, system containers, and more. During the container basics section Honza started at the beginning and showed building images from the inception of a Dockerfile which included best practices such as cleaning layers of the image that make "dnf install" calls. He also mentioned squashing images - creating an image with one commit (layer) using the docker-squash (I hadn't seen this) package. Honza then discussed proper ways to build the PostgreSQL image as well as Python based images. Honza also discussed s2i (Source to Image) which allows you to build containers from source code. Next he discussed "System Containers" which is a phrase he is using to describe containers running on a system that are managed by systemd which can be accomplished via the "atomic" command. Honza concluded with a discussion of "Tools Containers" which are used to provide a mechanism to work with other images (think mongodb and mongodb tools), the "atomic" command for working with SPC containers, flatpak and building infrastructure. Great session.<br />
<br />
A couple of sessions that I didn't get to attend but wanted to: "Copr: What's New?" and "Getting new things into Fedora". Plenty of great sessions to see at Flock. I'm looking forward to day 2. Oh, and I signed up to do a lightning talk on how to provision OpenShift on Fedora on AWS - if you have 5 minutes, stop by. <a href="https://fedoraproject.org/wiki/Flock/Lightning_Talks_2016#Lightning_Talks_2016_-_4_August_2016_.40_09:00_Local_Time_in_Krakow.2C_Poland" target="_blank">Lightning Talk</a>. Tonight ends with an organized walking tour of Krakow. Busy day, informative day.

<b>OpenShift Origin on Fedora 24 on AWS - Wow.</b> (2016-05-27)

So, this all started because I was just doing a little Friday tinkering and wanted to see how easy it is to get OpenShift Origin installed on Fedora... on AWS. Well, it turns out, it's really, really easy. So easy, in fact, that I decided to write it down here and share it with you. This will be the first of a few blog posts about running OpenShift Origin on Fedora. This post details how to get OpenShift Origin running on a single instance of Fedora 24, with a manual configuration. In future blog posts, I'll talk about how to set up a highly available OpenShift Origin install on Fedora. In addition, I'll talk about how to consume AWS resources like ELBs, IAM, S3, route53, ec2 instances, etc... Just maybe, I'll go into how to automate the deployments with the AWS CLI. Feel free to leave some comments on just how far you want to go here. I promise, it will be fun.<br />
<br />
I learned quite a bit during this process, namely:<br />
<br />
<ul>
<li>You can easily find and use Fedora images in the AWS community AMIs.</li>
<li>OpenShift Origin has been packaged for Fedora 24 - who doesn't like new?</li>
<li>It's easy to install the OpenShift Origin PaaS and get started.</li>
</ul>
The goal was to get Origin running on AWS, launch an application, and hit that app from my browser. There are no real prerequisites to get started here other than an AWS account with the proper permissions. I do happen to have a DNS name managed by AWS route53, which helps a bit. I also have some prior knowledge of how AWS works.<br />
<br />
Let's chat a bit about what I'm using, what I had set up before this, and what I had to do to meet my goal. I am using:<br />
<br />
<ul>
<li>Fedora AMI with the ID of ami-0a09e667 (Fedora-Cloud-Base-24-20160512.n.0.x86_64-us-east-1-HVM-standard-0). </li>
<li>For my testing, I'm using a m4.2xlarge instance of that AMI.</li>
<li>I had an existing VPC that I launched the Fedora 24 instance into. The only thing to know about it is that I have DNS hostnames enabled on that VPC.</li>
<li>I have an existing subnet in that VPC that I launched this into.</li>
<li>I have an existing route table in that VPC with an internet gateway defined so my instance can get out.</li>
<li>I created a new security group on instance launch for testing this.</li>
</ul>
<div>
<br /></div>
<div>
I do need to prep AWS a bit before moving on. I'll use the AWS CLI to do this. I do have an <a href="http://www.colliernotes.com/2016/05/amazon-web-services-command-line.html" target="_blank">AWS CLI cheat sheet</a> that may help if you have questions about querying, launching, and describing resources. Have a look. To move forward, I need to know what OpenShift Origin needs. I found that the <a href="https://docs.openshift.org/latest/welcome/index.html" target="_blank">OpenShift Origin documentation</a> is great; please have a look if you have any questions. That's what I did: I went to docs | installing | prerequisites and started there. I'll just walk through the prerequisites here and share what I did.<br />
<br />
<a name='more'></a><br />
For DNS, I met the requirements by creating a hosted zone in route53. Then I created a record in route53 for my host which pointed to the public IP associated with it. I also created a wildcard DNS record for the applications running on my host. See the sketch below for creating the zone and the records I need.<br />
<br />
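Something like this covers it with the AWS CLI - a sketch, where the zone ID, caller reference, and IP are placeholders rather than my real values:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">aws route53 create-hosted-zone --name fedora.sysdeseng.com --caller-reference fedora-zone-001

cat records.json
{
  "Changes": [
    { "Action": "CREATE", "ResourceRecordSet": { "Name": "fedora-origin.fedora.sysdeseng.com", "Type": "A", "TTL": 300,
      "ResourceRecords": [ { "Value": "203.0.113.10" } ] } },
    { "Action": "CREATE", "ResourceRecordSet": { "Name": "*.apps.fedora.sysdeseng.com", "Type": "A", "TTL": 300,
      "ResourceRecords": [ { "Value": "203.0.113.10" } ] } }
  ]
}

aws route53 change-resource-record-sets --hosted-zone-id ZXXXXXXXXXXXXX --change-batch file://records.json</code></pre>
<br />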
For the ports, since this is a single node install, I just created a security group and opened up inbound 443, 22 and 80.<br />
<br />
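Via the CLI, that's along these lines (a sketch; the VPC and group IDs are placeholders):<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;"># create the group in the VPC
aws ec2 create-security-group --group-name origin-sg --description "OpenShift Origin single node" --vpc-id vpc-xxxxxxxx
# open 22, 80 and 443 to the world - fine for a throwaway test box
for port in 22 80 443; do
  aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port $port --cidr 0.0.0.0/0
done</code></pre>
<br />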
For persistent storage, I just launched an instance with a large root device and I added another device of 50G for Docker storage. In fact, let's get to that part, so we can continue with the install.<br />
<br />
The first thing we have to do is launch a Fedora 24 instance. Here's my command to launch the instance:</div>
<div>
<br /></div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">aws ec2 run-instances --image-id ami-0a09e667 --instance-type m4.2xlarge --subnet-id subnet-de0axxx --security-group-ids sg-9b7xxxx --block-device-mappings file://master/fedora-ebs-config.json --key-name scollier-test --iam-instance-profile Name=scollier-ebs-profile</code></pre>
</div>
<div>
<br /></div>
<div>
Where the contents of the block device mappings file are:<br />
<br /></div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">
[
{
"DeviceName": "/dev/xvdb",
"Ebs": {
"DeleteOnTermination": true,
"VolumeType": "gp2",
"VolumeSize": 50
}
}
]
</code></pre>
<div>
<br /></div>
<div>
So, then the instance is launched. The next thing I want to do is log into that instance and start configuring. That's where the fun part starts.</div>
<div>
<br /></div>
<div>
After the instance initializes, you can connect to it. In my case, I use:<br />
<br /></div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">ssh -i scollier-test.pem fedora@fedora-origin.fedora.sysdeseng.com</code></pre>
</div>
<div>
<br /></div>
<div>
Now I'm in! I can start my work. I want to go ahead and update the instance, reboot, and install a few packages per the prereqs (and a few more for Fedora 24).</div>
<div>
<br /></div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">dnf -y update && </code><span style="background-color: transparent;">dnf -y install wget git net-tools bind-utils iptables-services bridge-utils bash-completion ansible python-dnf dbus-python python3-dbus libsemanage-python3 libsemanage-python</span></pre>
</div>
<div>
<br /></div>
<div>
I continued with the install and installed Docker. I did skip setting up proper storage for Docker since I'm just poking around for now; for reference, the sketch below shows roughly what that would look like. Once I finished with the prereqs, I had to pick my install option. I chose advanced so I could see exactly what was going on under the hood. <br />
<br />
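Pointing docker-storage-setup (it ships with the Fedora docker package) at that 50G device would look roughly like this - a sketch, assuming the extra volume shows up in the instance as /dev/xvdb:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;"># cat /etc/sysconfig/docker-storage-setup
DEVS="/dev/xvdb"
VG="docker-vg"
# docker-storage-setup && systemctl restart docker</code></pre>
<br />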
At this point, I'll hop off the AWS instance and go back to my local Fedora desktop to install. I cloned the <a href="https://github.com/openshift/openshift-ansible" target="_blank">openshift-ansible git repo</a> and created the following ansible hosts file:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); overflow: auto; padding: 5px; width: 100%;"><code style="word-wrap: normal;"><span style="font-family: "andale mono" , "lucida console" , "monaco" , "fixed" , monospace;"><span style="font-size: 12px; line-height: 14px;">git clone https://github.com/openshift/openshift-ansible.git
</span></span></code></pre>
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); overflow: auto; padding: 5px; width: 100%;"><code style="word-wrap: normal;"><span style="font-family: "andale mono" , "lucida console" , "monaco" , "fixed" , monospace;"><span style="font-size: 12px; line-height: 14px;">cat /etc/ansible/hosts
</span></span><span style="font-size: 12px; line-height: 14px;"># Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=fedora
# If ansible_ssh_user is not root, ansible_sudo must be set to true
ansible_sudo=true
deployment_type=origin
# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
# host group for masters
[masters]
fedora-origin.fedora.sysdeseng.com
# host group for nodes, includes region info
[nodes]
fedora-origin.fedora.sysdeseng.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
fedora-origin.fedora.sysdeseng.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
</span><span style="font-family: "andale mono" , "lucida console" , "monaco" , "fixed" , monospace;"><span style="font-size: 12px; line-height: 14px;">
</span></span></code></pre>
<br />
The important changes to note in the Ansible inventory file are that I set the SSH user to fedora and enabled ansible_sudo=true. This allows Ansible to connect to the instance as the default fedora user and complete the install. I also have the following in my ~/.ssh/config file:
<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">Host fedora-origin.fedora.sysdeseng.com.</code><span style="background-color: transparent;"> </span></pre>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><span style="background-color: transparent;">Hostname fedora-origin.fedora.sysdeseng.com.
StrictHostKeyChecking no
ProxyCommand none
CheckHostIP no
ForwardAgent yes
IdentityFile /home/scollier/x/x/x/scollier-test.pem</span></pre>
</div>
<br />
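Before running the playbook, a quick sanity check that Ansible can actually reach the host as the fedora user doesn't hurt (a sketch, assuming the inventory above is in /etc/ansible/hosts):<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">ansible OSEv3 -m ping</code></pre>
<br />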
<div>
After all that is set up, I can run the playbook:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">ansible-playbook /home/x/x/openshift-ansible/playbooks/byo/config.yml</code></pre>
</div>
<div>
<br />
Now, we sit back and wait while the install completes. After the install is complete, I need to deploy a router.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">CA=/etc/origin/master
oadm ca create-server-cert --signer-cert=$CA/ca.crt --signer-key=$CA/ca.key --signer-serial=$CA/ca.serial.txt --hostnames='*.apps.fedora.sysdeseng.com' --cert=cloudapps.crt --key=cloudapps.key
cat cloudapps.crt cloudapps.key $CA/ca.crt > cloudapps.router.pem
oadm router router --replicas=1 --default-cert=cloudapps.router.pem --credentials='/etc/origin/master/openshift-router.kubeconfig' --service-account=router
oc get pods
</code></pre>
<div>
<br /></div>
<div>
I also need to mark the node schedulable.</div>
<div>
<br /></div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">oadm manage-node ip-10-30-1-231.ec2.internal --schedulable</code></pre>
</div>
</div>
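<div>
<br /></div>
<div>
To double-check that the node took the change, listing the nodes should now show it as Ready (a quick sketch):</div>
<div>
<br /></div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">oc get nodes</code></pre>
</div>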
<div>
<br /></div>
<div>
Now I can create an application. So I access the OpenShift management console.</div>
<div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-4fHYDBfYjgA/V0j0WmAWtAI/AAAAAAAADxY/ASTe2YKfs90Nx28kNIdTNHrQsewWhxacQCLcB/s1600/Screenshot%2Bfrom%2B2016-05-27%2B20%253A27%253A45.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="180" src="https://1.bp.blogspot.com/-4fHYDBfYjgA/V0j0WmAWtAI/AAAAAAAADxY/ASTe2YKfs90Nx28kNIdTNHrQsewWhxacQCLcB/s320/Screenshot%2Bfrom%2B2016-05-27%2B20%253A27%253A45.png" width="320" /></a></div>
<br />
By default, I can log in as any user. So I do that, then create a new project. I decide to just launch an ephemeral Jenkins app.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-HtLoG2mTqAM/V0j0ltkDN_I/AAAAAAAADxc/6wVMaN0uU2QeVgT2KH5-_FQbza_PG0B5gCLcB/s1600/Screenshot%2Bfrom%2B2016-05-27%2B20%253A29%253A49.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="180" src="https://1.bp.blogspot.com/-HtLoG2mTqAM/V0j0ltkDN_I/AAAAAAAADxc/6wVMaN0uU2QeVgT2KH5-_FQbza_PG0B5gCLcB/s320/Screenshot%2Bfrom%2B2016-05-27%2B20%253A29%253A49.png" width="320" /></a></div>
<br />
<br />
<br /></div>
<div>
<br /></div>
<div>
After launching the app, I expose the Jenkins service via a route.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">oc expose svc/jenkins --hostname=scollier-jenkins.apps.fedora.sysdeseng.com
</code></pre>
<br />
I take the defaults and let it build. Then I test hitting the interface.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-GWy6Wt7Dp80/V0j4UXLg67I/AAAAAAAADx0/7sMf3pBaSoIZxUlE5mBq17teo1lG3JSigCLcB/s1600/Screenshot%2Bfrom%2B2016-05-27%2B20%253A45%253A42.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="180" src="https://4.bp.blogspot.com/-GWy6Wt7Dp80/V0j4UXLg67I/AAAAAAAADx0/7sMf3pBaSoIZxUlE5mBq17teo1lG3JSigCLcB/s320/Screenshot%2Bfrom%2B2016-05-27%2B20%253A45%253A42.png" width="320" /></a></div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
Now, I click on the URL provided and log into my new Jenkins app.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-kV9-SDDH9kc/V0j49Q39PyI/AAAAAAAADx8/3U_9jz5F6kwYuaLfRnLDZieLduh9yMvjQCLcB/s1600/Screenshot%2Bfrom%2B2016-05-27%2B20%253A48%253A27.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="180" src="https://4.bp.blogspot.com/-kV9-SDDH9kc/V0j49Q39PyI/AAAAAAAADx8/3U_9jz5F6kwYuaLfRnLDZieLduh9yMvjQCLcB/s320/Screenshot%2Bfrom%2B2016-05-27%2B20%253A48%253A27.png" width="320" /></a></div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
That was easy. Now I can continue evaluating OpenShift Origin. Fun eh? As mentioned before, I'll be diving a bit deeper in follow up posts. Stay tuned.</div>
Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-2177586379122489026.post-87969712871579452912016-05-26T21:36:00.001-05:002016-05-26T21:44:48.368-05:00Testing out AWS ssm<span style="font-family: inherit;">I was poking around the AWS CLI and testing out different features / functionality. Amazon's SSM caught my eye, and I decided to have a look at the remote functionality offered by this tool. I'm consolidating the notes I found in different resources here to do a simple test. Here's a high level overview of what it took me to get this configured and working properly:</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="background-color: white; font-family: inherit; line-height: 18px;">1. Create a role and policy and assign that to an EC2 instance at launch time. You can't assign it to a running instance. The policy I assigned to the role that I attached to the instance is called: AmazonEC2RoleforSSM</span><br />
<span style="font-family: inherit;"><br style="background-color: white; line-height: 18px;" /></span>
<span style="background-color: white; font-family: inherit; line-height: 18px;">2. Assign permissions to the user that will be executing the commands. The name of the policy is: AmazonSSMFullAccess</span><br />
<span style="background-color: white; font-family: inherit; line-height: 18px;"><br /></span>
<span style="background-color: white; font-family: inherit; line-height: 18px;">Of course, for your environment, make sure you adhere to your security requirements. There are better ways to restrict this.</span><br />
<span style="font-family: inherit;"><br style="background-color: white; line-height: 18px;" /></span>
<span style="background-color: white; font-family: inherit; line-height: 18px;">3. Deploy the instance and install the ssm agent. You can either install the agent by passing user-data or manually afterwards. It's a a simple rpm package.</span><br />
<span style="font-family: inherit;"><br style="background-color: white; line-height: 18px;" /></span>
<span style="background-color: white; font-family: inherit; line-height: 18px;">4. Create a policy document, mine was:</span><br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><span style="font-family: inherit;"> <code style="color: black; word-wrap: normal;">
{
    "schemaVersion": "1.2",
    "description": "Check ip configuration of a Linux instance.",
    "parameters": {},
    "runtimeConfig": {
        "aws:runShellScript": {
            "properties": [
                {
                    "id": "0.aws:runShellScript",
                    "runCommand": [ "ifconfig" ]
                }
            ]
        }
    }
}
</code></span></pre>
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">From the examples here: <a class="jive-link-external" href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/create-ssm-doc.html" style="color: #996633; text-decoration: none;">http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/create-ssm-doc.html</a></span><br />
<span style="font-family: inherit;"></span>
<a name='more'></a><br />
<span style="font-family: inherit;">5. Associate the ssm document to the instance:</span><br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><span style="font-family: inherit;"> <code style="color: black; word-wrap: normal;">
aws ssm create-association --instance-id i-9f4ba703 --name Test-Document-Scollier-Delete
</code></span></pre>
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">6. Run the command:</span><br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><span style="font-family: inherit;"> <code style="color: black; word-wrap: normal;">
$ aws ssm send-command --document-name "Test-Document-Scollier-Delete" --instance-ids "i-9f4ba703" --region us-east-1
{
"Command": {
"Status": "Pending",
"ExpiresAfter": 1464091829.69,
"Parameters": {},
"DocumentName": "Test-Document-Scollier-Delete",
"InstanceIds": [
"i-9f4ba703"
],
"CommandId": "db1bcbbc-556a-48a3-bcc1-0bc5bb88c2f8",
"RequestedDateTime": 1464091229.69
    }
}
</code></span></pre>
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">7. Then you can check the output either via CLI or in the AWS console. It's really as simple as that.</span><br />
<span style="font-family: inherit;"><br /></span>
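<span style="font-family: inherit;">If you want the CLI route, listing the invocations by the CommandId returned above should show the status and, with --details, the output - a sketch:</span><br />
<pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
$ aws ssm list-command-invocations --command-id "db1bcbbc-556a-48a3-bcc1-0bc5bb88c2f8" --instance-id "i-9f4ba703" --details --region us-east-1
</code></pre>
<span style="font-family: inherit;"><br /></span>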
<span style="font-family: inherit;">Resources I used:</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;"><a class="jive-link-external" href="http://www.awsomeblog.com/amazon-ec2-simple-systems-manager/" style="color: #996633; text-decoration: none;">http://www.awsomeblog.com/amazon-ec2-simple-systems-manager/</a></span><br />
<span style="font-family: inherit;"><a class="jive-link-external" href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/walkthrough-cli.html" style="color: #996633; text-decoration: none;">http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/walkthrough-cli.html</a></span><br />
<span style="color: #996633; font-family: inherit; text-decoration: none;"><a class="jive-link-external" href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/execute-remote-commands.html" style="color: #996633; text-decoration: none;">http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/execute-remote-commands.html</a></span><br />
<br />
I was looking for different use cases, and <a href="https://twitter.com/davdunc" target="_blank">David Duncan</a> summed it up quite nicely here as a reply to one of my tweets:<br />
<br />
<br />
<blockquote class="twitter-tweet" data-conversation="none" data-lang="en">
<div dir="ltr" lang="en">
<a href="https://twitter.com/collier_s">@collier_s</a> ssm run command has policy-driven power. Run your own scripts or ones shared with you, it has real <a href="https://twitter.com/hashtag/community?src=hash">#community</a> potential</div>
— David Duncan (@davdunc) <a href="https://twitter.com/davdunc/status/735101837590691840">May 24, 2016</a></blockquote>
<script async="" charset="utf-8" src="//platform.twitter.com/widgets.js"></script>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2177586379122489026.post-25852638510492158402016-05-22T09:10:00.000-05:002016-05-22T11:31:12.311-05:00Amazon Web Services Command Line Interface (AWS CLI) - Cheat SheetI have been standing up quite a bit of infrastructure in AWS lately using the AWS CLI. Here are some commands that I found helpful in a cheat sheet format. I'll show you how to create resources, query resources for information and how to update resources. Hopefully this will get you started quickly. The cheat sheet covers the following topics:<br />
<br />
<ul>
<li>Setting up your environment.</li>
<li>Working with Virtual Private Clouds (VPC).</li>
<li>Working with Identity and Access Management (IAM).</li>
<li>Working with Route53.</li>
<li>Working with Elastic Load Balancers (ELB).</li>
<li>Working with SSH.</li>
<li>Working with DHCP.</li>
<li>Working with Elastic Compute Cloud (EC2).</li>
<li>Utilizing queries to gather information.</li>
</ul>
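To give you a feel for the sorts of commands in the sheet, here's the style of query it leans on (a sketch, not copied verbatim from the sheet):<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">$ aws ec2 describe-instances --query 'Reservations[].Instances[].[InstanceId,State.Name,PublicIpAddress]' --output table</code></pre>
<br />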
<span style="text-align: center;"></span><br />
<div>
<span style="text-align: center;"><span style="text-align: center;"><a href="https://drive.google.com/uc?export=download&id=0B-Ri5bKnM_QHQlRGczVvQ05VQlU" onclick="ga('send', 'event', { eventCategory: 'AWS_CLI_CS_Download', eventAction: 'AWS_CLI_CS_Download'});" target="_blank">You can download the AWS CLI cheat sheet here.</a></span></span><br />
<span style="text-align: center;"><span style="text-align: center;"><br /></span></span></div>
<span style="text-align: center;">
You can preview the AWS CLI cheat sheet by clicking below (hover mouse over upper right corner):</span><br />
<div>
<div style="text-align: center;">
<br /></div>
<div>
<div style="text-align: center;">
<center>
<iframe height="300" onclick="ga('send', 'event', { eventCategory: 'AWS_CLI_CS_Download', eventAction: 'AWS_CLI_CS_Download'});" src="https://drive.google.com/file/d/0B-Ri5bKnM_QHQlRGczVvQ05VQlU/preview" width="400"></iframe>
</center>
</div>
<div>
<br /></div>
<div>
<br />
You can test all these commands with Fedora images which can be launched here: <a href="https://getfedora.org/cloud/download/">https://getfedora.org/cloud/download/</a>.</div>
<div>
<br /></div>
<div>
If you have any questions about any of the commands in particular, please drop a comment below and I'll try to help. Much credit goes to <a href="https://twitter.com/cooktheryan" target="_blank">Ryan Cook</a> for frontloading a lot of this.</div>
</div>
</div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2177586379122489026.post-90025689550020479732016-04-06T20:40:00.001-05:002016-04-06T20:43:18.569-05:00Grabbing a list of VMs from RHEV and SortingSimple post, but I thought it'd be worth sharing since I burned a day on it. The goal was to find out which VMs on our RHEV environment were old and unused. So I decided to use the RHEV-M API to grab the list, and sort it. The only thing you need is the CA Cert for your RHEV-M environment.<br />
<br />
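If you don't have the CA cert handy, RHEV-M serves it over plain HTTP, at least on the 3.x series I'm on - adjust the hostname for your manager:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code style="color: black; word-wrap: normal;">curl -o rhevm.cer http://your.domain.here.com/ca.crt</code></pre>
<br />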
The script is also available as <a href="https://gist.github.com/scollier/890159751cf04cee67c815d29284dc2b" target="_blank">a gist</a>.<br />
<br />
Script here:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
#!/bin/bash
# Set the variables for date, and argument
DATE=$(date +"%m_%d_%Y-%M")
INPUT="$1"
# Grab the password for RHEV-M, don't report it to std out.
echo
echo "Please provide the RHEVM password, password is not echoed out to stdout, enter password and press Enter."
read -p "Enter Password:" -s RHEVM_PASSWORD
echo
# Grab the xml report of all the VMs
curl -s -X GET -H "Accept: application/xml" -u "admin@internal:$RHEVM_PASSWORD" --cacert rhevm.cer https://your.domain.here.com/api/vms > vm-output-$DATE.xml
# Parse the xml output and look for the name of the VM, and the stop time of the VM, put it in a separate file.
xpath vm-output-$DATE.xml '/vms/vm/name | /vms/vm/stop_time' > vm-output-$DATE-formatted.xml 2> /dev/null
# Clean up the file here. joherr helped out with this. Place line breaks after each </stop_time> xml tag, and format it so it's readable in two columns.
sed -e 's/<\/name><stop_time>/ /g' \
-e 's/<\/stop_time><name>/\n/g' \
-e 's/<name>//g' \
-e 's/<\/stop_time>//g' vm-output-$DATE-formatted.xml | \
sort -k 2 | \
awk 'BEGIN { format = "%-60s %s\n"
printf format, "VMs", "Date Stopped"
printf format, "----------", "----------" }
{ printf format, $1, $2 }' > rhevm-vms-$DATE
# By default, output the number of VMs that are listed.
echo
echo "There are $(cat rhevm-vms-$DATE | wc -l) VMs now."
echo
# If it's run with -p, output the entire list, sorted oldest first.
case $INPUT in
    -p|--print)
        cat rhevm-vms-$DATE
        ;;
esac
</code></pre>
<div>
<br /></div>
Output here:
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
VMs Date Stopped
---------- ----------
dh-ose-node2 2014-10-23T16:42:27.045-05:00
ospceph-sft 2014-11-10T21:01:23.524-06:00
dh-ose-broker 2014-11-11T16:32:59.985-06:00
ks-sft-test1 2014-11-13T21:00:02.828-06:00
dh-ose-node1 2014-11-24T19:02:53.995-06:00
collier-atomic-pxe 2014-12-18T15:01:45.325-06:00
sat6-pxe-rhel7 2015-03-05T10:54:13.907-06:00
sat6-pxe-rhel6 2015-03-05T10:54:14.401-06:00
rhel-atomic-7.1-GA-mjenner 2015-03-05T10:54:14.489-06:00
workstation-goern-1 2015-04-16T07:59:47.704-05:00
RHEL-Atomic-Test-Sat6 2015-05-29T11:42:58.093-05:00
hk-nfv 2015-09-29T16:36:00.975-05:00
ks-back 2015-09-29T16:36:01.851-05:00
rhel-atomic-mjenner 2015-09-29T16:36:02.026-05:00
dellaccess 2015-09-29T16:36:02.785-05:00
collier-atomic-pxe-1 2015-09-29T16:36:03.663-05:00
<snip>....<snip>....</snip></code></pre>
<br />
Now I have a decent idea of what VMs are out there, which ones haven't been powered on for months, and are candidates for deletion. Hope this helps.
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2177586379122489026.post-42207718435961051442015-07-08T11:24:00.002-05:002015-07-09T17:23:00.907-05:00Configure a Highly Available Kubernetes / etcd Cluster with Pacemaker on FedoraI'm going to share some of the great work that <a href="https://twitter.com/spinningmatt" target="_blank">Matt Farrellee</a>, <a href="https://github.com/rrati" target="_blank">Rob Rati</a> and <a href="https://twitter.com/timothysc" target="_blank">Tim St. Clair</a> have done with regard to configuring a highly available Kubernetes / etcd cluster with Pacemaker - they get full credit for the technical details here. It's really interesting work and I thought I'd share it with the upstream community. Not to mention it gives me an opportunity to learn how this is all set up and configured.<br />
<br />
In this configuration I will set up 5 virtual machines and one VIP:<br />
<br />
fed-master1.example.com 192.168.123.100<br />
fed-master2.example.com 192.168.123.101<br />
fed-master3.example.com 192.168.123.102<br />
fed-node1.example.com 192.168.123.103<br />
fed-node2.example.com 192.168.123.104<br />
fed-vip.example.com 192.168.123.105<br />
<br />
If you are wondering how I set up this environment quickly and repeatably, check out <a href="https://github.com/purpleidea/oh-my-vagrant" target="_blank">omv</a> from <a href="https://ttboj.wordpress.com/" target="_blank">Purpleidea</a>. He's a clever guy with a great dev workflow. In particular, have a look at the <a href="https://ttboj.wordpress.com/2015/07/08/oh-my-vagrant-mainstream-mode-and-copr-rpms/" target="_blank">work he has done</a> to put his great code into a package to make distribution easier. <br />
<br />
In summary here, I used Vagrant, KVM and omv to build and destroy this environment. I won't go into too many details about how that all works, but feel free to ask questions in the comments if needed. My omv.yaml file is located <a href="https://github.com/scollier/kube-ha/tree/master/omv" target="_blank">here</a>; it might help you get up and running quickly. Just make sure you have a Fedora 22 Vagrant box that matches the name in the file. Yup, I run it all on my laptop.<br />
<br />
Global configuration:<br />
<br />
<ul>
<li>Configure /etc/hosts on all nodes so that name resolution works (omv can help here)</li>
<li>Share SSH key from master to all other nodes (see the sketch below)</li>
</ul>
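For the SSH key sharing, a quick loop from fed-master1 covers it (a sketch, assuming root access and the IPs above):<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
for ip in 192.168.123.{101..104}; do ssh-copy-id root@$ip; done
</code></pre>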
<br />
<a name='more'></a><br />
<br />
To summarize what this environment will look like and what components will be running where, I have 3 master servers which will be running the <a href="http://clusterlabs.org/" target="_blank">pacemaker</a> components as well as etcd and <a href="https://github.com/googlecloudplatform/kubernetes" target="_blank">kubernetes</a> master node services. I have 2 nodes which will be running <a href="https://github.com/coreos/flannel" target="_blank">flanneld</a> and the kubernetes worker node services. These 2 nodes will also be running <a href="https://www.docker.com/" target="_blank">Docker</a>. When I'm mentioning commands below, you can assume that I want them to be run on each group of nodes, unless I specify otherwise. The overall flow of the configuration will be:<br />
<br />
<ul>
<li>Deploy VMs</li>
<li>Install Software</li>
<li>Configure etcd</li>
<li>Configure flannel</li>
<li>Configure kubernetes</li>
<li>Configure pacemaker</li>
<li>Confirm functionality</li>
</ul>
<br />
By the time you are finished you should have a highly available Active / Passive cluster configuration running kubernetes and all the required components.<br />
<br />
Okay, so, put on your helmet and let's get started here. <br />
<br />
<b>Installing Software: </b><br />
<br />
Here we just need to make sure we have the appropriate packages on each node. I've listed the versions that I used for this configuration at the end of the article.<br />
<br />
Execute the following on each master node:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# yum -y install etcd kubernetes-master pcs fence-agents-all
</code></pre>
Execute the following on each worker node:<br />
<div>
<br /></div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# yum -y install kubernetes-node docker flannel
</code></pre>
<br />
<b>Configure etcd:</b><br />
<b><br /></b>
Our key value store for configuration is going to be etcd. In this case, we are creating an etcd cluster so we have a highly available deployment. The config file and script for this is on github <a href="https://github.com/scollier/kube-ha/tree/master/cluster-etcd" target="_blank">here</a> and <a href="https://github.com/scollier/kube-ha/tree/master/scripts" target="_blank">here</a>.<br />
<b><br /></b>
Create the following script (also in github) and run it from master1:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
etcd0=192.168.123.100
etcd1=192.168.123.101
etcd2=192.168.123.102
INITIAL_CLUSTER="etcd0=http://$etcd0:2380,etcd1=http://$etcd1:2380,etcd2=http://$etcd2:2380"
for name in etcd0 etcd1 etcd2; do
ssh -t ${!name} \
sed -i -e "s#.*ETCD_NAME=.*#ETCD_NAME=$name#" \
-e "s#.*ETCD_INITIAL_ADVERTISE_PEER_URLS=.*#ETCD_INITIAL_ADVERTISE_PEER_URLS=http://${!name}:2380#" \
-e "s#.*ETCD_LISTEN_PEER_URLS=.*#ETCD_LISTEN_PEER_URLS=http://${!name}:2380#" \
-e "s#.*ETCD_LISTEN_CLIENT_URLS=.*#ETCD_LISTEN_CLIENT_URLS=http://${!name}:2379,http://127.0.0.1:2379,http://127.0.0.1:4001#" \
-e "s#.*ETCD_ADVERTISE_CLIENT_URLS=.*#ETCD_ADVERTISE_CLIENT_URLS=http://${!name}:2379#" \
-e "s#.*ETCD_INITIAL_CLUSTER=.*#ETCD_INITIAL_CLUSTER=$INITIAL_CLUSTER#" \
/etc/etcd/etcd.conf
done
</code></pre>
<br />
Execute the following on all masters:<br />
<div>
<br /></div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# systemctl enable etcd; systemctl start etcd; systemctl status etcd
# etcdctl cluster-health; etcdctl member list
</code></pre>
<br />
Also, check out the /etc/etcd/etcd.conf file and the journal on each master to get familiar with how etcd is configured.<br />
<br />
<b>Configure Flannel:</b><br />
<b><br /></b>
We use flannel so that container A on host A can talk to container A on host B. It provides an overlay network that the containers and kubernetes can take advantage of. Oh, and it's really easy to configure. An example /etc/sysconfig/flanneld config file is on my <a href="https://github.com/scollier/kube-ha/tree/master/flannel" target="_blank">github</a> repo.<br />
<b><br /></b>
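One prerequisite worth calling out: flanneld reads its network layout from etcd, so that key has to exist before the service starts. Seeding it from any master looks something like this - a sketch, since the key and subnet here are assumptions; check the flannel config in the repo above for the real values:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# etcdctl set /coreos.com/network/config '{ "Network": "172.16.0.0/12" }'
</code></pre>
<br />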
Execute the following on the worker nodes:<br />
<div>
<br /></div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# echo FLANNEL_ETCD="http://192.168.123.100:2379,http://192.168.123.101:2379,http://192.168.123.102:2379" >> /etc/sysconfig/flanneld
# systemctl enable flanneld; systemctl start flanneld; systemctl status flanneld
# systemctl enable docker; systemctl start docker
# reboot
</code></pre>
<br />
When the servers come back up, confirm that the flannel and docker interfaces are on the same subnet.<br />
<br />
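A quick way to eyeball that, assuming the flannel0 and docker0 interface names these versions use (subnet.env is written out by flanneld):<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# ip -4 addr show flannel0
# ip -4 addr show docker0
# cat /run/flannel/subnet.env
</code></pre>
<br />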
<b>Configure kubernetes:</b><br />
<br />
Kubernetes will be our container orchestration layer. I won't get too much into the details of the different kubernetes services, or even usage for that matter. I can assure you it is well documented and you might want to have a look <a href="https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs" target="_blank">here</a> and <a href="https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples" target="_blank">here</a>. I have posted my complete kubernetes config files <a href="https://github.com/scollier/kube-ha/tree/master/k8s" target="_blank">here</a>.<br />
<br />
Execute the following on the master nodes:<br />
<div>
<br /></div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# echo KUBE_API_ADDRESS=--address=0.0.0.0 >> /etc/kubernetes/apiserver
</code></pre>
<br />
You can see my kubernetes master config files here.<br />
<div>
<br /></div>
Execute the following on the worker nodes:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# echo KUBE_MASTER="--master=192.168.123.105:8080" >> /etc/kubernetes/config
# echo KUBELET_ADDRESS="--address=0.0.0.0" >> /etc/kubernetes/kubelet
# echo KUBELET_HOSTNAME= >> /etc/kubernetes/kubelet
# echo KUBELET_ARGS="--register-node=true" >> /etc/kubernetes/kubelet
</code></pre>
Keep in mind here that the .105 address is the VIP listed in the table at the beginning of the article.<br />
<br />
In addition, on the kubelet, you'll want to comment out the line for KUBELET_HOSTNAME, so that when it checks in with the master, it uses its true hostname.<br />
<br />
You can see my kubernetes node config files here.<br />
<br />
<b>Configure Pacemaker:</b><br />
<br />
Pacemaker is going to provide our HA mechanism. You can find more information about configuring Pacemaker on the <a href="http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Clusters_from_Scratch/pref-Clusters_from_Scratch-Preface.html" target="_blank">Clusters from Scratch</a> page of their website. My /etc/corosync/corosync.conf file is posted on github <a href="https://github.com/scollier/kube-ha/tree/master/corosync" target="_blank">here</a>.<br />
<br />
Execute the following on all masters:
<br />
<br />
This command will set the password for the hacluster user in order for cluster auth to function properly.
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# echo hacluster | passwd -f --stdin hacluster
</code></pre>
Execute the following on master1:<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# pcs cluster auth -u hacluster -p hacluster fed-master1.example.com fed-master2.example.com fed-master3.example.com
# pcs cluster setup --start --name high-availability-kubernetes fed-master1.example.com fed-master2.example.com fed-master3.example.com
# pcs resource create virtual-ip IPaddr2 ip=192.168.123.105 --group master
# pcs resource create apiserver systemd:kube-apiserver --group master
# pcs resource create scheduler systemd:kube-scheduler --group master
# pcs resource create controller systemd:kube-controller-manager --group master
# pcs property set stonith-enabled=false
</code></pre>
Check the status of the cluster:
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# pcs status
# pcs cluster auth
</code></pre>
<br />
<b>Confirm functionality:</b><br />
<b><br /></b>
Here we'll want to make sure everything is working.<br />
<br />
You can check that kubernetes is functioning by making a call to the VIP, which will point to the active instance of the kubernetes API server.<br />
<br />
Execute the following on any master node:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# kubectl -s http://192.168.123.105:8080 get nodes
NAME LABELS STATUS
fed-node1 kubernetes.io/hostname=fed-node1 Ready
fed-node2 kubernetes.io/hostname=fed-node2 Ready
</code></pre>
<br />
Execute the following on any master node:
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# pcs status
Cluster name: high-availability-kubernetes
Last updated: Wed Jul 8 15:21:35 2015
Last change: Wed Jul 8 12:38:32 2015
Stack: corosync
Current DC: fed-master1.example.com (1) - partition with quorum
Version: 1.1.12-a9c8177
3 Nodes configured
4 Resources configured
Online: [ fed-master1.example.com fed-master2.example.com fed-master3.example.com ]
Full list of resources:
Resource Group: master
virtual-ip<span class="Apple-tab-span" style="white-space: pre;"> </span>(ocf::heartbeat:IPaddr2):<span class="Apple-tab-span" style="white-space: pre;"> </span>Started fed-master1.example.com
apiserver<span class="Apple-tab-span" style="white-space: pre;"> </span>(systemd:kube-apiserver):<span class="Apple-tab-span" style="white-space: pre;"> </span>Started fed-master1.example.com
scheduler<span class="Apple-tab-span" style="white-space: pre;"> </span>(systemd:kube-scheduler):<span class="Apple-tab-span" style="white-space: pre;"> </span>Started fed-master1.example.com
controller<span class="Apple-tab-span" style="white-space: pre;"> </span>(systemd:kube-controller-manager):<span class="Apple-tab-span" style="white-space: pre;"> </span>Started fed-master1.example.com
PCSD Status:
fed-master1.example.com: Online
fed-master2.example.com: Online
fed-master3.example.com: Online
<div>
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
</div>
</code></pre>
You can see that everything is up and running. It shows that the resource group is running on fed-master1.example.com. Well, we might as well place that node in standby and make sure the group starts on another node and that we can still execute kubernetes commands.</div>
<div>
<br /></div>
<div>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# pcs cluster standby fed-master1.example.com</code></pre>
</div>
<div>
<br /></div>
<div>
Now, check the resources again:
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# pcs status
Cluster name: high-availability-kubernetes
Last updated: Wed Jul 8 15:24:17 2015
Last change: Wed Jul 8 15:23:59 2015
Stack: corosync
Current DC: fed-master1.example.com (1) - partition with quorum
Version: 1.1.12-a9c8177
3 Nodes configured
4 Resources configured
Node fed-master1.example.com (1): standby
Online: [ fed-master2.example.com fed-master3.example.com ]
Full list of resources:
Resource Group: master
virtual-ip<span class="Apple-tab-span" style="white-space: pre;"> </span>(ocf::heartbeat:IPaddr2):<span class="Apple-tab-span" style="white-space: pre;"> </span>Started fed-master2.example.com
apiserver<span class="Apple-tab-span" style="white-space: pre;"> </span>(systemd:kube-apiserver):<span class="Apple-tab-span" style="white-space: pre;"> </span>Started fed-master2.example.com
scheduler<span class="Apple-tab-span" style="white-space: pre;"> </span>(systemd:kube-scheduler):<span class="Apple-tab-span" style="white-space: pre;"> </span>Started fed-master2.example.com
controller<span class="Apple-tab-span" style="white-space: pre;"> </span>(systemd:kube-controller-manager):<span class="Apple-tab-span" style="white-space: pre;"> </span>Started fed-master2.example.com
PCSD Status:
fed-master1.example.com: Online
fed-master2.example.com: Online
fed-master3.example.com: Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
</code></pre>
You can see that it moved over to fed-master2.example.com. Now, can I still get node status?
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# kubectl -s http://192.168.123.105:8080 get nodes
NAME LABELS STATUS
fed-node1 kubernetes.io/hostname=fed-node1 Ready
fed-node2 kubernetes.io/hostname=fed-node2 Ready
</code></pre>
<br />
Yes. I can. So, enjoy. Maybe deploy some kubernetes apps?
<br />
<br />
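One last housekeeping note: when you're done testing the failover, bring fed-master1 back into rotation:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# pcs cluster unstandby fed-master1.example.com
</code></pre>
<br />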
<b>Package versions:</b><br />
<b><br /></b>
This tech changes quickly, so for reference, here's what I used to set this all up.<br />
<b><br /></b>
Master nodes:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# rpm -qa selinux* kubernetes-master etcd fence-agents-all
fence-agents-all-4.0.16-1.fc22.x86_64
kubernetes-master-0.19.0-0.7.gitb2e9fed.fc22.x86_64
etcd-2.0.11-2.fc22.x86_64
selinux-policy-3.13.1-128.2.fc22.noarch
selinux-policy-targeted-3.13.1-128.2.fc22.noarch
</code></pre>
<br />
Worker nodes:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# rpm -qa kubernetes-node docker flannel selinux*
selinux-policy-3.13.1-128.2.fc22.noarch
selinux-policy-targeted-3.13.1-128.2.fc22.noarch
kubernetes-node-0.19.0-0.7.gitb2e9fed.fc22.x86_64
docker-1.6.0-3.git9d26a07.fc22.x86_64
flannel-0.2.0-7.fc22.x86_64
</code></pre>
<br />
<br />
And that concludes this article. I hope it was helpful. Feel free to leave some comments or suggestions. It would be cool to containerize Pacemaker and get this running on a Fedora Atomic host.<br />
<br />
<br /></div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2177586379122489026.post-57013123238725354792015-06-30T12:54:00.000-05:002015-06-30T13:10:43.982-05:00Running Kubernetes in Offline ModeHere I'll talk about how to run kubernetes on a flight that doesn't have wifi... or a Red Hat Summit hands-on lab that is completely disconnected. In either case, to set some context, this is useful when I'm running a single-host kubernetes configuration for a lab or for development where network access is limited or non-existent.<br />
<br />
The issue is that K8s tries to pull the pause container whenever it launches a pod. To do that, it connects to gcr.io, the Google Container Registry, and downloads the pause image. When you are in a disconnected environment, the pod will sit in a pending state until it can pull down the pause container.<br />
<br />
Here's what you can do to bypass that - at least the only thing I know of: pull the pause container ahead of time, while you still have network access. It helps to know in advance that you'll be in an environment with limited access.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# docker pull gcr.io/google_containers/pause
Trying to pull repository gcr.io/google_containers/pause ...
6c4579af347b: Download complete
511136ea3c5a: Download complete
e244e638e26e: Download complete
Status: Downloaded newer image for gcr.io/google_containers/pause:latest
# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
fedora/apache latest 1eff270e703a 7 days ago 649.7 MB
gcr.io/google_containers/pause 1.0 6c4579af347b 11 months ago 239.8 kB
gcr.io/google_containers/pause go 6c4579af347b 11 months ago 239.8 kB
gcr.io/google_containers/pause latest 6c4579af347b 11 months ago 239.8 kB
</code></pre>
<br />
<br />
<a name='more'></a><br /><br />
Now try to launch a pod:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# kubectl create -f apache.json
# kubectl get pods
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS
apache my-fedora-apache fedora/apache 127.0.0.1/ name=apache Pending
</code></pre>
<br />
The pod is in pending state. You will see the following error if you check the log files.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# journalctl -fl -u kube-apiserver.service -u kube-controller-manager.service -u kube-proxy.service -u kube-scheduler.service -u kubelet.service -u etcd -u docker
<snip>
Jun 30 17:29:11 localhost.localdomain docker[978]: time="2015-06-30T17:29:11Z" level="info" msg="-job pull(docker.io/kubernetes/pause, latest) = ERR (1)"
Jun 30 17:29:11 localhost.localdomain kubelet[1544]: E0630 17:29:11.946950 1544 kubelet.go:1002] Failed to introspect network container: Get https://index.docker.io/v1/repositories/kubernetes/pause/images: dial tcp: lookup index.docker.io: no such host; Skipping pod "apache.default.etcd"
<snip>
</code></pre>
<br />
<br />
You'll now need to tag it such that kubernetes realizes that it's local and is able to pull it.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# docker tag gcr.io/google_containers/pause docker.io/kubernetes/pause
# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
fedora/apache latest 1eff270e703a 7 days ago 649.7 MB
gcr.io/google_containers/pause 1.0 6c4579af347b 11 months ago 239.8 kB
gcr.io/google_containers/pause go 6c4579af347b 11 months ago 239.8 kB
gcr.io/google_containers/pause latest 6c4579af347b 11 months ago 239.8 kB
kubernetes/pause latest 6c4579af347b 11 months ago 239.8 kB
</code></pre>
<br />
<br />
At this point, you should be functional.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# kubectl get pods
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS
apache 172.17.0.2 my-fedora-apache fedora/apache 127.0.0.1/ name=apache Running
</code></pre>
<br />
<br />
You don't need to re-deploy the pod. K8s will pick up on the available pause image and launch the container correctly.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2177586379122489026.post-6555000408580631192015-06-27T21:43:00.002-05:002015-06-29T16:35:59.283-05:00Extending Storage on a Fedora Atomic HostI had to spend some time understanding how to use docker-storage-setup on an Atomic host. The tool docker-storage-setup comes by default and makes the configuration of storage on your Atomic host easier. I didn't read any of the provided documentation (although that probably would have helped) other than the script itself. So, pardon me if this is a duplicate of other info out there. It was a great way to learn more about it. The goal here is to add more disk space to an Atomic host. By default, the cloud image that you download has one device (vda) that is 6GB in size. When I'm testing many, many docker builds and iterating through the Fedora-Dockerfiles repo, that's just not enough space. So, I need to know how to expand it.<br />
<br />
To provide some context, I'm using a local KVM environment to hack around in. The first thing I'll do is add a few extra disks so I can do some testing of docker-storage-setup. Here is what we will be modifying on our running Atomic VM:<br />
<br />
My VM is called: atomic1<br />
New disk 1: vdb (logical name presented to VM)<br />
New disk 2: vdc (logical name presented to VM)<br />
New disk 3: vdd (logical name presented to VM)<br />
<br />
As with anything you do regarding storage, make sure you have a backup.<br />
<br />
Here is what it looks like on the Atomic VM before I add my disks:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# atomic host status
TIMESTAMP (UTC) VERSION ID OSNAME REFSPEC
* 2015-06-27 20:22:47 22.50 0eca6e0777 fedora-atomic fedora-atomic:fedora-atomic/f22/x86_64/docker-host
2015-05-21 19:01:46 22.17 06a63ecfcf fedora-atomic fedora-atomic:fedora-atomic/f22/x86_64/docker-host
# fdisk -l | grep vd
Disk /dev/vda: 6 GiB, 6442450944 bytes, 12582912 sectors
/dev/vda1 * 2048 616447 614400 300M 83 Linux
/dev/vda2 616448 12582911 11966464 5.7G 8e Linux LVM
</code></pre>
<br />
<a name='more'></a><br />
<br />
As you can see, I only have a vda disk. I need to create 3 additional disks. I do this on my KVM hypervisor that I am running Atomic on. In my case it's a Fedora 21 host.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# for i in $(seq 1 3); do qemu-img create -f qcow2 -o preallocation=metadata disk$i.qcow2 4G &> /dev/null; chown qemu.qemu disk$i.qcow2 && chmod 744 disk$i.qcow2; done && ls -ltr disk*
-rwxr--r--. 1 qemu qemu 4295884800 Jun 27 21:22 disk1.qcow2
-rwxr--r--. 1 qemu qemu 4295884800 Jun 27 21:22 disk2.qcow2
-rwxr--r--. 1 qemu qemu 4295884800 Jun 27 21:22 disk3.qcow2
</code></pre>
<br />
<br />
Now that I have 3 new disks, I want to attach them to my running Atomic VM. Note that I am starting with vdb because the VM already has a vda.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# virsh attach-disk atomic1 /extra/libvirt/images/disk1.qcow2 vdb --targetbus virtio --live
Disk attached successfully
# virsh attach-disk atomic1 /extra/libvirt/images/disk2.qcow2 vdc --targetbus virtio --live
Disk attached successfully
# virsh attach-disk atomic1 /extra/libvirt/images/disk3.qcow2 vdd --targetbus virtio --live
Disk attached successfully
</code></pre>
<br />
Now, back on the Atomic VM you can see the new disks. You don't need to partition them, or pvcreate them. The docker-storage-setup script will handle all that.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# fdisk -l | grep vd
Disk /dev/vda: 6 GiB, 6442450944 bytes, 12582912 sectors
/dev/vda1 * 2048 616447 614400 300M 83 Linux
/dev/vda2 616448 12582911 11966464 5.7G 8e Linux LVM
Disk /dev/vdb: 4 GiB, 4295884800 bytes, 8390400 sectors
Disk /dev/vdc: 4 GiB, 4295884800 bytes, 8390400 sectors
Disk /dev/vdd: 4 GiB, 4295884800 bytes, 8390400 sectors
</code></pre>
<br />
<br />
Now, I can start playing around with docker-storage-setup. There are at least two scenarios that I want to evaluate. The first is adding a new disk to my host as a new physical volume, VG and LV. I want that to be what Docker uses for storage. After that, I want to extend that volume with the other two disks. So, when I am finished, I will have a total of ~ 12GB of space for my Docker images. I can get instructions on how to do this by looking at the /bin/docker-storage-setup script. It says:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# This section reads the config file (/etc/sysconfig/docker-storage-setup).
# Currently supported options:
# DEVS: A quoted, space-separated list of devices to be used. This currently
# expects the devices to be unpartitioned drives. If "VG" is not
# specified, then use of the root disk's extra space is implied.
#
# VG: The volume group to use for docker storage. Defaults to the volume
# group where the root filesystem resides. If VG is specified and the
# volume group does not exist, it will be created (which requires that
# "DEVS" be nonempty, since we don't currently support putting a second
# partition on the root disk).
</code></pre>
<br />
Let's take a look at the current configuration.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# docker info
Containers: 0
Images: 0
Storage Driver: devicemapper
Pool Name: atomicos-docker--pool
Pool Blocksize: 65.54 kB
Backing Filesystem: xfs
Data file:
Metadata file:
Data Space Used: 11.8 MB
Data Space Total: 2.961 GB
Data Space Available: 2.949 GB
Metadata Space Used: 49.15 kB
Metadata Space Total: 8.389 MB
Metadata Space Available: 8.339 MB
Udev Sync Supported: true
Library Version: 1.02.93 (2015-01-30)
Execution Driver: native-0.2
Kernel Version: 4.0.6-300.fc22.x86_64
Operating System: Fedora 22 (Twenty Two)
CPUs: 2
Total Memory: 1.954 GiB
Name: atomic-00.localdomain
ID: GBO5:RZYO:SGIO:IVQ4:IGIL:E55A:3YGF:CUWZ:LAAV:6Z4P:2WAI:BPD3
</code></pre>
<br />
You can see that the Pool is atomicos-docker--pool. We want to change that.<br />
<br />
<h2>
Scenario 1</h2>
<br />
For the first scenario, I want to go ahead and add the initial disk. It's really, really easy.<br />
<br />
1. Check the configuration before making the changes.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# pvs
PV VG Fmt Attr PSize PFree
/dev/vda2 atomicos lvm2 a-- 5.70g 0
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
docker-pool atomicos twi-a-tz-- 2.76g 0.40 0.59
root atomicos -wi-ao---- 2.93g
# vgs
VG #PV #LV #SN Attr VSize VFree
atomicos 1 2 0 wz--n- 5.70g 0
</code></pre>
<br />
<br />
2. Create the file /etc/sysconfig/docker-storage-setup with the following entries.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# cat /etc/sysconfig/docker-storage-setup
DEVS="vdb"
VG="test-disk"
</code></pre>
<br />
3. Run the command docker-storage-setup.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# docker-storage-setup
Volume group "test-disk" not found
Cannot process volume group test-disk
0
Checking that no-one is using this disk right now ... OK
Disk /dev/vdb: 4 GiB, 4295884800 bytes, 8390400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
>>> Script header accepted.
>>> Created a new DOS disklabel with disk identifier 0xe737456c.
Created a new partition 1 of type 'Linux LVM' and of size 4 GiB.
/dev/vdb2:
New situation:
Device Boot Start End Sectors Size Id Type
/dev/vdb1 2048 8390399 8388352 4G 8e Linux LVM
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Physical volume "/dev/vdb1" successfully created
Volume group "test-disk" successfully created
NOCHANGE: partition 2 is size 11966464. it cannot be grown
Physical volume "/dev/vda2" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
Rounding up size to full physical extent 8.00 MiB
Logical volume "docker-meta" created.
Logical volume "docker-data" created.
</code></pre>
<br />
4. Restart Docker to consume the new configuration.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# systemctl restart docker
</code></pre>
<br />
5. Check the new configuration.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# pvs
PV VG Fmt Attr PSize PFree
/dev/vda2 atomicos lvm2 a-- 5.70g 0
/dev/vdb1 test-disk lvm2 a-- 4.00g 84.00m
# vgs
VG #PV #LV #SN Attr VSize VFree
atomicos 1 2 0 wz--n- 5.70g 0
test-disk 1 2 0 wz--n- 4.00g 84.00m
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
docker-pool atomicos twi-a-tz-- 2.76g 0.40 0.59
root atomicos -wi-ao---- 2.93g
docker-data test-disk -wi-ao---- 3.91g
docker-meta test-disk -wi-ao---- 8.00m
# docker info
Containers: 0
Images: 0
Storage Driver: devicemapper
Pool Name: docker-253:0-8473021-pool
Pool Blocksize: 65.54 kB
Backing Filesystem: xfs
Data file: /dev/test-disk/docker-data
Metadata file: /dev/test-disk/docker-meta
Data Space Used: 11.8 MB
Data Space Total: 4.194 GB
Data Space Available: 4.183 GB
Metadata Space Used: 53.25 kB
Metadata Space Total: 8.389 MB
Metadata Space Available: 8.335 MB
Udev Sync Supported: true
Library Version: 1.02.93 (2015-01-30)
Execution Driver: native-0.2
Kernel Version: 4.0.6-300.fc22.x86_64
Operating System: Fedora 22 (Twenty Two)
CPUs: 2
Total Memory: 1.954 GiB
Name: atomic-00.localdomain
ID: GBO5:RZYO:SGIO:IVQ4:IGIL:E55A:3YGF:CUWZ:LAAV:6Z4P:2WAI:BPD3
</code></pre>
<br />
Now you can see that we are using the new disk and we have a "Data Space Total" of ~4GB. <br />
<br />
Before using this new storage, you will need to clean out /var/lib/docker and restart Docker. The reason for this is that we are going from one thin pool volume to another.<br />
<br />
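Here's a minimal sketch of that cleanup (destructive: it removes all existing images and containers, so only run it if nothing under /var/lib/docker is worth keeping):<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# systemctl stop docker
# rm -rf /var/lib/docker/*
# systemctl start docker
</code></pre>
<br />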
<h2>
Scenario 2</h2>
<br />
For the second scenario, I want to extend that so we have more space for the data file. Again, really easy.<br />
<br />
1. Modify the /etc/sysconfig/docker-storage-setup file to add the two new disks and run docker-storage-setup.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# cat /etc/sysconfig/docker-storage-setup
DEVS="vdc vdd"
VG="test-disk"
# docker-storage-setup
0
Checking that no-one is using this disk right now ... OK
Disk /dev/vdc: 4 GiB, 4295884800 bytes, 8390400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
>>> Script header accepted.
>>> Created a new DOS disklabel with disk identifier 0x2bd6f997.
Created a new partition 1 of type 'Linux LVM' and of size 4 GiB.
/dev/vdc2:
New situation:
Device Boot Start End Sectors Size Id Type
/dev/vdc1 2048 8390399 8388352 4G 8e Linux LVM
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Physical volume "/dev/vdc1" successfully created
0
Checking that no-one is using this disk right now ... OK
Disk /dev/vdd: 4 GiB, 4295884800 bytes, 8390400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
>>> Script header accepted.
>>> Created a new DOS disklabel with disk identifier 0x4d2c0a6f.
Created a new partition 1 of type 'Linux LVM' and of size 4 GiB.
/dev/vdd2:
New situation:
Device Boot Start End Sectors Size Id Type
/dev/vdd1 2048 8390399 8388352 4G 8e Linux LVM
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Physical volume "/dev/vdd1" successfully created
Volume group "test-disk" successfully extended
NOCHANGE: partition 2 is size 11966464. it cannot be grown
Physical volume "/dev/vda2" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
Rounding size to boundary between physical extents: 16.00 MiB
Size of logical volume test-disk/docker-meta changed from 8.00 MiB (2 extents) to 16.00 MiB (4 extents).
Logical volume docker-meta successfully resized
Size of logical volume test-disk/docker-data changed from 3.91 GiB (1000 extents) to 11.81 GiB (3024 extents).
Logical volume docker-data successfully resized
</code></pre>
<br />
<br />
2. Now restart Docker and check the new configuration.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# systemctl restart docker
# docker info
Containers: 0
Images: 0
Storage Driver: devicemapper
Pool Name: docker-253:0-8473021-pool
Pool Blocksize: 65.54 kB
Backing Filesystem: xfs
Data file: /dev/test-disk/docker-data
Metadata file: /dev/test-disk/docker-meta
Data Space Used: 11.8 MB
Data Space Total: 12.68 GB
Data Space Available: 12.67 GB
Metadata Space Used: 90.11 kB
Metadata Space Total: 16.78 MB
Metadata Space Available: 16.69 MB
Udev Sync Supported: true
Library Version: 1.02.93 (2015-01-30)
Execution Driver: native-0.2
Kernel Version: 4.0.6-300.fc22.x86_64
Operating System: Fedora 22 (Twenty Two)
CPUs: 2
Total Memory: 1.954 GiB
Name: atomic-00.localdomain
ID: GBO5:RZYO:SGIO:IVQ4:IGIL:E55A:3YGF:CUWZ:LAAV:6Z4P:2WAI:BPD3
# pvs
PV VG Fmt Attr PSize PFree
/dev/vda2 atomicos lvm2 a-- 5.70g 0
/dev/vdb1 test-disk lvm2 a-- 4.00g 0
/dev/vdc1 test-disk lvm2 a-- 4.00g 0
/dev/vdd1 test-disk lvm2 a-- 4.00g 164.00m
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
docker-pool atomicos twi-a-tz-- 2.76g 0.40 0.59
root atomicos -wi-ao---- 2.93g
docker-data test-disk -wi-ao---- 11.81g
docker-meta test-disk -wi-ao---- 16.00m
# vgs
VG #PV #LV #SN Attr VSize VFree
atomicos 1 2 0 wz--n- 5.70g 0
test-disk 3 2 0 wz--n- 11.99g 164.00m
</code></pre>
<br />
That's it. Enjoy your new disk space!<br />
<div>
<br /></div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2177586379122489026.post-23121317237042706202015-05-06T21:16:00.001-05:002015-05-06T21:18:23.660-05:00How to Contribute to the "Container Best Practices Guide"Hey there. We are starting a new best practices guide for containers! We'll cover tips and tricks for running containers on <a href="https://getfedora.org/" target="_blank">Fedora</a> (<a href="https://coreos.com/blog/rocket/" target="_blank">rkt</a> or <a href="https://www.docker.com/" target="_blank">Docker</a>), <a href="https://www.centos.org/" target="_blank">CentOS</a>, <a href="http://www.redhat.com/en/technologies/linux-platforms/enterprise-linux" target="_blank">Red Hat Enterprise Linux</a> and <a href="http://www.projectatomic.io/" target="_blank">Atomic</a>. Topics will range from building single-app containers that run on a single host to building containers meant to be orchestrated across multiple hosts with a higher-level tool like OpenShift or Kubernetes.<br />
<br />
Right now we are just getting started with this effort to consolidate container knowledge. Please feel free to have a look at the <a href="https://github.com/projectatomic/container-best-practices" target="_blank">Github repo</a> and contribute by submitting a pull request. The guide is written in asciidoc, so it's very easy to contribute to. There are three ways to render the asciidoc files into PDF or HTML format:<br />
<br />
<ul>
<li>Install the appropriate packages (git asciidoc docbook-xsl fop make) on your Fedora host (see the sketch after this list)</li>
<li>Build your own <a href="https://github.com/fedora-cloud/Fedora-Dockerfiles/tree/master/container-best-practices" target="_blank">container-best-practices</a> (click the link to get the Dockerfile) image and do the processing inside the container</li>
<li>Pull the trusted image from the <a href="https://registry.hub.docker.com/u/fedora/container-best-practices/" target="_blank">Fedora account</a> on the Docker registry by issuing a "docker pull fedora/container-best-practices"</li>
</ul>
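If you go the package-install route, the local workflow might look something like this (a sketch; I'm assuming the repo's Makefile drives the rendering here, the same way it does inside the container below):<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
$ sudo yum -y install git asciidoc docbook-xsl fop make
$ git clone https://github.com/projectatomic/container-best-practices.git
$ cd container-best-practices
$ make
</code></pre>
<br />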
<a name='more'></a>If you choose to go the container route, here's a quick start; it's easy:<br />
<br />
1. To render a new PDF or HTML file from the trusted image on the Docker registry, issue the following command:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
docker run --privileged -dt -v /path/to/cloned/container-best-practices-repo/:/workdir fedora/container-best-practices make
</code></pre>
<br />
2. To clean up the directory and start over, you need to run the same command, except you will add a "make clean" on the end:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
docker run --privileged -dt -v /path/to/cloned/container-best-practices-repo/:/workdir fedora/container-best-practices make clean
</code></pre>
<br />
<br />
In fact, to make it even easier, I have created this video to help get you started.<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/kAr6nuOdRes" width="560"></iframe>
<br />
As mentioned in the video, please stop by #atomic or #fedora-cloud on Freenode if you have any issues or need some guidance. We look forward to collaborating with you. Thanks for stopping by.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2177586379122489026.post-35031036290804240632015-03-18T20:05:00.001-05:002015-03-18T20:05:03.292-05:00Syntax highlighting for asciidoc Cool tip to track here. <br />
<br />
<a href="http://www.methods.co.nz/asciidoc/userguide.html#_vim_syntax_highlighter">http://www.methods.co.nz/asciidoc/userguide.html#_vim_syntax_highlighter</a><br />
<br />
<br />
To enable syntax highlighting:<br />
<ul>
<li>Put a Vim <i>autocmd</i> in your Vim configuration file (see the <a href="http://www.methods.co.nz/asciidoc/userguide.html#X61">example vimrc file</a> and the sketch below).</li>
<li>or execute the Vim command <code>:set syntax=asciidoc</code>.</li>
<li>or add the following line to the end of your AsciiDoc source files:
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
// vim: set syntax=asciidoc:
</code></pre>
</li>
</ul>
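For the autocmd route, here is a minimal sketch of what the vimrc line might look like (the file patterns are my assumption; the linked example vimrc file is the canonical version):<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
" detect AsciiDoc files by extension (patterns are an assumption; adjust as needed)
autocmd BufRead,BufNewFile *.adoc,*.asciidoc setlocal filetype=asciidoc
</code></pre>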
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2177586379122489026.post-64236684025411833662015-01-13T20:24:00.000-06:002015-06-21T18:50:42.935-05:00Flannel and Docker on Fedora - Getting StartedLet's set up 3 Fedora servers for the purposes of testing flannel on Fedora. These can be bare metal or VMs (on KVM, VMware, RHEV, etc.). Why do we want to test this? This is to demonstrate setting up the flannel overlay network and confirming connectivity. Specifically, I want to test container connectivity across hosts. I'd like to make sure that container A on host A can talk to container B on host B. I received quite a bit of guidance from Jeremy Eder of <a href="http://breakage.org/">breakage.org</a> - Thanks for the tips!<br />
<br />
Our 3 Flannel hosts:<br />
<br />
fed-master 192.168.121.105<br />
fed-minion1 192.168.121.166<br />
fed-minion2 192.168.121.108<br />
<br />
A few setup notes: I haven't looked at this on GCE or AWS. It helps to add the hosts to <i>/etc/hosts</i>, or have some other DNS solution. In my case, I set up these VMs in Vagrant on my laptop and modified <i>/etc/hosts</i>, as shown below. <br />
<br />
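For example, with the three hosts above, the <i>/etc/hosts</i> entries would look something like this:<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
192.168.121.105 fed-master
192.168.121.166 fed-minion1
192.168.121.108 fed-minion2
</code></pre>
<br />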
Software used on these Fedora hosts.
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# rpm -qa | egrep "etc|docker|flannel"
flannel-0.2.0-1.fc21.x86_64
docker-io-1.4.0-1.fc21.x86_64
etcd-0.4.6-6.fc21.x86_64
</code></pre>
<br />
<br />
<b>On fed-master:</b><br />
<ul>
</ul>
Look at networking before flannel configuration.<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# ip a
</code></pre>
<br />
<ul>
</ul>
Start etcd on fed-master.<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# systemctl start etcd; systemctl status etcd
</code></pre>
<br />
<a name='more'></a><br />
<ul>
</ul>
Configure Flannel by creating a <i>flannel-config.json</i> in your current directory. The contents should be:<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
{
"Network": "10.0.0.0/16",
"SubnetLen": 24,
"Backend": {
"Type": "vxlan",
"VNI": 1
}
}
</code></pre>
<br />
<ul>
</ul>
Upload the configuration to the <i>etcd</i> server.<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# curl -L http://x.x.x.x:4001/v2/keys/coreos.com/network/config -XPUT --data-urlencode value@flannel-config.json
</code></pre>
<br />
<ul>
</ul>
Verify the key exists.<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# curl -L http://x.x.x.x:4001/v2/keys/coreos.com/network/config
</code></pre>
<br />
<ul>
</ul>
Back up the flannel configuration file.<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# cp /etc/sysconfig/flanneld{,.orig}
</code></pre>
<br />
<ul>
</ul>
Configure flannel to use the correct interface on your system; mine is eth0.<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# sed -i 's/#FLANNEL_OPTIONS=""/FLANNEL_OPTIONS="--iface=eth0"/g' /etc/sysconfig/flanneld
</code></pre>
<br />
The /etc/sysconfig/flanneld file should look like this (substitute your IP in the FLANNEL_ETCD key).<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# grep -v ^\# /etc/sysconfig/flanneld
FLANNEL_ETCD="http://192.168.121.105:4001"
FLANNEL_ETCD_KEY="/coreos.com/network"
FLANNEL_OPTIONS="--iface=eth0"
</code></pre>
<br />
Start up the flanneld service.<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# systemctl restart flanneld
# systemctl status flanneld
</code></pre>
<br />
Check the interfaces on the host now. Notice there is now a flannel.1 interface.<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# ip a
</code></pre>
<br />
Now that fed-master is configured, let's configure the minions (fed-minion{1,2}).<br />
<br />
<b>From the minions:</b><br />
<ul>
</ul>
Use curl to check firewall settings from the minion to the master. We need to ensure connectivity to the <i>etcd</i> service.<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
curl -L http://fed-master:4001/v2/keys/coreos.com/network/config
</code></pre>
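<br />
If that curl fails, the etcd port probably needs to be opened on the master first (run on fed-master; a sketch assuming firewalld, the same commands used later in these notes):<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# firewall-cmd --permanent --zone=public --add-port=4001/tcp
# firewall-cmd --zone=public --add-port=4001/tcp
</code></pre>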
<br />
<b>From the fed-master:</b><br />
<ul>
</ul>
Copy over flannel configuration to the minions.<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# for i in 1 2; do scp /etc/sysconfig/flanneld fed-minion$i:/etc/sysconfig/.; done
</code></pre>
<br />
From master, restart services on the minions.<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# for i in 1 2; do ssh root@fed-minion$i systemctl restart flanneld; done
# for i in 1 2; do ssh root@fed-minion$i systemctl enable flanneld; done
</code></pre>
<br />
From master, check the new interface on the minions.<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# for i in 1 2; do ssh root@fed-minion$i ip a l flannel.1; done
</code></pre>
<br />
From any node in the cluster, check the cluster members by issuing a query to etcd via curl. You should see that three servers have consumed subnets. You can associate each subnet with its server by the MAC address listed in the output.<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# curl -L http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjson.tool
</code></pre>
<br />
From all nodes, review the /run/flannel/subnet.env file. This file was generated automatically by flannel.<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# cat /run/flannel/subnet.env
</code></pre>
<br />
Configure Docker:<br />
<br />
Configure the Docker daemon on each minion. The <i>/usr/lib/systemd/system/docker.service</i> unit file on each minion should look as follows; pay special attention to the items in bold. We are instructing systemd to import and read the <i>/run/flannel/subnet.env</i> file to set up the variables used in the ExecStart key below, specifically setting the Docker bridge IP and the MTU for flannel.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target docker.socket
Requires=docker.socket
[Service]
Type=notify
<b>EnvironmentFile=-/run/flannel/subnet.env</b>
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
ExecStart=/usr/bin/docker -d -H fd:// $OPTIONS $DOCKER_STORAGE_OPTIONS<b> --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}</b>
LimitNOFILE=1048576
LimitNPROC=1048576
[Install]
WantedBy=multi-user.target
</code></pre>
<br />
Remember to issue these commands on each minion.<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
systemctl status docker
</code></pre>
<br />
Check the network on the minion. If Docker fails to load, or the flannel IP is not set correctly, reboot the system. A functioning configuration should look like the following; notice the docker0 and flannel.1 interfaces.<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:15:9f:89 brd ff:ff:ff:ff:ff:ff
inet 192.168.121.166/24 brd 192.168.121.255 scope global dynamic eth0
valid_lft 3349sec preferred_lft 3349sec
inet6 fe80::5054:ff:fe15:9f89/64 scope link
valid_lft forever preferred_lft forever
3: flannel.1: mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether 82:73:b8:b2:2b:fe brd ff:ff:ff:ff:ff:ff
inet 10.0.81.0/16 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::8073:b8ff:feb2:2bfe/64 scope link
valid_lft forever preferred_lft forever
4: docker0: mtu 1500 qdisc noqueue state DOWN group default
link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
inet 10.0.81.1/24 scope global docker0
valid_lft forever preferred_lft forever
</code></pre>
<br />
At this point the flannel cluster is set up and we can test it. We have etcd running on the fed-master node and flannel / Docker running on the fed-minion{1,2} minions. The next step is to test cross-host container communication, which will confirm that Docker and flannel are configured properly.<br />
<br />
From each minion, pull a Docker image for testing. In our case, we'll use fedora:20.<br />
<br />
Issue the following on fed-minion1.<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# docker run -it fedora:20 bash
</code></pre>
<br />
This will place you inside the container. Check the IP address.<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# ip a l eth0
5: eth0: mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:00:51:02 brd ff:ff:ff:ff:ff:ff
inet 10.0.81.2/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:aff:fe00:5102/64 scope link
valid_lft forever preferred_lft forever
</code></pre>
<br />
You can see here that the IP address is on the flannel network.<br />
<br />
Issue the following commands on fed-minion2:<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# docker run -it fedora:20 bash
# ip a l eth0
5: eth0: mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:00:45:02 brd ff:ff:ff:ff:ff:ff
inet 10.0.69.2/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:aff:fe00:4502/64 scope link
valid_lft forever preferred_lft forever
</code></pre>
<br />
Now, from the container running on fed-minion2, ping the container running on fed-minion1:<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# ping 10.0.81.2
PING 10.0.81.2 (10.0.81.2) 56(84) bytes of data.
64 bytes from 10.0.81.2: icmp_seq=2 ttl=62 time=2.93 ms
64 bytes from 10.0.81.2: icmp_seq=3 ttl=62 time=0.376 ms
64 bytes from 10.0.81.2: icmp_seq=4 ttl=62 time=0.306 ms
</code></pre>
<br />
You should have received a reply. That's it. flannel is set up on the two minions and you have cross-host communication. Etcd is set up on the master node. The next step is to overlay the cluster with kubernetes.<br />
<br />
Important links:<br />
<br />
<a href="https://github.com/coreos/flannel" target="_blank">Flannel</a><br />
<a href="https://github.com/coreos/etcd" target="_blank">etcd </a><br />
<a href="https://getfedora.org/" target="_blank">Fedora</a><br />
<a href="http://www.breakage.org/" target="_blank">breakage </a><br />
<a href="https://www.docker.com/" target="_blank">Docker</a> Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2177586379122489026.post-1729881086253835942014-11-11T17:53:00.002-06:002014-11-11T17:59:52.525-06:00RHEL Atomic Documentation - Get StartedLots of good stuff here to get everyone started, with plenty of installation options to choose from.<br />
<br />
Getting Started with Atomic:<br />
<br />
<a href="https://access.redhat.com/articles/881893">https://access.redhat.com/articles/881893</a><br />
<br />
Atomic on RHEV:<br />
<br />
<a href="https://access.redhat.com/articles/rhel-atomic-install-rhev">https://access.redhat.com/articles/rhel-atomic-install-rhev</a><br />
<br />
Atomic on AWS:<br />
<br />
<a href="https://access.redhat.com/articles/rhel-atomic-install-aws">https://access.redhat.com/articles/rhel-atomic-install-aws</a><br />
<br />
<a name='more'></a><br />
Atomic on GCE:<br />
<br />
<a href="https://access.redhat.com/articles/rhel-atomic-install-gce">https://access.redhat.com/articles/rhel-atomic-install-gce</a><br />
<br />
Atomic PXE install Guide:<br />
<br />
<a href="https://access.redhat.com/articles/rhel-atomic-install-pxe">https://access.redhat.com/articles/rhel-atomic-install-pxe</a><br />
<br />
Atomic Kickstart Guide:<br />
<br />
<a href="https://access.redhat.com/articles/rhel-atomic-install-kickstart">https://access.redhat.com/articles/rhel-atomic-install-kickstart</a><br />
<br />
Atomic Anaconda Install Guide:<br />
<br />
<a href="https://access.redhat.com/articles/rhel-atomic-install-anaconda">https://access.redhat.com/articles/rhel-atomic-install-anaconda</a><br />
<br />
Atomic on VMware:<br />
<br />
<a href="https://access.redhat.com/articles/rhel-atomic-install-vmware">https://access.redhat.com/articles/rhel-atomic-install-vmware</a><br />
<br />
#vmware #google #docker #anaconda #linuxUnknownnoreply@blogger.com0tag:blogger.com,1999:blog-2177586379122489026.post-90578262391107445582014-11-01T18:41:00.001-05:002014-11-01T18:41:29.954-05:00Out of Space... Add a disk!I have a Lenovo ThinkPad T540p. This laptop has a Samsung 256GB SSD drive. That drive ran out of space long ago. The good thing is that you can add a disk to the T540p. I went to Lenovo's site and ordered:<br />
<br />
<ul>
<li><a href="http://shop.lenovo.com/SEUILibrary/controller/e/web/LenovoPortal/en_US/catalog.workflow:item.detail?hide_menu_area=true&GroupID=460&Code=0B47315&CID=EDM_NA_US_2014_ORDACKNLG" style="color: #009dd9; font-size: 12px; text-decoration: none;" target="_blank">ThinkPad 9.5mm SATA Hard Drive Bay Adapter IV</a></li>
<li><a href="http://shop.lenovo.com/SEUILibrary/controller/e/web/LenovoPortal/en_US/catalog.workflow:item.detail?hide_menu_area=true&GroupID=460&Code=0B47322&CID=EDM_NA_US_2014_ORDACKNLG" style="color: #009dd9; font-size: 12px; text-decoration: none;" target="_blank">ThinkPad 500GB 7200rpm 7mm SATA3 Hard Drive</a> </li>
</ul>
After installing the drive, I had to figure out how to carve up the new space. There are probably many ways to do this, but I knew I wanted to:<br />
<ul>
<li>Add more space to the /dev/fedora/root LV.</li>
<li>Keep the new, slower disk in a separate VG. I didn't want any LVs to span these two disks.</li>
</ul>
<a name='more'></a>I decided to move the _home_ LV to the new disk. That would free up ~200GB on the SSD drive. I could then take that free space and grow the root LV. I couldn't find a way to just move an LV from one disk to another when the disks are in different VGs; it seems you can only move an LV between disks when the drives are in the same VG. So, in short, what I needed to do was:<br />
<ul>
<li>Add the new disk to the existing VG</li>
<li>Move the home data to the new disk</li>
<li>Split that disk out into its own VG</li>
<li>Expand the root filesystem</li>
</ul>
So, here are the steps I ended up taking: <br />
<ul>
<li>fdisk /dev/sdb # create a partition on the new disk </li>
<li>pvcreate /dev/sdb1 # create a new physical volume from the new disk </li>
<li>vgextend fedora /dev/sdb1 # add the new physical volume to the existing Volume group that home is in </li>
<li>pvdisplay -m # get list of extents for the volume group </li>
<li>pvmove -v /dev/sda2:7500-57499 /dev/sdb1 # move home to new disk </li>
<li>lvchange -an /dev/fedora/home # deactivate home </li>
<li>vgsplit fedora VG_Home /dev/sdb1 # from fedora (old volume group), create a new volume group called VG_Home </li>
<li>lvchange -ay /dev/VG_Home/home # activate the volume </li>
<li>mount /dev/VG_Home/home /home/ # test and see if data is still there </li>
<li>vi /etc/fstab # make new volume avail on boot (see the example below) </li>
<li>lvextend -L+20G fedora/root # add 20GB to the root LV </li>
<li>resize2fs /dev/fedora/root # resize root and add the extra space </li>
<li>df -hal # confirm space is there </li>
</ul>
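The fstab entry for the relocated home volume would look something like this (the filesystem type here is an assumption; check yours with blkid first):<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
/dev/VG_Home/home  /home  ext4  defaults  1 2
</code></pre>
<br />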
And, voilà! Now I have it set up the way I want. <br />
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2177586379122489026.post-43481174920514740542014-09-12T19:18:00.000-05:002014-09-13T13:37:40.981-05:00Simple git rebase exampleSituation:<br />
<br />
So I forked the <a href="https://github.com/GoogleCloudPlatform/kubernetes">Google Kubernetes</a> project. Then I created a fedora_gs_guide branch. I made some changes to the getting started guide and then I submitted a pull request. I asked someone to review it and they had a couple of changes. So, I made the changes and committed. Now when I look at the PR, I see multiple commits. I think it's best practice to squash all those commits into one if possible. So, how do you do that? Well, here's how I did it. I'm sure there are other ways to do this, probably a lot more efficiently (comments welcome). But, it worked. These are my notes from the process.<br />
<br />
I needed to make sure my master and fedora_gs_guide branch were clean and rebased to upstream master.<br />
<br />
Make sure I'm on my local master.<br />
<br />
$ git checkout master <br />
<br />
<a name='more'></a><br /><br />
Add the 'upstream' remote: <br />
<br />
$ git remote add upstream https://github.com/GoogleCloudPlatform/kubernetes.git<br />
<br />
Then fetch all updates from the 'upstream' remote:<br />
<br />
$ git fetch upstream master<br />
<br />
Since my master was behind the upstream master, I needed to update my copy on github:<br />
<br />
$ git push origin master<br />
<br />
Now my master is clean. So now I need to make sure my feature branch is caught up with master:<br />
<br />
$ git checkout fedora_gs_guide<br />
$ git merge master<br />
<br />
I push my current feature branch up to github to bring the remote branch up to date:<br />
<br />
$ git push origin fedora_gs_guide<br />
<br />
Now that I'm all merged and cleaned up, I can start the rebase. From what I understand, it's best to separate those two steps. So, I do an interactive rebase in order to squash the commits that I want. When you go into interactive mode like this, it will show you all the commits that are in that feature branch that are available for squashing. In order to squash a commit into the previous commit, you just replace the word "pick" with "squash" at the start of the line. <br />
<br />
$ git rebase -i upstream/master<br />
<br />
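For illustration, the buffer that opens might look something like this (the commit hashes and messages here are hypothetical):<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
pick 1a2b3c4 Add Fedora getting started guide
squash 5d6e7f8 Address review feedback
# lines starting with '#' are comments; the squash line folds that commit into the one above
</code></pre>
<br />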
After squashing the commits, I need to push the new squashed commit to my feature branch.<br />
<br />
$ git push -f origin fedora_gs_guide<br />
<br />
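Before checking the PR, you can verify locally that the branch now carries a single commit on top of upstream (a quick sanity check using the remotes set up earlier):<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
$ git log --oneline upstream/master..fedora_gs_guide
</code></pre>
<br />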
Now when I check the PR on github, I only see the one commit, which is what I intended. Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2177586379122489026.post-70262673311178666322014-07-30T17:10:00.002-05:002014-09-16T22:32:16.634-05:00Getting Started with Kubernetes / Docker on FedoraEDIT 9/16/2014 ***********************
<br />
I have taken these instructions and put them on the kubernetes github repo:<br />
<br />
<a href="https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs/getting-started-guides/fedora" target="_blank">Kubernetes Gitub </a><br /><br />
End EDIT ******************<br />
<br />
<br />
These are my notes on how to get started evaluating a <a href="http://fedoraproject.org/" target="_blank">Fedora</a> / <a href="https://www.docker.com/" target="_blank">Docker</a> / <a href="https://github.com/GoogleCloudPlatform/kubernetes" target="_blank">kubernetes</a> environment. I'm going to start with two hosts. Both will run Fedora rawhide. The goal is to stand up both hosts with kubernetes / Docker and use kubernetes to orchestrate the deployment of a couple of simple applications. <a href="https://github.com/derekwaynecarr" target="_blank">Derek Carr</a> has already put together a great tutorial on getting a kubernetes environment up using vagrant. However, that process is quite automated and I need to set it all up from scratch.<br />
<br />
Install Fedora rawhide using the instructions from <a href="https://fedoraproject.org/wiki/Releases/Rawhide" target="_blank">here</a>. I just downloaded the boot.iso file and used KVM to deploy the Fedora rawhide hosts. My hosts names are: fed{1,2}.<br />
<br />
The kubernetes package provides four services: apiserver, controller, kubelet, proxy. These services are managed by systemd unit files. We will break the services up between the hosts. The first host, fed1, will be the kubernetes master. This host will run the apiserver and controller. The remaining host, fed2, will be the minion and run kubelet, proxy and docker.<br />
<br />
This is all changing rapidly, so if you walk through this and see any errors or something that needs to be updated, please let me know via comments below. <br />
<br />
So let's get started.<br />
<br />
<a name='more'></a><br />
Hosts:<br />
fed1 = 10.x.x.241<br />
fed2 = 10.x.x.240<br />
<br />
Versions (Check the kubernetes / etcd version after installing the packages):<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# cat /etc/redhat-release
Fedora release 22 (Rawhide)
# rpm -q etcd kubernetes
etcd-0.4.5-11.fc22.x86_64
kubernetes-0-0.0.8.gitc78206d.fc22.x86_64
</code>
</pre>
<br />
1. Enable the copr repos on all hosts. <a href="http://blog.verbum.org/" target="_blank">Colin Walters</a> has already built the appropriate etcd / kubernetes packages for rawhide. You can see the copr repo <a href="http://copr.fedoraproject.org/coprs/walters/atomic-next/" target="_blank">here</a>.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# yum -y install dnf dnf-plugins-core
# dnf copr enable walters/atomic-next
# yum repolist walters-atomic-next/x86_64
Loaded plugins: langpacks
repo id repo name status
walters-atomic-next/x86_64 Copr repo for atomic-next owned by walters 37
repolist: 37
</code>
</pre>
2. Install kubernetes on all hosts - fed{1,2}. This will also pull in <a href="https://github.com/coreos/etcd" target="_blank">etcd</a>.<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# yum -y install kubernetes
</code>
</pre>
3. Pick a host and explore the packages.
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# rpm -qi kubernetes
# rpm -qc kubernetes
# rpm -ql kubernetes
# rpm -ql etcd
# rpm -qi etcd
</code>
</pre>
4. Configure fed1.
<br />
<br />
Export the etcd and kube master variables so the services know where to go.
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# export KUBE_ETCD_SERVERS=10.x.x.241
# export KUBE_MASTER=10.x.x.241
</code>
</pre>
These are my service files for apiserver, etcd and controller. They have been changed from what was distributed with the package.<br />
<br />
Copy these to /etc/systemd/system/, using -Z to maintain the proper SELinux context on them. We will change the files in /etc/systemd/system, leaving the ones in /usr the same.
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# cp -Z /usr/lib/systemd/system/kubernetes-apiserver.service /etc/systemd/system/.
# cp -Z /usr/lib/systemd/system/kubernetes-controller-manager.service /etc/systemd/system/.
# cp -Z /usr/lib/systemd/system/etcd.service /etc/systemd/system/.
</code>
</pre>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# cat /etc/systemd/system/kubernetes-apiserver.service
[Unit]
Description=Kubernetes API Server
[Service]
ExecStart=/usr/bin/kubernetes-apiserver --logtostderr=true -etcd_servers=http://localhost:4001 -address=127.0.0.1 -port=8080 -machines=10.x.x.240
[Install]
WantedBy=multi-user.target
# cat /etc/systemd/system/kubernetes-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
[Service]
ExecStart=/usr/bin/kubernetes-controller-manager --logtostderr=true --etcd_servers=$KUBE_ETCD_SERVERS --master=$KUBE_MASTER
[Install]
WantedBy=multi-user.target
# cat /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
[Service]
Type=simple
# etcd logs to the journal directly, suppress double logging
StandardOutput=null
WorkingDirectory=/var/lib/etcd
ExecStart=/usr/bin/etcd
[Install]
WantedBy=multi-user.target
</code>
</pre>
Start the appropriate services on fed1.
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# systemctl daemon-reload
# systemctl restart etcd
# systemctl status etcd
# systemctl enable etcd
# systemctl restart kubernetes-apiserver.service
# systemctl status kubernetes-apiserver.service
# systemctl enable kubernetes-apiserver.service
# systemctl restart kubernetes-controller-manager
# systemctl status kubernetes-controller-manager
# systemctl enable kubernetes-controller-manager
</code>
</pre>
Test etcd on the master (fed1) and make sure it's working.<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
curl -L http://127.0.0.1:4001/v2/keys/mykey -XPUT -d value="this is awesome"
curl -L http://127.0.0.1:4001/v2/keys/mykey
curl -L http://127.0.0.1:4001/version
</code>
</pre>
I got those examples from the CoreOS <a href="https://github.com/coreos/etcd" target="_blank">github</a> page.<br />
<br />
Open up the ports for etcd and the kubernetes API server on the master (fed1).<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# firewall-cmd --permanent --zone=public --add-port=4001/tcp
# firewall-cmd --zone=public --add-port=4001/tcp
# firewall-cmd --permanent --zone=public --add-port=8080/tcp
# firewall-cmd --zone=public --add-port=8080/tcp
</code>
</pre>
Take a look at what ports the services are running on.
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# netstat -tulnp
</code>
</pre>
5. Configure fed2<br />
<br />
These are my service files. They have been changed from what was distributed with the package.<br />
<br />
Copy the unit files to /etc/systemd/system/. and make edits there. Don't modify the unit files in /usr/lib/systemd/system/.<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# cp -Z /usr/lib/systemd/system/kubernetes-kubelet.service /etc/systemd/system/.
# cp -Z /usr/lib/systemd/system/kubernetes-proxy.service /etc/systemd/system/.
</code>
</pre>
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# cat /etc/systemd/system/kubernetes-kubelet.service
[Unit]
Description=Kubernetes Kubelet
[Service]
ExecStart=/usr/bin/kubernetes-kubelet --logtostderr=true -etcd_servers=http://10.x.x.241:4001 -address=10.x.x.240 -hostname_override=10.x.x.240
[Install]
WantedBy=multi-user.target
# cat /etc/systemd/system/kubernetes-proxy.service
[Unit]
Description=Kubernetes Proxy
[Service]
ExecStart=/usr/bin/kubernetes-proxy --logtostderr=true -etcd_servers=http://10.x.x.241:4001
[Install]
WantedBy=multi-user.target
</code>
</pre>
Start the appropriate services on fed2.
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# systemctl daemon-reload
# systemctl enable kubernetes-proxy.service
# systemctl restart kubernetes-proxy.service
# systemctl status kubernetes-proxy.service
# systemctl enable kubernetes-kubelet.service
# systemctl restart kubernetes-kubelet.service
# systemctl status kubernetes-kubelet.service
# systemctl restart docker
# systemctl status docker
# systemctl enable docker
</code>
</pre>
Take a look at what ports the services are running on. <br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# netstat -tulnp
</code>
</pre>
Open up the port for the kubernetes kubelet server on the minion (fed2).<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# firewall-cmd --permanent --zone=public --add-port=10250/tcp
# firewall-cmd --zone=public --add-port=10250/tcp
</code>
</pre>
Now the two servers are set up to kick off a sample application. In this case, we'll deploy a web server to fed2. Start off by making a file in root's home directory on fed1 called apache.json that looks like this:
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# cat apache.json
{
"id": "apache",
"desiredState": {
"manifest": {
"version": "v1beta1",
"id": "apache-1",
"containers": [{
"name": "master",
"image": "fedora/apache",
"ports": [{
"containerPort": 80,
"hostPort": 80
}]
}]
}
},
"labels": {
"name": "apache"
}
}
</code>
</pre>
This json file is describing the attributes of the application environment. For example, it is giving it an "id", "name", "ports", and "image". Since the fedora/apache image doesn't exist in our environment yet, it will be pulled down automatically as part of the deployment process. I have seen errors though where kubernetes was looking for a cached image. In that case I did a manual "docker pull fedora/apache" and that seemed to resolve it.<br />
For more information about which options can go in the schema, check out the docs on the kubernetes <a href="https://raw.githubusercontent.com/GoogleCloudPlatform/kubernetes/master/api/doc/pod-schema.json" target="_blank">github page</a>. <br />
<br />
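If you hit that cached-image error, the manual pull mentioned above is simply (run on the minion):<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# docker pull fedora/apache
</code></pre>
<br />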
Now, deploy the fedora/apache image via the apache.json file.
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# /usr/bin/kubernetes-kubecfg -c apache.json create pods
</code>
</pre>
You can monitor progress of the operations with these commands:<br />
On the master (fed1) -
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# journalctl -f -xn -u kubernetes-apiserver -u etcd -u kubernetes-kubelet -u docker
</code>
</pre>
On the minion (fed2) -
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# journalctl -f -xn -u kubernetes-kubelet.service -u kubernetes-proxy -u docker
</code>
</pre>
This is what a successful result should look like:<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# /usr/bin/kubernetes-kubecfg -c apache.json create pods
I0730 15:13:48.535653 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:14:08.538052 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:14:28.539936 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:14:48.542192 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:15:08.543649 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:15:28.545475 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:15:48.547008 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:16:08.548512 27880 request.go:220] Waiting for completion of /operations/8
Name Image(s) Host Labels
---------- ---------- ---------- ----------
apache fedora/apache / name=apache
</code>
</pre>
After the pod is deployed, you can also list the pod.
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# /usr/bin/kubernetes-kubecfg list pods
Name Image(s) Host Labels
---------- ---------- ---------- ----------
apache fedora/apache 10.x.x.240/ name=apache
redis-master-2 dockerfile/redis 10.x.x.240/ name=redis-master
</code>
</pre>
You can get even more information about the pod like this.
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# /usr/bin/kubernetes-kubecfg -json get pods/apache
</code>
</pre>
Finally, on the minion (fed2), check that the service is available, running, and functioning.
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
# docker images | grep fedora
fedora/apache latest 6927a389deb6 10 weeks ago 450.6 MB
# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d5871fc9af31 fedora/apache:latest /run-apache.sh 9 minutes ago Up 9 minutes k8s--master--apache--8d060183
# curl http://localhost
Apache
</code>
</pre>
To delete the pod:
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
/usr/bin/kubernetes-kubecfg -h http://127.0.0.1:8080 delete /pods/apache
</code>
</pre>
That's it.<br />
<br />
Of course this just scratches the surface. I recommend you head off to the kubernetes github page and follow the <a href="https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook" target="_blank">guestbook</a> example. It's a bit more complicated but should expose you to more functionality.<br />
<br />
You can also play around with other Fedora images by building from the Fedora Dockerfiles; check <a href="https://github.com/fedora-cloud/Fedora-Dockerfiles" target="_blank">here</a> on GitHub.
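For example, building one of the images locally might look like this (the ssh subdirectory and image tag are illustrative; pick any directory in the repo that contains a Dockerfile):
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
$ git clone https://github.com/fedora-cloud/Fedora-Dockerfiles.git
$ cd Fedora-Dockerfiles/ssh
$ docker build -t fedora/ssh .
$ docker images | grep fedora/ssh
</code>
</pre>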
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2177586379122489026.post-6649798322235294192014-07-03T14:36:00.000-05:002014-07-18T20:50:47.940-05:00Getting Started with goI have been following the progress of <a href="http://www.jasondotstar.com/" target="_blank">Jason</a> and his 180-day coding challenge. I'm going to try something similar, except that I'm going to work on Go. The only problem is that I can't start for 10 days because of some PTO I have to take starting tomorrow. Having said that, I'm throwing down the gauntlet now, and when I get back I'll post every day on my progress. A few rules, per Jason's post above - yes, I did steal these directly from him, with one change to the first rule:<br />
<br />
<ol>
<li>Every business day for a minimum of 30 minutes, I must write code or learn about the tool-chain used in the development process. Documentation about the code does not count.</li>
<li>The resulting code must be useful, or it should be code that points
towards something that eventually will be. No tweaking indentation, no
code re-formatting, and if at all possible no re-factoring. (All these
things are permitted, but not as the exclusive work of the day.)
Tutorials and working through code examples as a means to learn are
allowed.</li>
<li>All code must be written after 6 AM and before midnight.</li>
<li>The code must be Open Source and posted on Github.</li>
</ol>
My current status:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-oRlHk8DiMxE/U7Wzs-x_DYI/AAAAAAAADZM/Frabe-xrEsA/s1600/github.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-oRlHk8DiMxE/U7Wzs-x_DYI/AAAAAAAADZM/Frabe-xrEsA/s1600/github.png" height="66" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
I have signed up for the Pluralsight online Go class <a href="http://pluralsight.com/training/Courses/TableOfContents/go" target="_blank">here</a>. So far, I have made it to the "Variables, Types and Pointers" section, but we haven't written much code yet, so I won't count that.<br />
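To give a flavor of what that section covers, here is a minimal sketch of my own (not taken from the course) showing variable declarations, type inference, and pointers:
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
package main

import "fmt"

func main() {
    // A var declaration with an explicit type, and a short
    // declaration where the type (int) is inferred.
    var greeting string = "hello"
    count := 42

    // &amp;count takes the address of count, so p has type *int;
    // dereferencing p reads or writes the value it points to.
    p := &amp;count
    *p = *p + 1

    fmt.Println(greeting, count) // prints: hello 43
}
</code>
</pre>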
<br />
In addition, I have downloaded and installed the <a href="http://www.jetbrains.com/idea/" target="_blank">IntelliJ</a> IDEA 13.1.3 IDE, Community Edition. The online class uses it, so I figured I'd give it a try. It was relatively easy to set up; I should write a quick post on what I did for reference. I'm also going to evaluate the vim plug-ins out there for writing Go more efficiently.<br />
<br />
<br />
So, when I get back, I plan on kicking this off full steam ahead.<br />
<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2177586379122489026.post-18326870301565193932014-05-25T13:58:00.000-05:002014-05-25T13:58:06.247-05:00Quick VIM Macro ReferencePulled from: <a href="http://vim.wikia.com/wiki/Macros" target="_blank">Here</a><br />
<br />
<table class="cleartable"><tbody>
<tr><td><code>qd</code> </td><td> start recording to register <code>d</code>
</td></tr>
<tr>
<td> <code>...</code> </td><td> your complex series of commands
</td></tr>
<tr>
<td> <code>q</code> </td><td> stop recording
</td></tr>
<tr>
<td> <code>@d</code> </td><td> execute your macro
</td></tr>
<tr>
<td> <code>@@</code> </td><td> execute your macro again
</td></tr>
</tbody></table>
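A concrete example: the following records a macro into register <code>d</code> that appends a semicolon to the end of the current line and moves down one line, then replays it over the next five lines:
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
qd          start recording into register d
A;&lt;Esc&gt;     append a ; at the end of the line, then return to normal mode
j           move down to the next line
q           stop recording
5@d         replay the macro on the next five lines
</code>
</pre>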
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2177586379122489026.post-24762030574089652692014-05-13T10:39:00.001-05:002014-05-13T10:42:24.845-05:00Moving Fedora Dockerfiles - It's Official!Today I moved my Fedora-Dockerfiles repo to the Fedora Cloud SIG GitHub organization. All forks, stars, wiki pages, etc. were maintained during the transfer. This repo will still be the source for the fedora-dockerfiles package. The new location is:<br />
<br />
<a href="https://github.com/fedora-cloud/Fedora-Dockerfiles">https://github.com/fedora-cloud/Fedora-Dockerfiles</a><br />
<br />
This makes it a bit more official. I will stay involved with maintaining Fedora-Dockerfiles moving forward. Also, please have a look at some of the work going on in the <a href="https://fedoraproject.org/wiki/Cloud_SIG" target="_blank">Fedora Cloud SIG here</a>. There are always opportunities to help out. Exciting times!<br />
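If you already have a clone or fork that points at the old location, updating it should be as simple as this (assuming your remote is named origin; adjust if yours differs):
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"> <code style="color: black; word-wrap: normal;">
$ cd Fedora-Dockerfiles
$ git remote set-url origin https://github.com/fedora-cloud/Fedora-Dockerfiles.git
$ git remote -v
$ git pull
</code>
</pre>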
<br />
<br />Unknownnoreply@blogger.com0