Wednesday, April 1, 2020

DNS on AWS / GCP

I have a zone that's hosted by AWS route53 called sysdeseng.com. My goal was to create a few machines on GCP and have them resolve under a delegated subdomain.  For example:

test.scollier-gcp.sysdeseng.com

These are the steps required to do this (a scripted version of the same flow follows the walkthrough):

1. Create the zone on GCP's "Network Services", "Cloud DNS"
  • Give it a Zone name
  • Give it a DNS name: scollier-gcp.sysdeseng.com
  • Provide a description
  • Click Create
  • Note the name server (NS) records, for example:
    • ns-cloud-a1.googledomains.com.
    • ns-cloud-a2.googledomains.com.
    • ns-cloud-a3.googledomains.com.
    • ns-cloud-a4.googledomains.com. 
2. Go to AWS route53 and create a NS record for this zone under the sysdeseng.com domain.
  • Click on the sysdeseng.com zone in route53
  • Create a record set
    • On the right hand side, provide the name: scollier-gcp
    • Change the type to NS
    • Copy the nameservers from GCP and paste into the NS record.
    • Click Create
3. Create the A record on GCP
  • Return to GCP
    • Go to "VPC Network", then "External IP Addresses"
      • Create an external IP address, note it
    • Go back to GCP's "Network Services", "Cloud DNS" and click the zone
    • Add a record set
      • Give it a DNS name and set the type to A
      • Provide the external IP address 
      • Click Create
4. Test that it works
  • Go to Linux terminal
$ dig +short testing.scollier-gcp.sysdeseng.com.
   34.67.155.244

$ dig +short SOA scollier-gcp.sysdeseng.com
ns-cloud-a1.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 259200 300

$ dig +short SOA sysdeseng.com
ns-679.awsdns-20.net. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400
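
The same flow can be scripted end to end. Here's a rough CLI sketch of the steps above (the Route 53 hosted zone ID and the region are placeholders or assumptions for my environment):

$ gcloud dns managed-zones create scollier-gcp \
    --dns-name="scollier-gcp.sysdeseng.com." \
    --description="Delegated subdomain for GCP machines"

$ gcloud dns managed-zones describe scollier-gcp --format="value(nameServers)"

$ cat > delegate.json <<'EOF'
{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "scollier-gcp.sysdeseng.com.",
      "Type": "NS",
      "TTL": 300,
      "ResourceRecords": [
        {"Value": "ns-cloud-a1.googledomains.com."},
        {"Value": "ns-cloud-a2.googledomains.com."},
        {"Value": "ns-cloud-a3.googledomains.com."},
        {"Value": "ns-cloud-a4.googledomains.com."}
      ]
    }
  }]
}
EOF

$ aws route53 change-resource-record-sets \
    --hosted-zone-id <sysdeseng.com hosted zone ID> \
    --change-batch file://delegate.json

$ gcloud compute addresses create test-ip --region us-central1

$ gcloud dns record-sets transaction start --zone=scollier-gcp
$ gcloud dns record-sets transaction add "34.67.155.244" \
    --zone=scollier-gcp --name="test.scollier-gcp.sysdeseng.com." \
    --type=A --ttl=300
$ gcloud dns record-sets transaction execute --zone=scollier-gcp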

Thursday, June 1, 2017

Kicking the tires of Prometheus using Docker on Fedora

Straight from the Prometheus documentation: "Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud."

I hadn't had a chance to take a look at Prometheus until now.  Here I'll go over the steps I followed to get a working local Prometheus install monitoring my local Docker daemon, so I could see metrics through the Prometheus dashboard.

First things first, here are the versions of what I am using (these matter once we get to the problem described below):
  • Fedora 25
  • Docker
    • docker-1.12.6-6.gitae7d637.fc25.x86_64
    • docker-common-1.12.6-6.gitae7d637.fc25.x86_64
    • docker-latest-1.12.6-2.git51ef5a8.fc25.x86_64
  • Prometheus
    • prom/prometheus b0195cb1a666

    So, there were a couple of places I went for documentation to get started:

    • Prometheus
    • Docker

    So, following those docs, I tried to use the default Fedora Docker configuration.  That did not work.  The Docker documentation was off, at least for the version of Docker I am using.  By default, in Fedora, you get a Docker package that is a bit out of date.  Here are the steps I took and what I had to do as a workaround.
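
    For reference, the documented starting point boils down to running the prom/prometheus image with a config file bind-mounted in.  Here's a minimal sketch (the config just has Prometheus scrape itself; the file name, path, and the SELinux :z volume label are my choices for Fedora):

    $ cat > prometheus.yml <<'EOF'
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: prometheus
        static_configs:
          - targets: ['localhost:9090']
    EOF

    $ docker run -d --name prometheus -p 9090:9090 \
        -v "$PWD/prometheus.yml:/etc/prometheus/prometheus.yml:z" \
        prom/prometheus

    Once the container is up, the Prometheus dashboard is at http://localhost:9090.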

    Saturday, February 4, 2017

    Testing OpenShift on Openstack using Snapshots


    The goal here is to allow me to test out OpenShift Container Platform on top of Red Hat OpenStack Platform.  I want to be able to build and tear down the environment quickly so I can check out different configurations.  OpenStack provides a way for me to do this via snapshots.

    The first thing I did was upload a RHEL 7 image.  Then I booted and configured two servers from that image:
    • Bastion Host
    • Master-Infra-AppNode
    To configure these servers, I followed the Red Hat reference architecture Red Hat OpenShift Container Platform 3 on Red Hat OpenStack Platform 8 up to page 47, right before deploying OpenShift Container Platform.  This let me update the servers, configure the interfaces, set up sudo access, and so on.  Here is what my servers look like:

    
    $ nova list
    +--------------------------------------+---------------------------+---------+------------+-------------+------------------------------------------------------------------------+
    | ID                                   | Name                      | Status  | Task State | Power State | Networks                                                               |
    +--------------------------------------+---------------------------+---------+------------+-------------+------------------------------------------------------------------------+
    | 82a42602-030f-4137-94bb-bac5f275dc1b | bastion-gold              | SHUTOFF | -          | Shutdown    | tenant-network=172.18.20.13; control-network=192.168.x.6, 10.19.x.80 |
    | 17a505d0-9252-4a65-a0c8-196f6f25e605 | master-infra-appnode-gold | SHUTOFF | -          | Shutdown    | tenant-network=172.18.20.4; control-network=192.168.x.5, 10.19.x.53  |
    +--------------------------------------+---------------------------+---------+------------+-------------+------------------------------------------------------------------------+
    

    After the servers were configured, I shut them down and created an image from each of them, called "bastion-gold" and "master-infra-appnode-gold".  This will allow me to create my OpenShift Container Platform environment from these images.  The steps I followed to create the snapshots are:

    $ openstack server list
    $ nova image-create --poll master-infra-appnode-gold sc-master-0.rhops.eng.x.x.redhat.com-snap
    $ nova image-create --poll master-infra-appnode-gold sc-node-0.rhops.eng.x.x.redhat.com-snap
    $ nova image-create --poll master-infra-appnode-gold sc-node-1.rhops.eng.x.x.redhat.com-snap
    $ nova image-create --poll bastion-gold sc-bastion.rhops.eng.x.x.redhat.com-snap
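
    With the gold images in place, standing the environment back up is just booting from the snapshots (a sketch; the flavor and network ID are placeholders for my environment):

    $ nova boot --image sc-master-0.rhops.eng.x.x.redhat.com-snap \
        --flavor m1.xlarge --nic net-id=<tenant-network-id> sc-master-0
    $ nova boot --image sc-bastion.rhops.eng.x.x.redhat.com-snap \
        --flavor m1.medium --nic net-id=<tenant-network-id> sc-bastion

    A 'nova delete' tears a server down, and the same boot command puts it right back, which is the whole point of the snapshot approach.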

    Wednesday, August 31, 2016

    OpenShift Cluster Up on Fedora


    Looking for a quick way to get an OpenShift Origin instance up and running on your local laptop?  Look no further: 'oc cluster up' is here.  Check out the documentation here, which points you here for the actual client bits.  Let's get started.

    A quick scan of the environment before running 'oc cluster up', so I know what I'm starting with.

    $ cat /etc/fedora-release
    Fedora release 24 (Twenty Four)
    
    $ docker --version
    Docker version 1.10.3, build 1ecb834/1.10.3
    
    $ docker ps
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    
    $ docker images
    
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
    

    Grab the latest client, untar it, change into the proper directory and get the version.

    $ wget https://github.com/openshift/origin/releases/download/v1.3.0-alpha.3/openshift-origin-client-tools-v1.3.0-alpha.3-7998ae4-linux-64bit.tar.gz
    
    $ tar xzvf openshift-origin-client-tools-v1.3.0-alpha.3-7998ae4-linux-64bit.tar.gz
    
    $ cd openshift-origin-client-tools-v1.3.0-alpha.3-7998ae4-linux-64bit/
    
    $ ./oc version
    oc v1.3.0-alpha.3
    kubernetes v1.3.0+507d3a7
    features: Basic-Auth GSSAPI Kerberos SPNEGO
    

    Start the cluster.
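
    Starting it is a single command (run with sudo so oc can drive the Docker daemon; 'oc cluster up' from this era also expects Docker to trust the 172.30.0.0/16 insecure registry range, and will tell you if that's missing):

    $ sudo ./oc cluster up

    When it finishes, it prints the web console URL and leaves you logged in as a regular user, ready for 'oc new-app' and friends.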

    Friday, August 5, 2016

    Fedora Flock - 2016 - Day 4 - Last Day

    Day 4, the last day, only had two sessions that I planned to attend.  I started out by going to an "Ansible best practices Working Session" by Michael Scherer.  The goal was to cover Ansible basics, best practices, and how they apply them to the Fedora Infrastructure.  One example he used was checking the checksums on files before you replace them and restart services with Ansible, ssh config files in particular; you can imagine that if you restart ssh on your clusters across datacenters and break ssh, there is no more Ansible (sketched below).  Another best practice is to leverage the pkg module, which can determine which package manager the host is using and adjust accordingly.  The third best practice was to be careful about how you assign variables; try to use local scope when possible.  The Fedora Infrastructure team keeps their Ansible code here.  Michael spent quite a bit of time covering the organization of their playbooks, roles, groups, tasks, and handlers, and mentioned a couple of tools that might be helpful.
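
    That ssh example is worth making concrete.  One way to read the advice in plain shell (a hypothetical sketch, not the session's actual playbook): only touch the file when its contents actually differ, and sanity-check the new config before restarting.

    $ if ! cmp -s /etc/ssh/sshd_config.new /etc/ssh/sshd_config; then
        sshd -t -f /etc/ssh/sshd_config.new \
            && cp /etc/ssh/sshd_config.new /etc/ssh/sshd_config \
            && systemctl restart sshd
      fi

    Ansible's copy and template modules do the checksum comparison for you, and their validate parameter covers the syntax check.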
    Michael also walked through a playbook in detail with the upstream maintainer of the mailman package.  There were quite a few best practices thrown around for that one.

    After the session Michael and I had a chance to sit down and talk about the OpenShift Origin deployment for the cloud working group.   Some decisions need to be made:
    • Deploy Origin containers on Fedora Atomic or Fedora 24
    • Set some expectations that this may be redeployed while we are learning?
    • Bare metal or OpenStack?
    • Storage for registry?  Swift (not in Fedora?)  NFS (hope not)
    • Architecture: What do we want to deploy?  Doesn't have to be production quality.
    So that's the great thing about coming to a conference like this: you get a chance to put faces to names and talk about fun, important projects.  Flock is now over, and my overall impression is that this conference was run very well.  Lots of activities, the food was great, the people were great, the sessions were great.  I'm looking forward to my next Flock.


    Thursday, August 4, 2016

    Fedora Flock - 2016 - Day 3

    Third day!  Before I get started on my session logging today, check out the picture of all the attendees at Flock that we took last night before the cruise in Krakow.


    Today we started with lightning talks for an hour.  I was second up and presented +OpenShift on +Fedora Project.  That was my first time presenting a lightning talk and my first time attending other lightning talks.  I really like the format for both.  You'd be surprised at how much material you can cover in 5 minutes.

    Today is also hack session day.  The sessions I am attending are "Building a Fedora Containers Library", "OpenShift on Fedora", and "Fedora PRD Workshop".  The sessions are two hours each.

    +Josh Berkus kicked off "Building a Fedora Containers Library" with a slide that had instructions to git clone the lab material.  That's the proper way to start a workshop :). Josh walked us through building a +PostgreSQL image step by step, with lots of best practices discussed along the way.  This session was particularly insightful because Josh is, well... extremely knowledgeable on PostgreSQL.  That knowledge, coupled with his Docker chops, translated into an outstanding session.  Great hack session.

    Next was "OpenShift on Fedora", a hack session led by Maciej Szulik.  The material for the lab is located here.  We started out by leveraging Vagrant to spin up an environment in which we could issue an "oc cluster up", which spins up everything you need to get started.  The lab consisted of deploying pods and exploring pods, services, replication controllers, and so on.  Maciej did a great job explaining some concepts in OpenShift that I wasn't really getting, such as deployment configs, image streams, and horizontal scaling.  I didn't quite finish the lab, but the good thing is you can take it with you in the Vagrant box and the material on GitHub.  Great hack session.

    Wednesday, August 3, 2016

    Fedora Flock - 2016 - Day 2

    Day 2 starts soon.  Again, this will be high level notes from each session that I attend.  I'm quite sure that I won't capture everything.  Head here for my Day 1 notes.

    Today started out with "Continuous Integration and the Glorious Future".  Tim kicked it off with some CI history: dev, dev, dev, then integration.  That didn't work too well.  It provided some nice perspective that I hadn't had before.  Tim also gave a current state of the union on Fedora automation and items that are in progress, including build automation, build self-tests, and automated deployments.  Some of the items that need work are presentation of data and results, and keeping the builds fast.  More great perspective on the feedback loop and what he wants out of it: how long after a package is updated can a new compose be generated; how long after a compose is built until the tests are run; how long after the tests are run until the developer is notified of success or failure.  The QA team is also evaluating how to enable contributors to write their own automated tests.  Nonstop Fedora.  Tim covered quite a bit more on the why and how during his presentation.  Great presentation.

    Next up was "Modularity: Why, where we are, and how to get involved" by +Langdon White.  Langdon kicked off by covering some history dating back to the "Rings Proposal", starting from "JeOS", which would be highly curated, out to the outer rings, which are not so curated.  He provided some great analogies about how one size doesn't fit all, comparing the lifecycles of packages and how they don't align with other packages.  Then he moved into modules:

    • A module is a thing that's managed as a logical unit.
    • A module is a thing that promises an external, unchanging API
    • A module is a thing that may have many, unexposed binary artifacts to support the external API
    • A module may "contain" other modules and is referred to as a "module stack"

    The process: inputs -> activities -> outputs -> outcomes -> impact.

    We saw an example of a module input file which explained references, profiles, components, and filters.

    Progress thus far: an established Modularity WG, an implemented dnf plugin, an alpha version of the module build pipeline, the ability to coalesce modules for testing, and a kicked-off base-runtime.

    Tuesday, August 2, 2016

    Fedora Flock - 2016 - Day 1

    So this is my first +Fedora Project Flock conference. I arrived in Krakow yesterday from Austin, Texas.  The folks who put Flock together did a great job with this event.  I have never been to Krakow before, and they clearly communicated how you get around, which buses and trains to take, how to buy tickets, everything.  Kudos to that team.  I had a few reasons to come to Flock: I wanted to put some faces to names I have been working with over the years, I wanted to meet the members of the Fedora Cloud group that I have been participating in, and I wanted to attend technical sessions and see what's coming up in the distro.

    My schedule is listed here.  I'll blog each day that I'm here to share the experience.  Hopefully you will find it interesting enough to attend the next one if you didn't get a chance to come to this Flock event.  I'll give an overview of each session that I attend.  I know I won't capture all the details from each session, but it's a taste.  The sessions are recorded and will be posted to the Fedora YouTube channel.

    Day 1. 

    Introduction from Joe B. to thank the sponsors: Red Hat, Unix Stickers, SuSE, The Linux Foundation, and stickermule.  Thanks, sponsors!  Keep in mind, though, that Flock is a conference run and led by contributors, for contributors.  I can tell there was a ton of work done behind the scenes to make this event happen.

    Then the keynote by +Matthew Miller.  Matt covered some of the numbers that show Fedora is gaining steam in the cloud and developer space, among many others.  He also talked about a few of the major goals for 2016.  It's cool to see that the +Fedora Project has some big plans to continue moving forward in the cloud space.  Think items like Fedora Atomic, OpenShift and Flatpak.

    Friday, May 27, 2016

    OpenShift Origin on Fedora 24 on AWS - Wow.

    So, this all started because I was just doing a little Friday tinkering and wanted to see how easy it is to get OpenShift Origin installed on Fedora... on AWS.  Well, it turns out, it's really, really easy.  So easy, in fact, that I decided to write it down here and share it with you. This will be the first of a few blog posts about running OpenShift Origin on Fedora.  This post details how to get OpenShift Origin running on a single instance of Fedora 24. This is also a manual configuration.  In future blog posts, I'll talk about how to set up a highly available OpenShift Origin install on Fedora.  In addition, I'll talk about how to consume AWS resources like ELBs, IAM, S3, route53, ec2 instances, etc...  Just maybe, I'll go into how to automate the deployments with the AWS CLI.  Feel free to leave some comments on just how far you want to go here.  I promise, it will be fun.

     I learned quite a bit during this process, namely:

    • You can easily find and use Fedora images in the AWS community AMIs.
    • OpenShift Origin has been packaged for Fedora 24 - who doesn't like new?
    • It's easy to install the OpenShift Origin PaaS and get started (see the one-liner right after this list).
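
    On the packaging point above, getting the bits onto a Fedora 24 instance is a one-liner (package name as carried in the Fedora repos at the time):

    $ sudo dnf install -y origin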
    The goal was to get Origin running on AWS, launch an application, and hit that app from my browser.  There are no real prerequisites to get started here other than an AWS account with the proper permissions.  I do happen to have a DNS name managed by AWS route53, which helps a bit.  I also have some prior knowledge of how AWS works.

    Let's chat a bit about what I'm using, what I had set up before this, and what I had to do to meet my goal.  I am using:

    • Fedora AMI with the ID of ami-0a09e667 (Fedora-Cloud-Base-24-20160512.n.0.x86_64-us-east-1-HVM-standard-0). 
    • For my testing, I'm using an m4.2xlarge instance of that AMI.
    • I had an existing VPC that I launched the Fedora 24 instance into.  The only thing to note about it is that DNS hostnames are enabled on that VPC.
    • I have an existing subnet in that VPC that I launched this into.
    • I have an existing route table in that VPC with an internet gateway defined so my instance can get out.
    • I created a new security group on instance launch for testing this.

    I do need to prep AWS a bit before moving on.  I'll use the AWS CLI to do this.  I have an AWS CLI cheat sheet that may help if you have questions about querying, launching, or describing resources; have a look.  To move forward, I need to know what OpenShift Origin needs, and I found that the OpenShift Origin documentation is great.  Please have a look if you have any questions.  That's what I did: I went to the docs | installing | prerequisites and started there.  I'll just walk through the prerequisites here and share what I did.
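
    As a taste of that CLI-driven prep, here's roughly what opening up the security group looks like (a sketch; the group ID is a placeholder, and the ports come from the Origin prerequisites: 22 for ssh, 8443 for the web console and API, 80/443 for routed applications):

    $ aws ec2 authorize-security-group-ingress --group-id <sg-id> \
        --protocol tcp --port 22 --cidr 0.0.0.0/0
    $ aws ec2 authorize-security-group-ingress --group-id <sg-id> \
        --protocol tcp --port 8443 --cidr 0.0.0.0/0
    $ aws ec2 authorize-security-group-ingress --group-id <sg-id> \
        --protocol tcp --port 80 --cidr 0.0.0.0/0
    $ aws ec2 authorize-security-group-ingress --group-id <sg-id> \
        --protocol tcp --port 443 --cidr 0.0.0.0/0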

    Thursday, May 26, 2016

    Testing out AWS ssm

    I was poking around the AWS CLI and testing out different features and functionality.  Amazon's SSM caught my eye, and I decided to have a look at the remote execution functionality offered by this tool.  I'm consolidating the notes I found across different resources here, to do a simple test.  Here's a high-level overview of what it took me to get this configured and working properly:

    1. Create a role and policy and assign that to an EC2 instance at launch time. You can't assign it to a running instance. The policy I assigned to the role that I attached to the instance is called: AmazonEC2RoleforSSM

    2. Assign permissions to the user that will be executing the commands. The name of the policy is: AmazonSSMFullAccess

    Of course, for your environment, make sure you adhere to your security requirements.  There are better ways to restrict this.

    3. Deploy the instance and install the SSM agent.  You can install the agent either by passing user-data at launch or manually afterwards; it's a simple RPM package.

    4. Create a policy document.  Mine was the "check ip configuration" sample from the AWS docs linked below (the runCommand is the ifconfig call from that example):

    {
      "schemaVersion": "1.2",
      "description": "Check ip configuration of a Linux instance.",
      "parameters": {},
      "runtimeConfig": {
        "aws:runShellScript": {
          "properties": [
            {
              "id": "0.aws:runShellScript",
              "runCommand": ["ifconfig"]
            }
          ]
        }
      }
    }
    

    From the examples here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/create-ssm-doc.html
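
    Pulling the steps together, the CLI side looks roughly like this (a sketch; the AMI, key, instance profile, and instance ID are placeholders, and the document name is my choice):

    $ aws ec2 run-instances --image-id <ami-id> --instance-type t2.micro \
        --iam-instance-profile Name=<profile-with-AmazonEC2RoleforSSM> \
        --key-name <key-name>

    $ aws ssm create-document --name "check-linux-ip" \
        --content file://check-linux-ip.json

    $ aws ssm send-command --document-name "check-linux-ip" \
        --instance-ids <instance-id>

    $ aws ssm list-command-invocations --details

    The --details flag on list-command-invocations includes the command output, so you can confirm the ifconfig actually ran.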