Some Numbers

As I get into this performance thing with Mr. Kevin, I thought I'd share a few numbers from early builds and whatnot. You know, the sort of "should we even do this?" kind of stuff.

These numbers are for a 5-node build using the Vagrantfiles here:

Node           CPU   RAM
Controller 1   1     1.5GB
Controller 2   1     2GB
Compute 1      2     2GB
Cinder 1       1     1.5GB
Chef           1     1GB

The hardware involved is a late 2013 Retina MacBook Pro:
- 2.5 GHz Core i5
- 8GB RAM
- SSD

Note: this environment is similar to, but not the only, environment we will build out for the OpenStack Summit and the Performance book. The theory is that a change or two here can be tested for positive or negative effect much more quickly than rebuilding a full deployment.

Results

Below are the best results of three runs for each setup.

Some other things about this:
- I have no horse in the race for either platform or OS. I'm just running some vagrant-ups (timed as sketched after this list).
- This is done over three runs of each, mostly while not otherwise using the laptop. Basically, I did a /lot/ of reading during this time.
- These are pretty vanilla runs. That is, no caching and no performance hacks on Fusion or VirtualBox.
- VirtualBox and my MacBook don't get along under the load of this environment. After hours of lost time, many hard crashes, and such, I'm going with "n/a" on performance numbers for VirtualBox, at least for now.
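
For reference, each result below is just the output of time wrapped around a vagrant up, using the standard Vagrant provider names:

# VMware Fusion runs
time vagrant up --provider=vmware_fusion

# VirtualBox runs
time vagrant up --provider=virtualbox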

VMware Fusion and Ubuntu 12.04

real    63m25.441s
user    2m30.644s
sys 0m24.843s

VMware Fusion and CentOS 6.5

real    124m32.479s
user    3m5.562s
sys 0m28.180s

VirtualBox and Ubuntu 12.04

n/a

VirtualBox and CentOS 6.5

n/a

Notes on VirtualBox:
During testing of anything on VirtualBox in this list, I encountered no small number of hard crashes. For CentOS alone, about 15 hours were sunk. As I stated before, I don't have a horse in the race between one or the other, but damn was this a challenge. If you can make it run and don't mind helping, drop a comment with the output of time vagrant up --provider=virtualbox for this environment, along with the hardware you're running it on.

OSCON – Day 00 (or is it 1?)

Ok, so yesterday (two days ago now) started with "Day -1", but mayhaps I'm just an idiot, who knows. So, today's workshop? Well, it was pretty amazing. Lots of good questions, and the lab worked for most folks. We did have some questions come up about which CLI commands we were copying and pasting during the preso part. Those can be found here.

As a reminder, the materials (slides, and everything else for those playing along at home) can be found here.

Other things to note from the day:
- The Japanese Gardens in Portland are pretty amazing
- If you tell people you are Batman, they don’t think you’re too crazy. At least not in Portland.
- Converse makes some decent shoes.
- So. Much. Batman.

https://pbs.twimg.com/media/Bs__s5xCcAA7Nmr.jpg

OSCON – Day -1 Getting Started

All kinds of checked into the hotel room. Speaker badge acquired, and scoped out the presentation room for tomorrow. If you haven’t already and are going to be here, you can download all of the materials here.

If your hotel wifi sucks, we'll have all of this on USB keys.

https://pbs.twimg.com/media/Bs8YFXjCcAAecK5.jpg

https://pbs.twimg.com/media/Bs8Ui_YCEAA3JN7.jpg

https://pbs.twimg.com/media/Bs8MtJzCAAEMT2f.jpg

Other Day 1 Tidbits – Pivotal Yak Hoodie

The Yak hoodie instructions are teh broke. That said, @eglute helped me get it going. If the instructions on the paper don’t work for you, download the cf client here & try this:

$ cf target api.run.pivotal.io
FAILED
Incorrect Usage.
NAME:
   target - Set or view the targeted org or space
ALIAS:
   t
USAGE:
   cf target [-o ORG] [-s SPACE]
OPTIONS:
   -o   organization
   -s   space
$ cf login
API endpoint> api.run.pivotal.io

Also, after bundle install, cf push fails w/o a name.

$ cf push
FAILED
Error: App name is a required field
$ cf push lolpancakes
...snip...
App started
Showing health and status for app lolpancakes in org bunchc-org / space development as bunchc@gmail.com...
OK
requested state: started
instances: 1/1
usage: 1G x 1 instances
urls: lolpancakes.cfapps.io
     state     since                    cpu    memory        disk
#0   running   2014-07-19 05:41:16 PM   0.0%   22.8M of 1G   45.6M of 1G

Finally!

OSCON Lab Materials

tl;dr Download our OSCON lab materials here.

As a follow-up on my coming to OSCON, I thought it prudent to provide some info & downloads for the lab ahead of time.

Lab Materials

While we will have USB keys in the tutorial for everyone, we figure some of y’all might want to get started early. With that in mind, the lab materials can be downloaded here, but be aware, it’s about 4GB of stuff to download.

  • Slides – Both the PPT & PDF of the slides
  • openstackicehouse.ova – The vApp we will use in the lab
  • OpenStack_command_guide_reference.pdf – A quick reference for OpenStack CLI commands
  • Access_virtualbox_allinone.pdf – A guide for accessing the lab
  • cirros-0.3.1-x86_64-disk.img – Used in the labs
  • Osco Solutions/ – All of the labs we will be doing
  • Couch to OpenStack/ – An additional 12 hours of Getting Started with OpenStack Material
  • VirtualBox/ – Contains the VirtualBox installer for OSX, Linux, and Windows

Really, you can get the materials here.

Prerequisites

To be successful in the lab, there are a few things you will need. None of these are too complex or too deep, but having them will improve your experience overall.

  • A laptop with a minimum of 4GB of free RAM
  • VirtualBox or VMware Fusion/Workstation/Player installed
  • An SSH client. On Windows, PuTTY works well.

Some Random Statistics

Building the USB keys was an exercise in insanity. The setup looks kinda like this:
https://pbs.twimg.com/media/BstHHTaCMAACUTk.jpg

The fan was added after the first batch nearly melted the USB hub. The smell of burnt silicon was pretty intense.

  • Each key contains about 4GB of data.
  • We’re copying them 24 at a time and seeing:
    • 40 min to finish all 24 disks
    • 45MB/sec (Yes Megabytes) sustained transfer
    • 12,000 IOPS largely write

USB Key Duplication on OSX on the Cheap

Edit: As I got a bit deeper into the copies, a new method was needed.

Common

First, make an image of the USB disk in question. To do this, open Disk Utility, and then:

  1. Click File
  2. Click New
  3. Click “New Image From Folder…”
  4. Select your folder
  5. Wait

Next, find the image file in Finder and mount it, and note where it was mounted.
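
If you'd rather do this step from the terminal, a rough hdiutil equivalent looks like this (the folder and image paths are placeholders, not from my setup):

# Build a compressed disk image from the folder of materials
hdiutil create -srcfolder /path/to/usb_contents -format UDZO usbkey.dmg

# Mount it and note the /Volumes/... path it reports
hdiutil attach usbkey.dmg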

Methodology 1

This is a variant of the work here.

Now that you’ve got the image and it’s mounted, plug in the USB hub containing your keys and run the following from your terminal:

$ diskutil list
/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *251.0 GB   disk0
... snip
/dev/disk3
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *8.2 GB     disk3
   1:                 DOS_FAT_32 NO NAME                 8.2 GB     disk3s1

What you are looking for here is the first and last /dev/disk# that represent your USB keys. In my case this is 3 – 23. From there we start the copy:

# jot <count> <start> prints <count> integers starting at <start>;
# 21 starting at 3 covers /dev/disk3 through /dev/disk23 -- adjust to your range
for i in `jot 21 3`; do asr --noverify --erase --noprompt --source /Volumes/No\ Name --target /dev/disk${i}s1 & done

In the above, note that --source specifies the /Volumes/No\ Name\ ## path that represents where we mounted the image. The loop then kicks off an asr copy of the image onto each USB disk in the background.
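
Because each asr copy is kicked off as a background job, it can help to wait for them all and then spot-check the result; a small sketch:

# Wait for all of the backgrounded asr jobs to finish
wait

# Each key should now show the restored volume
diskutil list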

Methodology 2

This is a variant of the work here.

Now that you’ve got the image and it’s mounted, plug in the USB hub containing your keys and run the following from your terminal:

$ diskutil list
/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *251.0 GB   disk0
... snip
/dev/disk3
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *8.2 GB     disk3
   1:                 DOS_FAT_32 NO NAME                 8.2 GB     disk3s1

What you are looking for here is the first and last /dev/disk# that represent your USB keys. In my case this is 3 – 27.

First unmount the disks:

# Unmount /dev/disk3 through /dev/disk27 (jot 25 3 prints 3..27)
for i in `jot 25 3`; do diskutil unmountDisk /dev/disk${i}; done

Next, use Homebrew to install pv if you don't have it:

brew install pv

Finally start the copy:

sudo dd if=/dev/disk2 |pv| tee >(sudo dd of=/dev/disk3 bs=16m) >(sudo dd of=/dev/disk4 bs=16m) >(sudo dd of=/dev/disk5 bs=16m) >(sudo dd of=/dev/disk6 bs=16m) >(sudo dd of=/dev/disk7 bs=16m) >(sudo dd of=/dev/disk8 bs=16m) >(sudo dd of=/dev/disk9 bs=16m) >(sudo dd of=/dev/disk10 bs=16m) >(sudo dd of=/dev/disk11 bs=16m) >(sudo dd of=/dev/disk12 bs=16m) >(sudo dd of=/dev/disk13 bs=16m) >(sudo dd of=/dev/disk14 bs=16m) >(sudo dd of=/dev/disk15 bs=16m) >(sudo dd of=/dev/disk16 bs=16m) >(sudo dd of=/dev/disk17 bs=16m) >(sudo dd of=/dev/disk18 bs=16m) >(sudo dd of=/dev/disk19 bs=16m) >(sudo dd of=/dev/disk20 bs=16m) >(sudo dd of=/dev/disk21 bs=16m) >(sudo dd of=/dev/disk22 bs=16m) >(sudo dd of=/dev/disk23 bs=16m) >(sudo dd of=/dev/disk24 bs=16m) >(sudo dd of=/dev/disk25 bs=16m) >(sudo dd of=/dev/disk26 bs=16m) | sudo dd of=/dev/disk27 bs=16m

Ok, that is a single line. It is also terrible, terrible, terrible, but it works. Some notes:
- You need a >(sudo dd) section for each disk except the last one.
- You will also need to change the disk numbers to match your environment.
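
If editing that monster by hand gets old, the same pipeline can be generated with a small loop. This is just a sketch, assuming bash (for process substitution inside eval) and with SRC, FIRST, and LAST as placeholders you set for your own disk layout:

# Source image device and the range of target USB disks (adjust these)
SRC=/dev/disk2
FIRST=3
LAST=27

# Build a >(sudo dd ...) section for every disk except the last one
TARGETS=""
for i in `jot $((LAST - FIRST)) ${FIRST}`; do
  TARGETS="${TARGETS} >(sudo dd of=/dev/disk${i} bs=16m)"
done

# The last disk is the final dd in the pipeline
eval "sudo dd if=${SRC} | pv | tee ${TARGETS} | sudo dd of=/dev/disk${LAST} bs=16m"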

Hi-ho Hi-ho, Off to OSCON We Go!

For those that didn't get the message on the twitters, I will be at OSCON this year. Specifically, I will be helping Egle run a "Getting Started with OpenStack" tutorial.

The tutorial will begin with an overview of OpenStack and its different components. We will then provide participants with access to individual OpenStack instances and walk them through OpenStack's web interface, followed by a command-line tutorial.
The tutorial will cover the instance life cycle (creation, management, deletion), networking, user management, and how to use the different storage services available in OpenStack.

If you will be there, please drop by and say hello!

Installing a New Linux Kernel on Ubuntu 12.04

Whilst this may be common knowledge for those who have been working with Ubuntu for a while, it took me a few Google searches to figure out, so here is the process I have settled on.

sudo apt-get update
# List available versions
sudo apt-cache search linux-image-generic-lts
linux-image-generic-lts-quantal - Generic Linux kernel image
linux-image-generic-lts-raring - Generic Linux kernel image
linux-image-generic-lts-saucy - Generic Linux kernel image
linux-image-generic-lts-trusty - Generic Linux kernel image
# Install one
sudo apt-get install -y linux-image-generic-lts-trusty
sudo reboot

After a reboot, things should come up nice and happy.
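
For a quick sanity check that the new kernel actually took:

# Should now report the newer kernel version
uname -r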

Summer Reading List

I've normally got at least 3-4 books I'm reading at any given point. Sometimes they're even related to what I do. That said, I don't much like the echo chamber that becomes my own head, so I asked via Twitter this morning what folks are reading and came up with a few good reads:

Work-ish/Business-ish books:

That is to say, books in this list are related to things at work, be they vSphere or OpenStack books, DevOps, presentations, thinking, and so on. Basically, nothing involving dragons:

Leisure Reading

These are ‘more fun’ books.

Importing OpenStack Keystone Users & Tenants into OpenLDAP

If you've been following along, we installed OpenLDAP here; or perhaps you have one of your own going. Either way, the goal of this post is to pull information out of OpenStack Keystone for import into OpenLDAP, preparing us to transition the Keystone backend over to LDAP.

Getting started

To get started, we’ll assume a working LDAP and Keystone setup, and that you are logged into the server(s) hosting each.

Validating connections

Before we get too far into this, let’s validate we’re indeed able to get information from both services.

  • Keystone
# keystone user-list
+----------------------------------+------------+---------+--------------------
|                id                |    name    | enabled |       email
+----------------------------------+------------+---------+--------------------
| 1ccfc42014e04d4cb1e3818bef855ad0 |   admin    |   True  |   root@localhost
| f9404ed9fd864070a11e0b76671f99c2 | ceilometer |   True  |   heat@localhost
| f7c4d64657ab463cb38812a39b8f8a89 |   cinder   |   True  |  cinder@localhost
....
  • OpenLDAP
# slapcat
dn: dc=cook,dc=book
objectClass: top
objectClass: dcObject
objectClass: organization
...

Exporting from Keystone to LDIF

The LDIF format allows us to import easily back into LDAP. What follows is a script that runs some keystone commands, does some output cleanup, and then exports everything into an LDIF file:

#!/bin/bash
# Export Keystone roles, users, and tenants into an LDIF file for import into OpenLDAP
SUFFIX='dc=cook,dc=book'
LDIF='/tmp/cookbook.ldif'
echo -n > $LDIF
# Make our OUs
echo "dn: ou=Roles,$SUFFIX" >> $LDIF
echo "objectclass:organizationalunit" >> $LDIF
echo "ou: Roles" >> $LDIF
echo "description: generic groups branch" >> $LDIF
echo -e "\n" >> $LDIF
echo "dn: ou=Users,$SUFFIX" >> $LDIF
echo "objectclass:organizationalunit" >> $LDIF
echo "ou: Users" >> $LDIF
echo "description: generic groups branch" >> $LDIF
echo -e "\n" >> $LDIF
echo "dn: ou=Groups,$SUFFIX" >> $LDIF
echo "objectclass:organizationalunit" >> $LDIF
echo "ou: Groups" >> $LDIF
echo "description: generic groups branch" >> $LDIF
echo -e "\n" >> $LDIF
# Export Keystone roles as organizationalRole entries under ou=Roles
for line in `keystone role-list | awk '($4 != "name") && ($4 != "") {print $4}'`
do
    CN=$line
    echo "dn: cn=$CN,ou=Roles,$SUFFIX" >> $LDIF
    echo "objectClass: organizationalRole" >> $LDIF
    echo "cn: $CN" >> $LDIF
    echo -e "\n" >> $LDIF
done
# Export Keystone users as inetOrgPerson entries under ou=Users
for line in `keystone user-list | awk '($4 != "name") && ($4 != "") {print $4}'`
do
    CN=$line
    echo "dn: cn=$CN,ou=Users,$SUFFIX" >> $LDIF
    echo "objectClass: inetOrgPerson" >> $LDIF
    echo "cn: $CN" >> $LDIF
    echo "sn: cookbook" >> $LDIF
    echo -e "\n" >> $LDIF
done
# Export Keystone tenants as groupOfNames entries under ou=Groups
for line in `keystone tenant-list | awk '($4 != "name") && ($4 != "") {print $4}'`
do
    CN=$line
    echo "dn: cn=$CN,ou=Groups,$SUFFIX" >> $LDIF
    echo "objectClass: groupOfNames" >> $LDIF
    echo "member: cn=admin,$SUFFIX" >> $LDIF
    echo "cn: $CN" >> $LDIF
    echo -e "\n" >> $LDIF
done

This assumes you have connectivity between where you are running the script and Keystone.
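
To run it, save the script to a file and source your admin credentials first so the keystone CLI calls inside it can authenticate. A sketch; the openrc and keystone-to-ldif.sh names are just placeholders:

# Load admin credentials for the keystone CLI
source openrc

# Run the export, then eyeball the result
bash keystone-to-ldif.sh
less /tmp/cookbook.ldif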

This should produce a file that looks like this:

dn: cn=cookbook,ou=Groups,dc=cook,dc=book
objectClass: groupOfNames
cn: cookbook

dn: cn=service,ou=Groups,dc=cook,dc=book
objectClass: groupOfNames
cn: service

dn: cn=Member,ou=Roles,dc=cook,dc=book
objectClass: organizationalRole
cn: Member

dn: cn=_member_,ou=Roles,dc=cook,dc=book
objectClass: organizationalRole
You should now be able to use the resulting LDIF file to import these entries into LDAP.
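
A minimal import sketch, assuming the cn=admin rootdn and the openstack password from the OpenLDAP install post; adjust both for your directory:

# Import the exported Keystone entries into OpenLDAP
ldapadd -x -D "cn=admin,dc=cook,dc=book" -w openstack -f /tmp/cookbook.ldif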

Resources

Quiet or Unattended Installing OpenLDAP on Ubuntu 14.04

After much looking about, I couldn’t actually find a decent writeup on this. At least not all in one place. So after having beat my face against it for a little while (repeatedly), I came up with the below.

Installing OpenLDAP

You’ll need an Ubuntu 14.04 box for this (tho, 12.04 should also work). Once logged in, you can run the following commands to make things work:

export DEBIAN_FRONTEND=noninteractive

# Pre-seed answers to the slapd password prompts so the install never asks
echo -e " \
slapd    slapd/internal/generated_adminpw    password   openstack
slapd    slapd/password2    password    openstack
slapd    slapd/internal/adminpw    password openstack
slapd    slapd/password1    password    openstack
" | sudo debconf-set-selections
sudo apt-get install -y slapd ldap-utils

After this goes through the motions, you can check that things are working by running:

sudo slapcat
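
Optionally, verify that the admin bind works. The base DN below is an assumption: the noninteractive install derives it from your host's domain name (dc=nodomain when none is set), so adjust it to match whatever slapcat showed:

# Simple bind as the preseeded admin user
ldapsearch -x -D "cn=admin,dc=nodomain" -w openstack -b "dc=nodomain"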

Resources