#vBrownBag TechTalk Schedule at OpenStack Summit Paris

We have only one month to go until the OpenStack Summit in Paris.

Below is the schedule of TechTalks at the summit; as you will see, the three days of TechTalks are absolutely packed. We will be in room 126/127 of the Palais des Congrès for the three days of the main conference. There will be people presenting all the time, so drop by and watch live.

We still have space for more sponsors; contact alastair@vbrownbag.com for details.


TechTalks at OpenStack Summit Paris

We had an awesome time at the Atlanta summit, met a heap of new people, and made a pile of great videos. If you haven't already checked them out, this page has the session titles with links to each video, or the whole YouTube playlist is here.

Sign up for your tech talk here.


Since Atlanta was so great, we are going to OpenStack Summit Paris to do it all again, only bigger this time. We will be making TechTalk videos for all three days of the main conference. Now that the OpenStack community has seen what vBrownBag TechTalks are, we hope that many more sessions will be proposed. At Atlanta we made 45 community videos; in Paris I'd like us to make 70. If you have something useful to say to the OpenStack community, something new or different, or a project or implementation to discuss, then propose a TechTalk. Simply fill in this form and I will be in touch close to the conference to organize with you. If you have any questions you can reach me at alastair@vBrownBag.com.

Naturally we are looking for sponsors to support these TechTalks. If your company would like to support this community activity, please email me at alastair@vBrownBag.com.

vBrownBag is going to Paris, Yay.

Enabling Nested KVM In Devstack on Fusion

There are times (like when you're figuring out a post like this) when you will need to fire up devstack in a VM and make sure KVM acceleration works inside that VM. Here's how to do that:

  1. Add ‘vhv.enable = "TRUE"’ to the VM’s VMX file. I do this in Vagrant like so:
# VMware Fusion
config.vm.provider "vmware_fusion" do |vmware, override|
    override.vm.box = "trusty64_fusion"
    ... snip ...
    vmware.vmx["memsize"] = "6144"
    vmware.vmx["numvcpus"] = "2"
    vmware.vmx["vhv.enable"] = "TRUE"
end
  2. Install KVM into the VM:

sudo apt-get update && sudo apt-get install -y kvm libvirt-bin virtinst

  3. Run kvm-ok (if it complains, there is a quick sanity check sketched after this list):
~# kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
  4. Finally, run ./stack.sh
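The sanity check mentioned in step 3 is simply to look for the hardware virtualization flag inside the guest (vmx on Intel, svm on AMD); a non-zero count means nested virtualization made it through to the VM:

# count hardware virtualization flags visible inside the guest
egrep -c '(vmx|svm)' /proc/cpuinfo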

Using top to Find Memory Overhead for qemu/KVM

Having jumped from the VMware world, where the memory overheads and performance characteristics of ESXi under varied workloads are well known, I was surprised to learn that the same was not always true for KVM. In this post we talk a bit about KVM/qemu and how its memory overhead breaks down.

Understanding Memory in KVM

The gist of KVM memory management, which you can read about in detail here, is that each guest is more or less a Linux process. That is, at spawn time, if the guest has 1GB memory assigned, KVM will do a malloc for 1GB and then let the system manage it as needed.
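You can see this process-level view for yourself. Assuming the emulator binary on your host is named qemu-system-x86_64 (the name can vary by distro), something like the following shows a guest's qemu process and how the kernel accounts for its memory:

# list qemu processes, then dump the kernel's accounting for the oldest one;
# VmSize is total virtual memory, VmRSS is what is actually resident in RAM
pgrep -a qemu-system-x86_64
grep -E 'VmSize|VmRSS' /proc/$(pgrep -o qemu-system-x86_64)/status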

This enables interesting things like KSM, or Kernel Same-page Merging (sometimes called Kernel Shared Memory), which on the surface works much like Transparent Page Sharing (TPS) in the ESXi world. That is, the Linux kernel will periodically check its page tables for matching pages and merge them.
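On hosts where KSM is running, you can watch it work through sysfs; the counters are in pages, so multiply by 4KB to get bytes:

cat /sys/kernel/mm/ksm/run            # 1 means the ksmd scanner is active
cat /sys/kernel/mm/ksm/pages_sharing  # process pages currently backed by a shared page
cat /sys/kernel/mm/ksm/pages_shared   # unique shared pages those map to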

Still with me?

A quick TL;DR for understanding KVM memory: KVM guests and their memory are managed as Linux processes. Thus, they inherit and share memory from the parent qemu process, and KSM in turn reduces memory duplication.

Finding the Per VM Overhead

So, knowing that memory is inherited from the parent qemu process and that KSM will in turn ‘dedupe’ memory pages, how do we find the per-VM overhead? The answer isn’t exactly straightforward: there is no esxtop for KVM that will tell you this outright. That said, with top (or some scripting) you can get to a usable overhead number fairly simply.

Our Environment

Our environment is a nested KVM setup on Ubuntu 14.04 with 6GB of RAM in total. From there we boot two Fedora 20 instances with 2GB of RAM assigned to each:

$ nova flavor-list
+-----+-----------+-----------+------+
| ID  | Name      | Memory_MB | Disk |
+-----+-----------+-----------+------+
| 1   | m1.tiny   | 512       | 1    |
| 2   | m1.small  | 2048      | 20   |

and:

$ nova show 6e551cb0-9ace-4084-afb3-3f46274f3717
+--------------------------------------+-----------
| Property                             | Value
+--------------------------------------+-----------
...
| created                              | 2014-07-26T02:54:40Z
| flavor                               | m1.small (2)

Finding the Overhead With top

top is a great tool, and fairly easy to use. For what we’re looking for, sort by memory (press M in top) to get an idea of what’s going on:

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
18239 libvirt+  20   0 3487916 289892   9448 S   0.7  4.7   0:51.00 qemu-system-x86
15453 libvirt+  20   0 3491580 287156   9448 S   0.7  4.7   1:01.33 qemu-system-x86

There are a few interesting columns here:
- VIRT – total virtual memory associated with the process, i.e. everything it has requested or mapped. In this case about 3.3GB.
- RES – resident memory size, or the physical memory actually backing the process right now. In this case about 283MB.
- SHR – the portion of RES shared with other processes. In this case about 9.5MB.

So, looking at these, it seems that for a 2GB VM, qemu asks for roughly 1.3GB of virtual memory on top of the guest's assigned 2GB, while only about 283MB of the total is resident in physical RAM at this point, roughly 9.5MB of which is shared.
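If you would rather script this than eyeball top, ps can pull the same resident numbers per qemu process (again assuming the binary is named qemu-system-x86_64 on your host):

# ps reports RSS in KiB; print it per qemu process in MiB
ps -C qemu-system-x86_64 -o pid=,rss= | awk '{ printf "PID %s: %.0f MiB resident\n", $1, $2/1024 }'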

Summary

In this post, we explained a bit about how memory works in KVM. We then booted some instances and finally used top, sorted by memory, to find the memory used by our VMs.

Top is by no means the only way to do this. However, it provides a wealth of information and should be one of the first tools in your toolkit.

Some Numbers

As I get into this performance thing with Mr. Kevin, I thought I’d share a few numbers from early builds and whatnot. You know, the sort of “should we do this?” kind of stuff.

These numbers are for a 5 node build using the Vagrantfiles here:

Node          CPU   RAM
Controller 1  1     1.5GB
Controller 2  1     2GB
Compute 1     2     2GB
Cinder 1      1     .5GB
Chef          1     1GB

The hardware involved is a late 2013 Retina MacBook Pro:
- 2.5 GHz Core i5
- 8GB RAM
- SSD

Note: this is similar to, but not the only, environment we will build out for the OpenStack Summit and the Performance book. The theory is that a change or two here can be tested for positive or negative effect much more quickly than by rebuilding a full deployment.

Results

Below are the best results of 3 runs for each setup.

Some other things about this:
- I have no horse in the race for either platform or OS. I’m just running some vagrant-ups.
- This is done over 3 runs of each (timed as in the sketch after this list), mostly while not otherwise using the laptop. Basically, I did a /lot/ of reading during this time.
- These are pretty vanilla runs. That is, no caching and no performance hacks on Fusion or VirtualBox.
- VirtualBox and my MacBook don’t get along under the load of this environment. After hours of lost time, many hard crashes, and such, I’m going with “n/a” on the performance numbers for VirtualBox, at least for now.
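For reference, each timed run looked roughly like this sketch; the provider flag changes per platform, and the destroy keeps the runs comparable:

# tear down the previous environment, then time a fresh build
vagrant destroy -f
time vagrant up --provider=vmware_fusion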

VMware Fusion and Ubuntu 12.04

real    63m25.441s
user    2m30.644s
sys 0m24.843s

VMware Fusion and CentOS 6.5

real    124m32.479s
user    3m5.562s
sys 0m28.180s

VirtualBox and Ubuntu 12.04

n/a

VirtualBox and CentOS 6.5

n/a

Notes on VirtualBox:
During the testing of anything on VirtualBox in this list, I encountered no small number of hard crashes. For CentOS alone, about 15 hours were sunk. As I stated before, I don’t have a horse in the race between one or the other, but damn was this a challenge. If you can make it run and don’t mind helping, drop a comment with the output of time vagrant up --provider=virtualbox for this environment, along with the hardware you ran it on.

OSCON – Day 00 (or is it 1?)

Ok, so yesterday (two days ago now) started with “Day -1”, but mayhaps I’m just an idiot, who knows. So, today’s workshop? Well, it was pretty amazing. Lots of good questions, and the lab worked for most folks. We did have some questions come up about which CLI commands we were copying and pasting during the preso part. Those can be found here.

As a reminder, the materials (slides, and everything else for those playing along at home) can be found here.

Other things to note from the day:
- The Japanese Gardens in Portland are pretty amazing
- If you tell people you are Batman, they don’t think you’re too crazy. At least not in Portland.
- Converse makes some decent shoes.
- So. Much. Batman.

https://pbs.twimg.com/media/Bs__s5xCcAA7Nmr.jpg

OSCON – Day -1 Getting Started

All kinds of checked into the hotel room. Speaker badge acquired, and scoped out the presentation room for tomorrow. If you haven’t already and are going to be here, you can download all of the materials here.

If your hotel wifi sucks, we’ll have all of this on USB keys.

https://pbs.twimg.com/media/Bs8YFXjCcAAecK5.jpg

https://pbs.twimg.com/media/Bs8Ui_YCEAA3JN7.jpg

https://pbs.twimg.com/media/Bs8MtJzCAAEMT2f.jpg

Other Day 1 Tidbits – Pivotal Yak Hoodie

The Yak hoodie instructions are teh broke. That said, @eglute helped me get it going. If the instructions on the paper don’t work for you, download the cf client here & try this:

$ cf target api.run.pivotal.io
FAILED
Incorrect Usage.
NAME:
   target - Set or view the targeted org or space
ALIAS:
   t
USAGE:
   cf target [-o ORG] [-s SPACE]
OPTIONS:
   -o   organization
   -s   space
$ cf login
API endpoint> api.run.pivotal.io

Also, after bundle install, cf push fails w/o a name.

$ cf push
FAILED
Error: App name is a required field
$ cf push lolpancakes
...snip...
App started
Showing health and status for app lolpancakes in org bunchc-org / space development as bunchc@gmail.com...
OK
requested state: started
instances: 1/1
usage: 1G x 1 instances
urls: lolpancakes.cfapps.io
     state     since                    cpu    memory        disk
#0   running   2014-07-19 05:41:16 PM   0.0%   22.8M of 1G   45.6M of 1G

Finally!
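If you want to skip passing the name next time, cf will also pick it up from a manifest.yml in the app directory; a minimal sketch (the name and memory values here are just examples):

# write a minimal manifest so a plain 'cf push' knows the app name
cat > manifest.yml <<'EOF'
---
applications:
- name: lolpancakes
  memory: 1G
EOF
cf push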

OSCON Lab Materials

tl;dr Download our OSCON lab materials here.

As a follow-up on my coming to OSCON, I thought it prudent to provide some info & downloads for the lab ahead of time.

Lab Materials

While we will have USB keys in the tutorial for everyone, we figure some of y’all might want to get started early. With that in mind, the lab materials can be downloaded here, but be aware, it’s about 4GB of stuff to download.

  • Slides – Both the PPT & PDF of the slides
  • openstackicehouse.ova – The vApp we will use in the lab
  • OpenStack_command_guide_reference.pdf – A quick reference for OpenStack CLI commands
  • Access_virtualbox_allinone.pdf – A guide for accessing the lab
  • cirros-0.3.1-x86_64-disk.img – Used in the labs
  • Osco Solutions/ – All of the labs we will be doing
  • Couch to OpenStack/ – An additional 12 hours of Getting Started with OpenStack Material
  • VirtualBox/ – Contains the VirtualBox installer for OSX, Linux, and Windows

Really, you can get the materials here.

Prerequisites

To be successful in the lab, there are a few things you will need. None of these are too complex or too deep, but having them will improve your experience overall.

  • A laptop with a minimum of 4GB of free RAM
  • VirtualBox or VMware Fusion/Workstation/Player installed
  • An SSH client. On Windows, PuTTY works well.

Some Random Statistics

Building the USB keys was an exercise in insanity. The setup looks kinda like this:
https://pbs.twimg.com/media/BstHHTaCMAACUTk.jpg

The fan was added after the first batch nearly melted the USB hub. The smell of burnt silicon was pretty intense.

  • Each key contains about 4GB of data.
  • We’re copying them 24 at a time and seeing:
    • 40 min to finish all 24 disks
    • 45MB/sec (Yes Megabytes) sustained transfer
    • 12,000 IOPS largely write

USB Key Duplication on OSX on the Cheap

Edit: As I got a bit deeper into the copies, a new method was needed.

Common

First, make an image of the usb disk in question. To do this, open Disk Utility, and then:

  1. Click File
  2. Click New
  3. Click “New Image From Folder…”
  4. Select your folder
  5. Wait

Next, find the image file in Finder and mount it, then note where it was mounted.
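If you would rather stay in the terminal for this part, hdiutil can do the same image-and-mount dance; a sketch, with a source folder name you will need to swap for your own:

# build a compressed image from the source folder, then attach (mount) it
hdiutil create -srcfolder ~/oscon-usb -format UDZO oscon-usb.dmg
hdiutil attach oscon-usb.dmg   # note the /Volumes/... path it reports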

Methodology 1

This is a variant of the work here.

Now that you’ve got the image and it’s mounted, plug in the USB hub containing your keys and run the following from your terminal:

$ diskutil list
/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *251.0 GB   disk0
... snip
/dev/disk3
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *8.2 GB     disk3
   1:                 DOS_FAT_32 NO NAME                 8.2 GB     disk3s1

What you are looking for here is the first and last /dev/disk# that represent your USB keys. In my case this is disk3 through disk23, which is 21 disks. From there we start the copy (jot takes a count and a starting number, so adjust both to match your range):

for i in `jot 21 3`; do asr --noverify --erase --noprompt --source /Volumes/No\ Name --target /dev/disk${i}s1 & done

In the above, note that --source points at the /Volumes/No\ Name path where we mounted the image. The loop then runs over each USB disk, copying the data from the image onto it.
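One small refinement, since those asr jobs run in the background: tacking a wait onto the end means the shell only comes back once every key has finished (same loop as above, just sketched with a trailing wait):

for i in `jot 21 3`; do
  asr --noverify --erase --noprompt --source /Volumes/No\ Name --target /dev/disk${i}s1 &
done
wait   # block until all background asr copies complete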

Methodology 2

This is a variant of the work here.

Now that you’ve got the image and it’s mounted, plug in the USB hub containing your keys and run the following from your terminal:

$ diskutil list
/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *251.0 GB   disk0
... snip
/dev/disk3
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *8.2 GB     disk3
   1:                 DOS_FAT_32 NO NAME                 8.2 GB     disk3s1

What you are looking for here is the first and last /dev/disk# that represent your USB keys. In my case this is 3 – 27.

First unmount the disks:

for i in `jot 25 3`; do diskutil unmountDisk /dev/disk${i}; done

Next, use Homebrew to install pv if you don’t have it:

brew install pv

Finally start the copy:

sudo dd if=/dev/disk2 |pv| tee >(sudo dd of=/dev/disk3 bs=16m) >(sudo dd of=/dev/disk4 bs=16m) >(sudo dd of=/dev/disk5 bs=16m) >(sudo dd of=/dev/disk6 bs=16m) >(sudo dd of=/dev/disk7 bs=16m) >(sudo dd of=/dev/disk8 bs=16m) >(sudo dd of=/dev/disk9 bs=16m) >(sudo dd of=/dev/disk10 bs=16m) >(sudo dd of=/dev/disk11 bs=16m) >(sudo dd of=/dev/disk12 bs=16m) >(sudo dd of=/dev/disk13 bs=16m) >(sudo dd of=/dev/disk14 bs=16m) >(sudo dd of=/dev/disk15 bs=16m) >(sudo dd of=/dev/disk16 bs=16m) >(sudo dd of=/dev/disk17 bs=16m) >(sudo dd of=/dev/disk18 bs=16m) >(sudo dd of=/dev/disk19 bs=16m) >(sudo dd of=/dev/disk20 bs=16m) >(sudo dd of=/dev/disk21 bs=16m) >(sudo dd of=/dev/disk22 bs=16m) >(sudo dd of=/dev/disk23 bs=16m) >(sudo dd of=/dev/disk24 bs=16m) >(sudo dd of=/dev/disk25 bs=16m) >(sudo dd of=/dev/disk26 bs=16m) | sudo dd of=/dev/disk27 bs=16m

Ok, that is a single line. It is also terrible, terrible, terrible, but it works. Some notes: you need a >(sudo dd) section for every target disk except the last one, which hangs off the final pipe, and you will need to change the disk numbers to match your environment.
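Rather than hand-typing that monster, the pipeline can also be generated; a sketch in bash, assuming disk2 is the source and disk3 through disk27 are the targets as above:

# build a >(dd ...) redirection for every target except the last, then eval the pipeline
targets=""
for i in `jot 24 3`; do targets="$targets >(sudo dd of=/dev/disk${i} bs=16m)"; done
eval "sudo dd if=/dev/disk2 | pv | tee $targets | sudo dd of=/dev/disk27 bs=16m"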