vBrownBag TechTalks at Vancouver gather steam

With only 47 days remaining until the OpenStack Summit in Vancouver we are getting pretty excited.

We have 70 TechTalks proposed, so we are well on our way to making over 90 videos at this summit. There is still time to sign up to present your own TechTalk using this form.

It looks like we will have more TechTalks than we can fit in the schedule. We accept TechTalks in the order they are proposed, so get yours in early to avoid disappointment.

I’d like to welcome our first sponsor: HP is returning to support these community presentations. It is sponsors like HP that make the TechTalks possible, so thank you, HP.


We do have a couple of places available for sponsors. If your organization would like to become a sponsor of the vBrownBag TechTalks, then contact me at alastair@vbrownbag.com for details.

#vBrownBag TechTalks in Vancouver

It has only been a couple of months since we had an amazing week at the Paris OpenStack summit. Now it is time to look forward to the Vancouver summit in May. The vBrownBag crew are delighted to be returning to make TechTalk videos at our third summit.

This year will have a focus on the women involved in OpenStack: we plan to fill Tuesday’s schedule with women presenters. If you are a woman of OpenStack, then consider presenting a TechTalk.

If you already know what a TechTalk is and want to register then go ahead and use this form.

TechTalks are ten-minute presentations about some aspect of technology, delivered at a major conference. They allow a focused look at a specific project, feature, or technique. Presenters are usually the engineers or developers who built what they present, so they have the deep knowledge without the marketing. All of the presentations are recorded and posted to YouTube, and we often live stream from the conference as well. Since TechTalks are short we can pack a lot into a day: at the Paris summit we recorded 70 TechTalks, and with an extra day in Vancouver we hope to make 90 videos.

To get an idea of what TechTalks are you can find all the Paris videos in this YouTube playlist. The Atlanta videos are in this playlist.

As always, we can only make these TechTalks with the help of our sponsors and the support of the OpenStack Summit organizers. If your organization would like to become a sponsor of the vBrownBag TechTalks, then contact me at alastair@vbrownbag.com for details. Sponsor logos are applied to every video, and sponsors are guaranteed two TechTalk presentation slots.

#vBrownBag TechTalks at OpenStack Summit Paris

We had an amazing time in Paris last week. Three days of packed TechTalk schedule meant that the crew didn’t get to see a lot of the Summit. Happily there is a fair amount of coverage available online.


We produced a total of 72 videos at the summit, our highest number for any conference where we’ve made TechTalks. Naturally getting the videos onto YouTube was a priority, so people not in Paris could see what we saw. You can find all of the TechTalks in this playlist and the schedule page now has links to every video. We could not have made these TechTalks without the support of our wonderful sponsors, Cisco and Brocade. Thanks for helping us and the community.

I feel very privileged to have been part of this event. Not only did we have a huge number of truly excellent presentations, but we got to be tourists in Paris for a while. The centre of Paris is amazing, with beautiful architecture and great food; I highly recommend visiting.

We are looking forward to another awesome summit in Vancouver with lots of TechTalks next year, hope to see you there.

#vBrownBag TechTalk Schedule at OpenStack Summit Paris

We have only one month to go until the OpenStack Summit in Paris.

Below is the schedule of TechTalks at the summit; as you will see, the three days of TechTalks are absolutely packed. We will be in room 126/127 of the Palais des Congrès for the three days of the Main Conference. There will be people presenting all the time, so drop by and watch live.

We do still have spaces for more sponsors, contact alastair@vbrownbag.com for details.


TechTalks at OpenStack Summit Paris

We had an awesome time at the Atlanta summit, met a heap of new people and made a pile of great videos. If you haven’t already checked them out, this page has session titles with links to the videos, or the whole YouTube playlist is here.

Sign up for your tech talk here.


Since Atlanta was so great, we are going to go to OpenStack Summit Paris and do it all again, only this time more. We will be making TechTalk videos for the three days of the main conference. Now that the OpenStack community has seen what vBrownBag TechTalks are, we hope that way more sessions will be proposed. At Atlanta we made 45 community videos; in Paris I’d like us to make 70. If you have something useful to say to the OpenStack community, something new or different, or a project or implementation to discuss, then propose a TechTalk. Simply fill in this form and I will be in touch close to the conference to organize with you. If you have any questions you can reach me at alastair@vBrownBag.com

Naturally, we are looking for sponsors to support these TechTalks. If your company would like to support this community activity, please email me at alastair@vBrownBag.com.

vBrownBag is going to Paris, Yay.

Enabling Nested KVM In Devstack on Fusion

There are times (like when you’re figuring out a post like this) that you will need to fire up devstack in a VM, and in said VM ensure KVM is running. Here’s how to do that:

  1. Add ‘vhv.enable = "TRUE"’ to the VMX file. I do this in Vagrant like so:

# VMware Fusion
config.vm.provider "vmware_fusion" do |vmware, override|
    override.vm.box = "trusty64_fusion"
    ... snip ...
    vmware.vmx["memsize"] = "6144"
    vmware.vmx["numvcpus"] = "2"
    vmware.vmx["vhv.enable"] = "TRUE"
end
  2. Install KVM into the VM:

sudo apt-get update && sudo apt-get install -y kvm libvirt-bin virtinst

  3. Run kvm-ok:
~# kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
  4. Finally, run ./stack.sh
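The checks in steps 2 and 3 can also be scripted. Here's a minimal Python sketch; the paths are parameters (so the logic can be exercised against fixtures), with defaults assuming an Intel host where /dev/kvm and the kvm_intel nested parameter are the standard locations:

```python
import os

def kvm_ready(dev_path="/dev/kvm",
              nested_path="/sys/module/kvm_intel/parameters/nested"):
    """Rough pre-flight check before running stack.sh inside a nested VM.

    Returns (kvm_present, nested_enabled). Paths are arguments so the
    check can be pointed at test fixtures; defaults assume Intel.
    """
    kvm_present = os.path.exists(dev_path)
    nested_enabled = False
    if os.path.exists(nested_path):
        with open(nested_path) as f:
            # the kernel reports 'Y' (or '1' on older kernels) when
            # nested virtualization is enabled
            nested_enabled = f.read().strip() in ("Y", "y", "1")
    return kvm_present, nested_enabled
```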

Using top to Find Memory Overhead for qemu/KVM

Having jumped from the VMware world, where the memory overheads and performance characteristics of ESXi and varied workloads are well known, I was surprised to learn that such was not always true in the case of KVM. In this post we talk a bit about KVM/qemu and how its memory overhead breaks down.

Understanding Memory in KVM

The gist of KVM memory management, which you can read about in detail here, is that each guest is more or less a Linux process. That is, at spawn time, if the guest has 1GB of memory assigned, KVM will do a malloc for 1GB and then let the system manage it as needed.

This enables interesting things like KSM, or Kernel Samepage Merging, which on the surface works much like Transparent Page Sharing (TPS) in the ESXi world. That is, the Linux kernel will periodically check its page tables for matching pages.

Still with me?

A quick TL;DR for understanding KVM memory: KVM guests and their memory are managed as Linux processes. Thus, they inherit and share memory from the parent qemu process, and KSM will in turn reduce memory duplication.
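To put a number on what KSM is actually saving, the counters the kernel exposes under /sys/kernel/mm/ksm/ (pages_sharing and pages_shared) can be combined. A rough sketch, assuming the default 4KiB page size:

```python
def ksm_savings_mb(pages_sharing, pages_shared, page_size=4096):
    """Estimate how much memory KSM has deduplicated, in MB.

    pages_sharing: guest pages currently backed by a shared page
    pages_shared:  distinct shared pages actually kept in memory
    (Both values come from /sys/kernel/mm/ksm/ on a live host.)
    """
    saved_pages = pages_sharing - pages_shared
    return saved_pages * page_size / (1024.0 * 1024.0)
```

For example, 100,000 pages_sharing backed by 20,000 pages_shared works out to roughly 312MB of physical memory saved.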

Finding the Per VM Overhead

So, knowing that memory is inherited from the parent qemu process and that KSM will in turn ‘dedupe’ memory pages, how do we find the per VM overhead? The answer to this isn’t exactly straightforward. That is, there is no esxtop for KVM that will tell you this right out. That said, with top (or some scripting) you can get from A to overhead fairly simply.

Our Environment

Our environment is a nested KVM setup on Ubuntu 14.04 with 6GB of total RAM. From there we boot two Fedora 20 instances with 2GB of RAM assigned to each:

$ nova flavor-list
| ID  | Name      | Memory_MB | Disk |
| 1   | m1.tiny   | 512       | 1    |
| 2   | m1.small  | 2048      | 20   |


$ nova show 6e551cb0-9ace-4084-afb3-3f46274f3717
| Property                             | Value
| created                              | 2014-07-26T02:54:40Z
| flavor                               | m1.small (2)

Finding the Overhead With top

top is a great tool, and fairly easy to use. For what we’re looking for, we’re going to sort on memory “M” to get an idea of what’s going on:

18239 libvirt+  20   0 3487916 289892   9448 S   0.7  4.7   0:51.00 qemu-system-x86
15453 libvirt+  20   0 3491580 287156   9448 S   0.7  4.7   1:01.33 qemu-system-x86

There are a few interesting columns here:
- VIRT = total of all virtual memory associated with the process, that is, the address space qemu has requested. In this case about 3.3GB.
- RES = resident memory size, or the physical memory currently backing the process. In this case about 283MB.
- SHR = memory shared with other processes. In this case about 9.2MB.

So, looking at these for a 2GB VM, qemu has reserved about 3.4GB of virtual address space, roughly 1.3GB more than the guest’s assigned 2GB. Of that, only about 283MB is resident so far, and about 9.2MB is shared.
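Since top reports VIRT/RES/SHR in KiB by default, the arithmetic above is easy to script. A sketch that parses a top line like the qemu lines shown above (column order assumed to match that output):

```python
def parse_top_line(line):
    """Parse one `top` process line into memory figures in MB.

    Assumes top's default column order (PID, USER, PR, NI, VIRT, RES,
    SHR, ...) and that VIRT/RES/SHR are reported in KiB.
    """
    f = line.split()
    to_mb = lambda s: int(s) / 1024.0
    return {"pid": int(f[0]),
            "virt_mb": to_mb(f[4]),
            "res_mb": to_mb(f[5]),
            "shr_mb": to_mb(f[6])}

def virtual_overhead_mb(line, guest_mb):
    """Virtual address space qemu reserves beyond the guest's assigned RAM."""
    return parse_top_line(line)["virt_mb"] - guest_mb
```

For the first qemu line above (VIRT 3487916 KiB) and a 2048MB guest, this gives roughly 1358MB of virtual-address overhead.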


In this post, we explained a bit about how memory works in KVM. We then booted some instances, and finally used top, sorted by memory (“M”), to find the memory used by our VMs.

top is by no means the only way to do this. However, it provides a wealth of information and should be one of the first tools in your toolkit.


Some Numbers

As I get into this performance thing with Mr Kevin, I thought I’d share a few numbers from early builds and whatnot. You know, the sort of “should we do this” kind of stuff.

These numbers are for a 5 node build using the Vagrantfiles here:

- Controller 1: 1 vCPU, 1.5GB
- Controller 2: 1 vCPU, 2GB
- Compute 1: 2 vCPUs, 2GB
- Cinder 1: 1 vCPU, .5GB

The hardware involved is a late 2013 Retina MacBook Pro:
- 2.5 GHz Core i5
- 8GB RAM

Note, this environment will be similar, but not the only environment we build out for the OpenStack Summit and the Performance book. The theory being, a change or two here can be more quickly tested for positive or negative effect than rebuilding a full deployment.


Below are the best results of 3 runs for each setup.

Some other things about this:
- I have no horse in the race for either platform or OS. Just running some vagrant-ups.
- This is done over 3 runs of each. Mostly not using the laptop. Basically, I did a /lot/ of reading during this time.
- These are pretty vanilla. That is, no caching, no performance hacks on Fusion or vBox.
- VirtualBox and my MacBook don’t get along under the load of this environment. After hours of lost time, many hard crashes, and such, I’m going with “n/a” on performance numbers for vBox. At least for now.

VMware Fusion and Ubuntu 12.04

real    63m25.441s
user    2m30.644s
sys 0m24.843s

VMware Fusion and CentOS 6.5

real    124m32.479s
user    3m5.562s
sys 0m28.180s

VirtualBox and Ubuntu 12.04

n/a

VirtualBox and CentOS 6.5

n/a

Notes on VirtualBox:
During the testing for anything on VirtualBox in this list, I encountered no small number of hard crashes. For CentOS alone, about 15 hours were sunk. As I stated before, I don’t have a horse in the race between one or the other, but damn was this a challenge. If you can make it run and don’t mind helping, drop a comment with the output of time vagrant up --provider=virtualbox for this environment, and what hardware you’re running it on.
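If you do send in numbers, the easiest way to compare runs like these is to normalize the ‘real’ line of `time` output to seconds. A small helper sketch (the sample strings in the usage note are the Fusion figures from above):

```python
import re

def real_seconds(time_output):
    """Pull the wall-clock ('real') figure out of `time` output, in seconds.

    Expects the common 'real    63m25.441s' format.
    """
    m = re.search(r"real\s+(\d+)m([\d.]+)s", time_output)
    if m is None:
        raise ValueError("no 'real NNmNN.NNNs' line found")
    return int(m.group(1)) * 60 + float(m.group(2))
```

By this measure, the CentOS build on Fusion (124m32.479s) took about 1.96x as long as the Ubuntu one (63m25.441s).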

OSCON – Day 00 (or is it 1?)

Ok, so yesterday (two days ago now) started with “Day -1”, but mayhaps I’m just an idiot, who knows. So, today’s workshop? Well, it was pretty amazing. Lots of good questions, and the lab worked for most folks. We did have some questions come up about which CLI commands we were copying/pasting during the preso part. Those can be found here.

As a reminder, the materials (slides, and everything else for those playing along at home) can be found here.

Other things to note from the day:
- The Japanese Gardens in Portland are pretty amazing
- If you tell people you are Batman, they don’t think you’re too crazy. At least not in Portland.
- Converse makes some decent shoes.
- So. Much. Batman.