Using top to Find Memory Overhead for qemu/KVM

Having jumped from the VMware world, where the memory overheads and performance characteristics of ESXi under varied workloads are well known, I was surprised to learn that the same was not always true for KVM. In this post we talk a bit about KVM/qemu and how its memory overhead breaks down.

Understanding Memory in KVM

The gist of KVM memory management, which you can read about in detail here, is that each guest is more or less a Linux process. That is, at spawn time, if the guest has 1GB of memory assigned, KVM will malloc 1GB and then let the system manage it as needed.

This enables interesting things like KSM, or Kernel Samepage Merging, which on the surface works much like Transparent Page Sharing (TPS) in the ESXi world. That is, the Linux kernel will periodically scan memory for identical pages and merge them.
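You can check whether KSM is actually doing anything on a host by reading its counters under /sys. A minimal sketch, assuming a Linux host with KSM compiled in and the usual 4KB page size:

```shell
# Quick KSM sanity check on the KVM host. pages_sharing counts guest pages
# currently deduplicated; assuming 4KB pages, multiply out to get KB saved.
if [ -r /sys/kernel/mm/ksm/pages_sharing ]; then
    msg="KSM is deduplicating $(( $(cat /sys/kernel/mm/ksm/pages_sharing) * 4 )) KB of memory"
else
    msg="KSM not available on this host"
fi
echo "$msg"
```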

Still with me?

A quick TL;DR for understanding KVM memory: KVM guests and their memory are managed as Linux processes. Thus, they inherit and share memory from the parent qemu process, and KSM in turn reduces memory duplication.

Finding the Per VM Overhead

So, knowing that memory is inherited from the parent qemu process and that KSM will in turn 'dedupe' memory pages, how do we find the per-VM overhead? The answer isn't exactly straightforward. That is, there is no esxtop for KVM that will tell you this outright. That said, with top (or some scripting) you can get from A to overhead fairly simply.

Our Environment

Our environment is a nested KVM setup on Ubuntu 14.04 with 6GB of total RAM. From there we boot 2x Fedora 20 instances with 2GB of RAM assigned to each:

$ nova flavor-list
| ID  | Name      | Memory_MB | Disk |
| 1   | m1.tiny   | 512       | 1    |
| 2   | m1.small  | 2048      | 20   |


$ nova show 6e551cb0-9ace-4084-afb3-3f46274f3717
| Property                             | Value
| created                              | 2014-07-26T02:54:40Z
| flavor                               | m1.small (2)

Finding the Overhead With top

top is a great tool, and fairly easy to use. For what we’re looking for, we’re going to sort on memory “M” to get an idea of what’s going on:

18239 libvirt+  20   0 3487916 289892   9448 S   0.7  4.7   0:51.00 qemu-system-x86
15453 libvirt+  20   0 3491580 287156   9448 S   0.7  4.7   1:01.33 qemu-system-x86

There are a few interesting columns here:
- VIRT – Total virtual memory associated with the process, i.e. everything the process has requested. In this case about 3.5GB.
- RES – Resident memory size, the physical memory currently backing the process. In this case about 283MB.
- SHR – Memory shared with other processes, including KSM-merged pages. In this case about 9.2MB.

So, looking at these, for a 2GB VM the qemu process claims about 3.5GB of virtual memory, or roughly 1.4GB of overhead beyond the assigned 2GB. Of that, only about 283MB is currently resident, and about 9MB of the resident set is shared with other processes.
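To turn those columns into numbers without eyeballing top, a small awk pass works. The sketch below runs against the two qemu lines captured above (column 5 is VIRT in KB, column 6 is RES; the 2GB assignment is hardcoded as an assumption, and in practice you would feed it live ps or top output instead of a here-document):

```shell
# Per-VM memory overhead from the two qemu top lines captured above.
ASSIGNED_KB=$((2 * 1024 * 1024))   # each guest was assigned 2GB
report=$(awk -v assigned="$ASSIGNED_KB" '{ printf "pid %s: virt %.1f MB, res %.1f MB, virt overhead %.1f MB\n", $1, $5/1024, $6/1024, ($5 - assigned)/1024 }' <<'EOF'
18239 libvirt+  20   0 3487916 289892   9448 S   0.7  4.7   0:51.00 qemu-system-x86
15453 libvirt+  20   0 3491580 287156   9448 S   0.7  4.7   1:01.33 qemu-system-x86
EOF
)
echo "$report"
```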


In this post, we explained a bit about how memory works in KVM. We then booted some instances and used top, sorted by memory ("M"), to find the memory used by our VMs.

top is by no means the only way to do this. However, it provides a wealth of information and should be one of the first tools in your toolkit.


Some Numbers

As I get into this performance work with Mr. Kevin, I thought I'd share a few numbers from early builds and whatnot. You know, the sort of "should we do this?" kind of stuff.

These numbers are for a 5 node build using the Vagrantfiles here:

- Controller 1: 11.5GB
- Controller 2: 12GB
- Compute 1: 22GB
- Cinder 1: 1.5GB

The hardware involved is a late 2013 Retina MacBook Pro:
- 2.5GHz Core i5
- 8GB RAM

Note, this environment will be similar to, but not the only, environment we build out for the OpenStack Summit and the Performance book. The theory is that a change or two here can be tested for positive or negative effect more quickly than by rebuilding a full deployment.


Below are the best results of 3 runs for each setup.

Some other things about this:
- I have no horse in the race for either platform or OS. Just running some vagrant-ups.
- This is done over 3 runs of each, mostly while not using the laptop. Basically, I did a /lot/ of reading during this time.
- These are pretty vanilla. That is, no caching, no performance hacks on Fusion or vBox.
- Virtualbox and my MacBook don't get along under the load of this environment. After hours of lost time, many hard crashes, and such, I'm going with "n/a" on performance numbers for vBox. At least for now.

VMware Fusion and Ubuntu 12.04

real    63m25.441s
user    2m30.644s
sys 0m24.843s

VMware Fusion and Centos 6.5

real    124m32.479s
user    3m5.562s
sys 0m28.180s

Virtualbox and Ubuntu 12.04

n/a (see the Virtualbox notes below)

Virtualbox and Centos 6.5

n/a (see the Virtualbox notes below)
Notes on Virtualbox:
During the testing for anything on Virtualbox in this list, I encountered no small number of hard crashes. For CentOS alone, about 15 hours was sunk. As I stated before, I don't have a horse in the race between one or the other, but damn was this a challenge. If you can make it run and don't mind helping, drop a comment with the output of time vagrant up --provider=virtualbox for this environment, and what hardware you're running it on.

OSCON – Day 00 (or is it 1?)

Ok, so yesterday (two days ago now) started with "Day -1", but mayhaps I'm just an idiot, who knows. So, today's workshop? Well, it was pretty amazing. Lots of good questions, and the lab worked for most folks. We did have some questions come up around what CLI commands we were copying/pasting during the preso part. Those can be found here.

As a reminder, the materials (slides, and everything else for those playing along at home) can be found here.

Other things to note from the day:
- The Japanese Gardens in Portland are pretty amazing
- If you tell people you are Batman, they don’t think you’re too crazy. At least not in Portland.
- Converse makes some decent shoes.
- So. Much. Batman.

OSCON – Day -1 Getting Started

All kinds of checked into the hotel room. Speaker badge acquired, and scoped out the presentation room for tomorrow. If you haven’t already and are going to be here, you can download all of the materials here.

If your hotel wifi sucks, we'll have all of this on USB keys.

Other Day 1 Tidbits – Pivotal Yak Hoodie

The Yak hoodie instructions are teh broke. That said, @eglute helped me get it going. If the instructions on the paper don’t work for you, download the cf client here & try this:

$ cf target
Incorrect Usage.
   target - Set or view the targeted org or space
   cf target [-o ORG] [-s SPACE]
   -o   organization
   -s   space
$ cf login
API endpoint>

Also, after bundle install, cf push fails w/o a name.

$ cf push
Error: App name is a required field
$ cf push lolpancakes
App started
Showing health and status for app lolpancakes in org bunchc-org / space development as
requested state: started
instances: 1/1
usage: 1G x 1 instances
     state     since                    cpu    memory        disk
#0   running   2014-07-19 05:41:16 PM   0.0%   22.8M of 1G   45.6M of 1G


OSCON Lab Materials

tl;dr Download our OSCON lab materials here.

As a follow-up on my coming to OSCON, I thought it prudent to provide some info & downloads for the lab ahead of time.

Lab Materials

While we will have USB keys in the tutorial for everyone, we figure some of y’all might want to get started early. With that in mind, the lab materials can be downloaded here, but be aware, it’s about 4GB of stuff to download.

  • Slides – Both the PPT & PDF of the slides
  • openstackicehouse.ova – The vAPP we will use in the lab
  • OpenStack_command_guide_reference.pdf – A quick reference for OpenStack CLI commands
  • Access_virtualbox_allinone.pdf – A guide for accessing the lab
  • cirros-0.3.1-x86_64-disk.img – Used in the labs
  • Osco Solutions/ – All of the labs we will be doing
  • Couch to OpenStack/ – An additional 12 hours of Getting Started with OpenStack Material
  • VirtualBox/ – Contains the VirtualBox installer for OSX, Linux, and Windows

Really, you can get the materials here.


To be successful in the lab, there are a few things you will need. None of these are too complex or too deep, but having them will improve your experience overall.

  • A laptop with a minimum of 4GB free ram
  • VirtualBox or VMware Fusion/Workstation/Player installed
  • An SSH client. On Windows, Putty works well.

Some Random Statistics

Building the USB keys was an exercise in insanity. The setup looks kinda like this:

The fan was added after the first batch nearly melted the USB hub. The smell of burnt silicon was pretty intense.

  • Each key contains about 4GB of data.
  • We’re copying them 24 at a time and seeing:
    • 40 min to finish all 24 disks
    • 45MB/sec (Yes Megabytes) sustained transfer
    • 12,000 IOPS largely write

USB Key Duplication on OSX on the Cheap

Edit: As I got a bit deeper into the copies, a new method was needed.


First, make an image of the usb disk in question. To do this, open Disk Utility, and then:

  1. Click File
  2. Click New
  3. Click “New Image From Folder…”
  4. Select your folder
  5. Wait

Next, find the image file in Finder, mount it, and note where it was mounted.
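The same image can also be built and mounted from the terminal with hdiutil, the CLI behind Disk Utility. A sketch, where the folder and image names are made-up placeholders for your own paths:

```shell
# Hypothetical source folder and image names; adjust to your environment.
SRC="$HOME/usb-master"        # folder holding the key's contents
IMG="$HOME/usb-master.dmg"
if command -v hdiutil >/dev/null 2>&1; then
    hdiutil create -srcfolder "$SRC" -format UDRO "$IMG"  # read-only image
    hdiutil attach "$IMG"                                 # prints the mount point
    status="image created and attached"
else
    status="hdiutil not found; this step is macOS-only"
fi
echo "$status"
```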

Methodology 1

This is a variant of the work here.

Now that you’ve got the image and it’s mounted, plug in the USB hub containing your keys and run the following from your terminal:

diskutil list
$ diskutil list
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *251.0 GB   disk0
... snip
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *8.2 GB     disk3
   1:                 DOS_FAT_32 NO NAME                 8.2 GB     disk3s1

What you are looking for here is the first and last /dev/disk# that represent your USB keys. In my case this is 3 – 23. From there we start the copy:

for i in `jot 21 3`; do asr --noverify --erase --noprompt --source /Volumes/No\ Name --target /dev/disk${i}s1 & done

In the above, note that --source specifies the /Volumes/No\ Name path where we mounted the image, and jot 21 3 counts 21 disks starting at 3 (disks 3 through 23). The loop then copies the image to each USB disk in parallel.

Methodology 2

This is a variant of the work here.

Now that you’ve got the image and it’s mounted, plug in the USB hub containing your keys and run the following from your terminal:

diskutil list
$ diskutil list
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *251.0 GB   disk0
... snip
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *8.2 GB     disk3
   1:                 DOS_FAT_32 NO NAME                 8.2 GB     disk3s1

What you are looking for here is the first and last /dev/disk# that represent your USB keys. In my case this is 3 – 27.

First unmount the disks:

for i in `jot 25 3`; do diskutil unmountDisk /dev/disk${i}; done

Next, use homebrew to install PV if you don’t have it:

brew install pv

Finally start the copy:

sudo dd if=/dev/disk2 |pv| tee >(sudo dd of=/dev/disk3 bs=16m) >(sudo dd of=/dev/disk4 bs=16m) >(sudo dd of=/dev/disk5 bs=16m) >(sudo dd of=/dev/disk6 bs=16m) >(sudo dd of=/dev/disk7 bs=16m) >(sudo dd of=/dev/disk8 bs=16m) >(sudo dd of=/dev/disk9 bs=16m) >(sudo dd of=/dev/disk10 bs=16m) >(sudo dd of=/dev/disk11 bs=16m) >(sudo dd of=/dev/disk12 bs=16m) >(sudo dd of=/dev/disk13 bs=16m) >(sudo dd of=/dev/disk14 bs=16m) >(sudo dd of=/dev/disk15 bs=16m) >(sudo dd of=/dev/disk16 bs=16m) >(sudo dd of=/dev/disk17 bs=16m) >(sudo dd of=/dev/disk18 bs=16m) >(sudo dd of=/dev/disk19 bs=16m) >(sudo dd of=/dev/disk20 bs=16m) >(sudo dd of=/dev/disk21 bs=16m) >(sudo dd of=/dev/disk22 bs=16m) >(sudo dd of=/dev/disk23 bs=16m) >(sudo dd of=/dev/disk24 bs=16m) >(sudo dd of=/dev/disk25 bs=16m) >(sudo dd of=/dev/disk26 bs=16m) | sudo dd of=/dev/disk27 bs=16m

Ok, that is a single line. It is also terrible, terrible, terrible, but it works. Some notes:
You need a >(sudo dd) section for each disk except the last one, which takes the final dd. You will also need to change the disk numbers to match your environment.
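Rather than hand-typing that pipeline, a short loop can generate it. A sketch, using the disk range from the example above (seq is used here for portability; jot 24 3 would do the same on OSX):

```shell
# Generate the tee/dd pipeline instead of typing it by hand.
# Every disk but the last gets a >(dd) process substitution; the last
# disk takes the final dd on the end of the pipe.
FIRST=3; LAST=27; SRC=/dev/disk2
cmd="sudo dd if=$SRC |pv| tee"
for i in $(seq "$FIRST" $((LAST - 1))); do
    cmd="$cmd >(sudo dd of=/dev/disk${i} bs=16m)"
done
cmd="$cmd | sudo dd of=/dev/disk${LAST} bs=16m"
echo "$cmd"   # inspect it first, then run it with: eval "$cmd"
```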

Hi-ho Hi-ho, Off to OSCON We Go!

For those that didn’t get the message on the twitters, I will be at OSCON this year. Specifically, I will be helping Egle run a “Getting Started with OpenStack” tutorial.

The tutorial will begin with an overview of OpenStack and its different components. We will then provide participants with access to individual OpenStack instances, and walk them through using OpenStack’s web interface, followed by a command line tutorial.
The tutorial will cover the instance life cycle (creation, management, deletion), networking, user management, and how to utilize the different storage services available in OpenStack.

If you will be there, please drop by and say hello!

Installing a New Linux Kernel on Ubuntu 12.04


Whilst this may be common knowledge for those who have been working with Ubuntu for a while, it took me a few Google searches to figure out, so here is the process I have settled on.

sudo apt-get update
# List available versions
apt-cache search linux-image-generic-lts
linux-image-generic-lts-quantal - Generic Linux kernel image
linux-image-generic-lts-raring - Generic Linux kernel image
linux-image-generic-lts-saucy - Generic Linux kernel image
linux-image-generic-lts-trusty - Generic Linux kernel image
# Install one
sudo apt-get install -y linux-image-generic-lts-trusty
sudo reboot

After a reboot, things should come up nice and happy.
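To confirm the new kernel actually took after the reboot, a quick check:

```shell
# With linux-image-generic-lts-trusty installed on 12.04, this should
# report a 3.13-series kernel after the reboot.
kernel=$(uname -r)
echo "Running kernel: $kernel"
```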

Summer Reading List

I’ve normally got at least 3-4 books I’m reading at any given point. Sometimes they’re even related to what I do. That said, I don’t much like the echo chamber that becomes my own head, so, I asked via Twitter this morning what folks are reading and came up with a few good reads:

Work-ish/Business-ish books:

That is to say, books in this list will be related to things at work, be they vSphere or OpenStack books, DevOps, or Presentations, thinking, etc. Basically, nothing involving dragons or so:

Leisure Reading

These are ‘more fun’ books.

Importing OpenStack Keystone Users & Tenants into OpenLDAP

If you’ve been following along, we installed OpenLDAP here; or perhaps you have one of your own going. Either way, the goal of this post is to pull information out of OpenStack Keystone for import into OpenLDAP, preparing us to transition the Keystone backend over to LDAP.

Getting started

To get started, we’ll assume a working LDAP and Keystone setup, and that you are logged into the server(s) hosting each.

Validating connections

Before we get too far into this, let’s validate we’re indeed able to get information from both services.

  • Keystone
# keystone user-list
|                id                |    name    | enabled |       email
| 1ccfc42014e04d4cb1e3818bef855ad0 |   admin    |   True  |   root@localhost
| f9404ed9fd864070a11e0b76671f99c2 | ceilometer |   True  |   heat@localhost
| f7c4d64657ab463cb38812a39b8f8a89 |   cinder   |   True  |  cinder@localhost
  • OpenLDAP
# slapcat
dn: dc=cook,dc=book
objectClass: top
objectClass: dcObject
objectClass: organization

Exporting from keystone to ldif

The ldif format allows us to import easily back into LDAP. What follows is a script that runs some keystone commands, does some output cleanup, and exports the results to an ldif file:

#!/bin/bash
# Output file and LDAP suffix for the generated entries
LDIF=/tmp/keystone.ldif
SUFFIX="dc=cook,dc=book"

echo -n > $LDIF
# Make our OUs
echo "dn: ou=Roles,$SUFFIX" >> $LDIF
echo "objectclass: organizationalunit" >> $LDIF
echo "ou: Roles" >> $LDIF
echo "description: generic groups branch" >> $LDIF
echo -e "\n" >> $LDIF
echo "dn: ou=Users,$SUFFIX" >> $LDIF
echo "objectclass: organizationalunit" >> $LDIF
echo "ou: Users" >> $LDIF
echo "description: generic groups branch" >> $LDIF
echo -e "\n" >> $LDIF
echo "dn: ou=Groups,$SUFFIX" >> $LDIF
echo "objectclass: organizationalunit" >> $LDIF
echo "ou: Groups" >> $LDIF
echo "description: generic groups branch" >> $LDIF
echo -e "\n" >> $LDIF

# One organizationalRole entry per keystone role
for CN in `keystone role-list | awk '($4 != "name") && ($4 != "") {print $4}'`; do
    echo "dn: cn=$CN,ou=Roles,$SUFFIX" >> $LDIF
    echo "objectClass: organizationalRole" >> $LDIF
    echo "cn: $CN" >> $LDIF
    echo -e "\n" >> $LDIF
done

# One inetOrgPerson entry per keystone user
for CN in `keystone user-list | awk '($4 != "name") && ($4 != "") {print $4}'`; do
    echo "dn: cn=$CN,ou=Users,$SUFFIX" >> $LDIF
    echo "objectClass: inetOrgPerson" >> $LDIF
    echo "cn: $CN" >> $LDIF
    echo "sn: cookbook" >> $LDIF
    echo -e "\n" >> $LDIF
done

# One groupOfNames entry per keystone tenant
for CN in `keystone tenant-list | awk '($4 != "name") && ($4 != "") {print $4}'`; do
    echo "dn: cn=$CN,ou=Groups,$SUFFIX" >> $LDIF
    echo "objectClass: groupOfNames" >> $LDIF
    echo "member: cn=admin,$SUFFIX" >> $LDIF
    echo "cn: $CN" >> $LDIF
    echo -e "\n" >> $LDIF
done

This assumes you have connectivity between where you are running the script and Keystone.

This should produce a file that looks like this:

dn: cn=cookbook,ou=Groups,dc=cook,dc=book
objectClass: groupOfNames
cn: cookbook
dn: cn=service,ou=Groups,dc=cook,dc=book
objectClass: groupOfNames
cn: service
dn: cn=Member,ou=Roles,dc=cook,dc=book
objectClass: organizationalRole
cn: Member
dn: cn=_member_,ou=Roles,dc=cook,dc=book
objectClass: organizationalRole

You should now be able to use the resulting ldif file to import these entries into LDAP.
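The actual import can be done with ldapadd. A minimal sketch; the bind DN below is an assumption based on the dc=cook,dc=book suffix used above, so substitute your directory's admin DN, and -W will prompt for its password:

```shell
# Import the generated ldif into OpenLDAP.
# The bind DN is an assumption; match your directory's admin DN.
LDIF=/tmp/keystone.ldif
if command -v ldapadd >/dev/null 2>&1 && [ -f "$LDIF" ]; then
    ldapadd -x -D "cn=admin,dc=cook,dc=book" -W -f "$LDIF"
    result="imported $LDIF"
else
    result="ldapadd or $LDIF not present; run this on the LDAP host"
fi
echo "$result"
```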