OpenStack Glance NFS and Compute local direct fetch

This feature has been around for quite a while now; if I remember correctly, it was introduced in the Grizzly release. However, I never really got the chance to play around with it. Let’s assume that you use NFS to store Glance images. We know that the default boot mechanism fetches the instance image from Glance to the Nova compute node, which basically streams the image over the network and makes the boot process longer. OpenStack Nova can be configured to access Glance images directly from a local filesystem path instead, which is ideal for our NFS scenario.
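
Here is a minimal, hedged configuration sketch of what this usually involves, assuming the Glance filesystem store sits on an NFS export that is also mounted on the compute nodes (the filesystem id "nfsstore" and the mountpoint below are made-up examples):

# glance-api.conf: expose direct file URLs and describe the local store
show_image_direct_url = True
filesystem_store_metadata_file = /etc/glance/filesystem_store_metadata.json

# /etc/glance/filesystem_store_metadata.json
{"id": "nfsstore", "mountpoint": "/var/lib/glance/images/"}

# nova.conf on the compute nodes: allow direct file access to that store
[DEFAULT]
allowed_direct_url_schemes = file

[image_file_url]
filesystems = nfsstore

[image_file_url:nfsstore]
id = nfsstore
mountpoint = /var/lib/glance/images/

With something like this in place, Nova copies the image straight from the NFS mountpoint instead of streaming it through the Glance API.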

Read On...

OpenStack guest and watchdog

Libvirt has the ability to configure a watchdog device for QEMU guests. When the guest operating system hangs or crashes, the watchdog device is used to automatically trigger an action. Watchdog support was added to OpenStack in the Icehouse release.
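
For illustration, the watchdog is typically requested through either a flavor extra spec or an image property (the flavor and image names below are just examples):

$ nova flavor-key m1.small set hw:watchdog_action=reset
$ glance image-update --property hw_watchdog_action=reset cirros-0.3.2

Valid actions are disabled, reset, poweroff, pause and none; libvirt then adds an i6300esb watchdog device to the guest definition.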

Read On...

Quick and efficient Ceph DevStacking

Recently I built a little repository on github.com/ceph where I put two files to help you build your Ceph-enabled DevStack.

$ git clone https://git.openstack.org/openstack-dev/devstack
$ git clone https://github.com/ceph/ceph-devstack.git
$ cp ceph-devstack/local* devstack
$ cd devstack
$ ./stack.sh


Happy DevStacking!

Read On...

OpenStack: perform consistent snapshots with Qemu Guest Agent

A while back, I wrote an article about taking consistent snapshots of your virtual machines in your OpenStack environment. However, that method was really intrusive since it required logging into the virtual machine and manually triggering a filesystem freeze. In this article, I will use a different approach to achieve the same goal without the need to be inside the virtual machine. The only requirement is that the virtual machine runs the qemu-guest-agent.
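
To give an idea of the moving parts, here is a rough sketch (the image and libvirt domain names are placeholders): the image advertises the agent so that Nova wires up the virtio serial channel for it, and the freeze/thaw can then be driven from the hypervisor.

$ glance image-update --property hw_qemu_guest_agent=yes trusty-server-cloudimg
$ virsh qemu-agent-command instance-0000002e '{"execute": "guest-fsfreeze-freeze"}'
$ virsh qemu-agent-command instance-0000002e '{"execute": "guest-fsfreeze-thaw"}'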

Read On...

OpenStack and Ceph: RBD discard

Only Magic card players might recognize that post picture :) (if you’re interested)


I have been waiting for this for quite a while now. Discard, also called trim (on SSDs), is a space reclamation mechanism that allows you to reclaim unused blocks on a disk. RBD images are sparse by default, which means the space they occupy grows as you write data (the opposite of preallocation), so as you keep writing to your filesystem you will eventually fill the whole device. On the Ceph side, nothing knows what is happening inside the filesystem, so we actually end up with fully allocated blocks… In the end the cluster believes that the RBD images are fully allocated. From an operator perspective, having the ability to reclaim the space your running instances no longer use is really handy.
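
For the curious, this is roughly how it gets enabled on a libvirt/KVM compute node with RBD-backed disks. Discard needs a bus that supports it, hence the virtio-scsi image properties (the image name is an example):

# nova.conf, [libvirt] section on the compute nodes
hw_disk_discard = unmap

$ glance image-update --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi trusty-server-cloudimg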

Read On...

Ceph: recover an RBD image from a dead cluster

Many years ago I came across a script made by Shawn Moore and Rodney Rymer from Catawba University. The purpose of this tool is to reconstruct an RBD image. Imagine your cluster is dead: all the monitors got wiped and you don’t have any backup (I know, what could possibly happen?). However, all your objects remain intact.
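
To give an idea of the principle (this is not the actual script, just a hedged sketch): an RBD image is a set of fixed-size RADOS objects sharing a name prefix, so once you have salvaged the raw object files from the OSDs you can rebuild the image by writing each object back at its offset. Assuming format 1 objects with the default 4 MB object size copied into the current directory, and a made-up block name prefix:

#!/bin/bash
# reassemble a sparse image from salvaged RBD objects
PREFIX="rb.0.1234.6b8b4567"        # hypothetical block_name_prefix
OBJ_SIZE=$((4 * 1024 * 1024))      # default 4 MB objects

for obj in ${PREFIX}.*; do
    idx=$((16#${obj##*.}))         # the object suffix is the index in hex
    dd if="$obj" of=recovered.img bs=$OBJ_SIZE seek=$idx conv=notrunc,sparse
done

The real tool does quite a bit more (it pulls the prefix, order and size out of the image header), so go read it.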

I’ve always wanted to blog about this tool, simply to advocate it and make sure that people can use it. Hopefully it will be good publicity for this tool :-).

Read On...

Ceph and KRBD discard

Discard brings a space reclamation mechanism to the kernel RBD module. Having this kind of support is really crucial for operators and eases capacity planning. RBD images are sparse, thus their size right after creation is 0 MB. The main issue with sparse images is that they grow until they eventually reach their full provisioned size. The thing is, Ceph doesn’t know anything about what is happening on top of that block device, especially if a filesystem sits on it. You can easily fill the entire filesystem and then delete everything; Ceph will still believe the blocks are fully used and will keep reporting that. However, thanks to discard support on the block device, the filesystem can send discard commands down to the block layer, and the storage will eventually free up the unused blocks.
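
As a quick example with a mapped RBD device (the pool, image and mountpoint are placeholders), you can either mount with the discard option so blocks are released as files get deleted, or run fstrim in batch, which is usually cheaper:

$ rbd map rbd/myimage
$ mount -o discard /dev/rbd0 /mnt
$ fstrim -v /mnt

Keep in mind that the kernel RBD module only gained discard support fairly recently (around the 3.18 kernel), so check your kernel version first.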

Read On...