OpenStack: import existing Ceph volumes in Cinder
This method can be useful while migrating from one OpenStack to another.
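As a rough illustration, here is a minimal sketch of one possible way to adopt an existing RBD image into Cinder: create a placeholder volume of the same size, then swap the existing image in under the `volume-<uuid>` name. The pool name `volumes` and the image name `legacy-disk` are made-up examples, and this is not necessarily the exact procedure from the post.

```bash
# Sketch: bring an existing RBD image under Cinder's control.
# Pool and image names are hypothetical.
SIZE_GB=20
VOL_ID=$(cinder create --display-name imported-legacy-disk ${SIZE_GB} | awk '/ id /{print $4}')

rbd -p volumes rm "volume-${VOL_ID}"              # drop the empty image Cinder just created
rbd -p volumes mv legacy-disk "volume-${VOL_ID}"  # rename the existing image into place
```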
Ceph works best with RAW images. However, uploading RAW images to Glance is painful because it takes a while. Let's see how we can make our lives easier.
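For reference, the standard (slow) workflow that this tries to improve on looks roughly like this; the file and image names are examples:

```bash
# Convert a QCOW2 cloud image to RAW, then upload it to Glance.
qemu-img convert -f qcow2 -O raw cirros.qcow2 cirros.raw
glance image-create --name cirros-raw --disk-format raw \
    --container-format bare --file cirros.raw
```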
The OpenStack documentation often recommends enabling the Glance cache when using the default file store; with the Ceph RBD backend, things are slightly different.
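As a sketch of what an RBD-backed Glance commonly looks like (not necessarily the exact setup the post recommends; the pool and user names are assumptions), the relevant bits of glance-api.conf are along these lines:

```bash
# Hypothetical excerpt of /etc/glance/glance-api.conf for an RBD-backed Glance.
# Exposing the image location lets Cinder/Nova do copy-on-write clones from
# the Glance image instead of relying on a local image cache.
cat >> /etc/glance/glance-api.conf <<'EOF'
[DEFAULT]
show_image_direct_url = True

[glance_store]
default_store = rbd
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
EOF
```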
Since Juno, it is no longer possible for a regular user to create public images or to make one of their images/snapshots public. Even though this new Glance policy is a good initiative, let's see how we can get the old behavior back.
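A minimal sketch of what relaxing the policy could look like, assuming the stock policy file at /etc/glance/policy.json with the Juno default of `role:admin` for `publicize_image`:

```bash
# Relax the Juno default so non-admin users may publish images again.
# Assumes the default policy file path and default rule value.
sed -i 's/"publicize_image": "role:admin"/"publicize_image": ""/' \
    /etc/glance/policy.json
```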
The next OpenStack summit is just around the corner and, as usual, Josh Durgin and I will lead the Ceph and OpenStack design session. The session is scheduled for November 3 from 11:40 to 13:10; you can find the description here. The etherpad is already available here, so don't hesitate to add your name to the list along with your main subject of interest. See you in Paris!
Save the date (September 18, 2014) and join us for the new edition of the Ceph Days in Paris. I will be talking about the amazing new stuff that happened during this (not yet finished) Juno cycle. Actually, I've never seen so many patch sets in one cycle :D. Things are going well for Ceph in OpenStack! Deploying Ceph with Ansible will be part of the talk as well.
The full schedule is available; don't forget to register for the event.
Hope to see you there!
Computes with a Ceph image backend and computes with a local image backend. At some point, you might want to build hypervisors that use their local storage for virtual machine root disks. Local storage helps you maximize IOPS and keeps IO latency to a minimum (compared to network block storage). However, you lose handy features such as live migration (block migration is still an option, but slower), and data on the hypervisors does not have a good availability level either: if a compute node crashes, users cannot access their virtual machines for a certain amount of time. On the other hand, you may want hypervisors where virtual machine root disks live in Ceph. Then you can seamlessly move virtual machines with live migration, and since the disks are highly available, if a compute node crashes you can quickly evacuate the virtual machines to another compute node. Ultimately, your goal is to dissociate the two groups. Fortunately, OpenStack provides a mechanism based on host aggregates that helps you achieve this: thanks to aggregate filters, you can expose each set of hypervisors through dedicated flavors, as in the sketch below.
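A minimal sketch with the era-appropriate nova CLI. The aggregate, host, and flavor names, as well as the `storage` metadata key, are made up, and it assumes `AggregateInstanceExtraSpecsFilter` is enabled in the scheduler:

```bash
# Ceph-backed hypervisors
nova aggregate-create ceph-computes
nova aggregate-set-metadata ceph-computes storage=ceph
nova aggregate-add-host ceph-computes compute-ceph-01

# Local-storage hypervisors
nova aggregate-create local-computes
nova aggregate-set-metadata local-computes storage=local
nova aggregate-add-host local-computes compute-local-01

# Tie flavors to one aggregate or the other; the scheduler matches these
# extra specs against the aggregate metadata (AggregateInstanceExtraSpecsFilter).
nova flavor-key m1.small.ceph set aggregate_instance_extra_specs:storage=ceph
nova flavor-key m1.small.local set aggregate_instance_extra_specs:storage=local
```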
Running OpenStack in production can be difficult, so every optimization is worth taking :).
Just back from the Juno summit: I attended most of the storage sessions and was extremely shocked by how much storage vendors avoided Ceph. However, LVM, the reference storage backend for Cinder, was always mentioned. Maybe it is a sign that Ceph is taking over? Speaking of LVM, the latest OpenStack survey showed that it was the most used backend.
Six months have passed since Hong Kong, and it is always really exciting to see all the folks from the community gathered together in a (bit chilly) convention center. As far as I could see from the submitted and accepted talks, Ceph continues its road to the top; there is still a huge and growing interest in Ceph. On Tuesday, May 13th, Josh and I led a (three-hour-long) session to discuss the next steps of the Ceph integration into OpenStack. To be honest, back when we were in Hong Kong, I believe we were too optimistic about our roadmap. So this time we decided to be a little more realistic and took a step-by-step approach rather than "let's add everything we can". However, this does not mean that the Icehouse cycle was limited in terms of features, not at all! Indeed, the Icehouse cycle saw some tremendous improvements. I don't know if you remember my article from last year, right after the Icehouse summit, but there was a feature I wanted so much: RADOS as a backend for Swift. And yes, we made it, so if you want more details you'd better keep reading :).