Features for the seventh Ceph release (Giant) were frozen three weeks ago.
Giant is thus just around the corner, and bugs are currently being fixed.
This article is a quick preview of a new feature.
Read On...
Useful for understanding benchmark results and Ceph's second write penalty (this phenomenon is explained here in section I.1).
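A rough back-of-the-envelope illustration of that penalty, assuming the journal lives on the same spinning disk as the OSD data (the figures are purely illustrative): every client write hits the disk twice, once for the journal and once for the filestore, so a disk capable of roughly 110 MB/s of sequential writes ends up delivering about 110 / 2 ≈ 55 MB/s to clients.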
Read On...
Save the date (September 18, 2014) and join us at the new edition of the Ceph Days in Paris.
I will be talking about the amazing new things that happened during this (not yet finished) Juno cycle.
Actually I’ve never seen so many patch sets in one cycle :D.
Things are doing well for Ceph in OpenStack!
Deploying Ceph with Ansible will be part of the talk as well.
The full schedule is available; don't forget to register for the event.
Hope to see you there!
Read On...
Compute nodes with a Ceph image backend and compute nodes with a local image backend.
At some point, you might want to build hypervisors that use their local storage for virtual machine root disks.
Using local storage helps you maximise your IOs and reduces IO latency to a minimum (compared to network block storage).
However, you lose handy features like live migration (block migration is still an option, but it is slower).
Data on the hypervisors does not have a good availability level either.
If a compute node crashes, users will not be able to access their virtual machines for a certain amount of time.
On the other hand, you may want to build hypervisors where virtual machine root disks live in Ceph.
You will then be able to seamlessly move virtual machines with live migration.
Virtual machine disks will be highly available, so if a compute node crashes you can quickly evacuate the virtual machines to another compute node.
Ultimately, your goal is to dissociate the two. Fortunately, OpenStack provides a mechanism based on host aggregates that will help you achieve this.
Thanks to aggregate filters, you will be able to expose each type of hypervisor.
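A minimal sketch of the host aggregate side of things (the aggregate, host and flavor names below are made up for illustration, and depending on your client version you may need the aggregate ID instead of its name):

```bash
# Enable the filter in nova.conf on the scheduler node, e.g.:
#   scheduler_default_filters = AggregateInstanceExtraSpecsFilter,RetryFilter,ComputeFilter,...

# Group the Ceph-backed hypervisors into an aggregate and tag it
nova aggregate-create ceph-hypervisors
nova aggregate-add-host ceph-hypervisors compute-ceph-01
nova aggregate-set-metadata ceph-hypervisors storage=ceph

# Same thing for the local-storage hypervisors
nova aggregate-create local-hypervisors
nova aggregate-add-host local-hypervisors compute-local-01
nova aggregate-set-metadata local-hypervisors storage=local

# Flavors carry the matching key, so the scheduler places instances
# on the right group of hypervisors
nova flavor-key m1.ceph set aggregate_instance_extra_specs:storage=ceph
nova flavor-key m1.local set aggregate_instance_extra_specs:storage=local
```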
Read On...
The use case is simple: I want to use both SSD and SATA disks within the same machine and ultimately create pools backed by either the SSD or the SATA disks.
In order to achieve this, we need to modify the CRUSH map.
My example has 2 SATA disks and 2 SSD disks on each host, with 3 hosts in total.
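To give an idea of where this is going, here is a hypothetical sketch of the workflow (bucket names, IDs and weights are made up; the real map obviously depends on your OSD layout):

```bash
# Decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# Add something along these lines to crushmap.txt: a dedicated root
# for the SSD OSDs and a rule that only picks OSDs under that root
#
# root ssd {
#     id -10              # made-up negative bucket id
#     alg straw
#     hash 0              # rjenkins1
#     item ceph-host1-ssd weight 1.000
#     item ceph-host2-ssd weight 1.000
#     item ceph-host3-ssd weight 1.000
# }
#
# rule ssd {
#     ruleset 1
#     type replicated
#     min_size 1
#     max_size 10
#     step take ssd
#     step chooseleaf firstn 0 type host
#     step emit
# }

# Recompile, inject the new map and point a pool at the new rule
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin
ceph osd pool create ssd-pool 128 128
ceph osd pool set ssd-pool crush_ruleset 1
```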
Read On...
Running OpenStack in production can be difficult, so every optimisation is good to take :).
Read On...
While running a playbook on a host, you can request information about other nodes.
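As a teaser, a tiny hypothetical example: the hostvars dictionary exposes the facts gathered on any other host in the inventory, provided facts have been collected for it (host and fact names below are placeholders):

```yaml
# Minimal sketch: read a fact gathered on another node from within a play
- hosts: clients
  tasks:
    - name: Display the IP address that was gathered on ceph-mon1
      debug:
        msg: "{{ hostvars['ceph-mon1']['ansible_default_ipv4']['address'] }}"
```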
Read On...
A couple of months ago, Dan Mick posted a nice article introducing RBD support for iSCSI/TGT.
In this article, I will have a look at it.
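To give a flavour of it, here is a hypothetical target definition; it assumes a tgt build with the RBD backing store compiled in, and the pool, image and IQN names are made up:

```bash
# /etc/tgt/conf.d/rbd-example.conf -- expose an RBD image over iSCSI
# <target iqn.2014-08.com.example:rbd-iscsi>
#     driver iscsi
#     bs-type rbd
#     backing-store rbd/myimage    # <pool>/<image>
# </target>

# Reload the running tgt daemon so it picks up the new target
tgt-admin --update ALL
```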
Read On...
Quick tip to enable dynamic subtree partitioning with multiple Ceph MDS servers.
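In essence it boils down to raising the number of active MDS daemons; the exact syntax has changed across releases, so treat this as a sketch:

```bash
# Allow two active MDS daemons so the metadata tree is partitioned
# between them (newer releases use `ceph fs set <fs> max_mds <n>`)
ceph mds set_max_mds 2
```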
Read On...
Moving further along the Software Defined Storage principles, Ceph, with its latest stable version, introduced a new mechanism called cache pool tiering.
It brings a really interesting concept that will help us provide scalable distributed caching.
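To give a rough idea of the mechanics, the commands below sketch attaching a fast pool in front of a slower one in writeback mode (the pool names are made up for illustration):

```bash
# Put the ssd-cache pool in front of the sata-backed pool
ceph osd tier add sata-pool ssd-cache
ceph osd tier cache-mode ssd-cache writeback
# Redirect client traffic for sata-pool through the cache tier
ceph osd tier set-overlay sata-pool ssd-cache
```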
Read On...