Quick and efficient Ceph DevStacking
Recently I built a little repository on github/ceph where I put two files to help you build your DevStack Ceph.
$ git clone https://git.openstack.org/openstack-dev/devstack
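For reference, here is a hedged sketch of what a minimal local.conf could look like when wiring DevStack to Ceph. It follows the devstack-plugin-ceph approach rather than the two files from the repository mentioned above, and the option names may differ between releases.

[[local|localrc]]
# Enable the Ceph plugin (assumed plugin name and URL, adjust to your release)
enable_plugin devstack-plugin-ceph https://git.openstack.org/openstack/devstack-plugin-ceph
# Back Glance, Cinder and Nova with Ceph (option names taken from the plugin's settings)
ENABLE_CEPH_GLANCE=True
ENABLE_CEPH_CINDER=True
ENABLE_CEPH_NOVA=True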
Happy DevStacking!
Many years ago I came across a script made by Shawn Moore and Rodney Rymer from Catawba University. The purpose of this tool is to reconstruct an RBD image. Imagine your cluster is dead: all the monitors got wiped out and you don't have a backup (I know, what could possibly happen?). However, all your objects remain intact.
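To give a rough idea of the principle, here is a hedged sketch (not the actual tool) of how RADOS object names map back to offsets inside an RBD image. It assumes a format 2 image with the default 4 MB object size and, unlike the disaster scenario above, a still-reachable cluster; the real script pulls the same objects straight out of the OSD data directories. The pool name and object prefix are illustrative.

#!/bin/bash
# Sketch: rebuild an RBD image by writing its RADOS objects back at the right offsets.
POOL=rbd                       # illustrative pool name
PREFIX=rbd_data.1234abcd       # illustrative block_name_prefix of the image
OBJ_SIZE=$((4 * 1024 * 1024))  # default RBD object size (4 MB)
OUT=recovered.img

for obj in $(rados -p "$POOL" ls | grep "^${PREFIX}\." | sort); do
    idx=$((16#${obj##*.}))     # the hex suffix is the object's index within the image
    rados -p "$POOL" get "$obj" /tmp/chunk
    dd if=/tmp/chunk of="$OUT" bs=$OBJ_SIZE seek=$idx conv=notrunc 2>/dev/null
done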
I’ve always wanted to blog about this tool, simply to advocate it and make sure that people can use it. Hopefully this will be good publicity for it :-).
Space reclamation support for the Kernel RBD module. Having this kind of support is really crucial for operators and eases your capacity planning. RBD images are sparse, thus their size right after creation is 0 MB. The main issue with sparse images is that they grow over time and eventually reach their full provisioned size. The thing is, Ceph doesn't know anything about what is happening on top of that block device, especially if a filesystem sits on it. You can easily fill the entire filesystem and then delete everything; Ceph will still believe the blocks are fully used and will keep reporting that usage. However, thanks to discard support on the block device, the filesystem can send discard commands down to the block layer, and in the end the storage will free up the blocks.
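As a hedged illustration (the device and mount point below are made up), the filesystem can release freed blocks either continuously with the discard mount option or periodically with fstrim:

$ sudo mount -o discard /dev/rbd0 /mnt/rbd   # send discards as files are deleted
$ sudo fstrim /mnt/rbd                       # or trim all free space in one batch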
OSD performance counters tend to stack up and sometimes the values shown are not really representative of the current environment. Thus it is quite useful to reset the counters and get fresh values, which is exactly what this feature was added for.
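For illustration, the counters can be dumped and reset through the OSD admin socket; the OSD id is just an example and the perf reset command may not be available on older releases:

$ sudo ceph daemon osd.0 perf dump        # current counter values
$ sudo ceph daemon osd.0 perf reset all   # reset every counter for this OSD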
While playing with Ceph on DevStack I noticed that after several rebuild I ended up with the following error from nova-scheduler:
Secret not found: rbd no secret matches uuid '3092b632-4e9f-40ca-9430-bbf60cefae36'
Actually this error is reported by libvirt itself, which somehow keeps the secret in memory (I believe) even when a new virsh secret is applied. The only solution I have found so far is to restart libvirt:
$ sudo service libvirt-bin restart
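If the restart alone is not enough, a hedged follow-up is to re-define the secret so libvirt and Nova agree on the UUID again; the secret.xml file and the client.cinder user below are assumptions taken from a typical Ceph/OpenStack setup, and the UUID is the one from the error above:

$ sudo virsh secret-define --file secret.xml
$ sudo virsh secret-set-value --secret 3092b632-4e9f-40ca-9430-bbf60cefae36 \
    --base64 "$(sudo ceph auth get-key client.cinder)"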
Quick tip to collect Kernel RBD logs.
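A hedged example of how this can be done with the kernel's dynamic debug facility (assumes CONFIG_DYNAMIC_DEBUG is enabled and debugfs is mounted):

$ echo 'module rbd +p' | sudo tee /sys/kernel/debug/dynamic_debug/control
$ echo 'module libceph +p' | sudo tee /sys/kernel/debug/dynamic_debug/control
$ dmesg | grep -iE 'rbd|libceph'   # the messages land in the kernel ring buffer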
Quick tip on how to retrieve cache statistics from a cache pool.
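For example (the pool name hot-pool is illustrative and counter names vary across releases), cache-tier activity can be inspected with:

$ ceph df detail                                 # per-pool usage, dirty objects for cache pools
$ ceph osd pool stats hot-pool                   # recent client I/O against the cache pool
$ sudo ceph daemon osd.0 perf dump | grep tier   # tier promote/flush/evict counters on one OSD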
Introducing the ability to connect DevStack to a remote Ceph cluster. With this, DevStack won't bootstrap its own Ceph cluster; it will simply connect to a remote one.
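A hedged sketch of the local.conf side of this, using the REMOTE_CEPH toggle from the DevStack Ceph integration (the key path is the usual default and may differ on your setup):

[[local|localrc]]
REMOTE_CEPH=True
REMOTE_CEPH_ADMIN_KEY_PATH=/etc/ceph/ceph.client.admin.keyring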
A common recommendation is to store the OSD journal on an SSD drive, which implies losing your OSD if that journal fails.
This article assumes that your OSDs were originally deployed with ceph-disk.
You will also realise that it’s really simple to bring your OSDs back to life after replacing your faulty SSD with a new one.
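To sketch the idea (the OSD id, partition and init commands are illustrative, and this assumes the journal can simply be rebuilt once the new SSD is partitioned):

$ sudo service ceph stop osd.1
$ sudo ceph-osd -i 1 --flush-journal                      # only if the old journal is still readable
$ sudo ln -sf /dev/disk/by-partuuid/<new-partition-uuid> /var/lib/ceph/osd/ceph-1/journal
$ sudo ceph-osd -i 1 --mkjournal                          # initialise the journal on the new partition
$ sudo service ceph start osd.1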
Under some strange circumstances, the leveldb monitor store can start taking up a substantial amount of space. Let's quickly see how we can work around that.
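A hedged example of the workaround (the monitor name mon.a is illustrative): compaction can be triggered on a running monitor, or enabled at every start via ceph.conf:

$ sudo ceph tell mon.a compact
# or, in ceph.conf under [mon], then restart the monitor:
#   mon compact on start = true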