Bootstrap two Ceph clusters and configure RBD mirroring using Ceph Ansible
Since Jewel is out, everyone wants to try the new RBD mirroring feature.
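As a quick taste, once both clusters are up and the rbd-mirror daemon is running, enabling pool-level mirroring boils down to a few rbd commands. A minimal sketch, assuming two clusters named local and remote, a pool named rbd, and the admin keyrings available on both sides:

    # enable pool-level mirroring on both sides (pool name "rbd" assumed)
    rbd --cluster local mirror pool enable rbd pool
    rbd --cluster remote mirror pool enable rbd pool
    # register each cluster as a peer of the other
    rbd --cluster local mirror pool peer add rbd client.admin@remote
    rbd --cluster remote mirror pool peer add rbd client.admin@local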
As presented in my preview of BlueStore, this new store can be configured with multiple devices.
Since the ceph-disk utility does not support configuring multiple devices, OSDs must be configured manually.
Let’s see how we can configure this.
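To give an idea of what the manual setup looks like, here is a minimal ceph.conf sketch for one OSD; the BlueStore option names are the Jewel-era experimental settings, and the device paths are assumptions for illustration:

    [global]
    # BlueStore is still experimental in Jewel
    enable experimental unrecoverable data corrupting features = bluestore rocksdb

    [osd.0]
    osd objectstore = bluestore
    bluestore block path = /dev/sdb       # main data device (assumed)
    bluestore block db path = /dev/sdc1   # RocksDB metadata (assumed)
    bluestore block wal path = /dev/sdc2  # RocksDB write-ahead log (assumed)

With that in place, the OSD is created by hand with ceph osd create followed by ceph-osd -i 0 --mkfs --mkkey.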
Another feature preview for Jewel: an NBD driver for RBD that allows librbd to present a kernel-level block device.
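Usage is straightforward. A minimal sketch, assuming a pool named rbd and an image named foo:

    sudo rbd-nbd map rbd/foo      # maps through librbd, prints the device, e.g. /dev/nbd0
    rbd-nbd list-mapped           # show current mappings
    sudo rbd-nbd unmap /dev/nbd0  # release the device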
The RBD mirroring feature will be available for the next stable Ceph release: Jewel.
A new way to efficiently store objects using BlueStore.
Get the modification time of an RBD image.
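The rbd CLI does not expose this directly in Jewel, so one trick is to stat the image's backing RADOS objects and keep the most recent mtime. A sketch, assuming a pool named rbd, an image named foo, and the usual rados stat output layout:

    # find the image's object name prefix
    prefix=$(rbd info rbd/foo | awk '/block_name_prefix/ {print $2}')
    # stat every backing object and keep the latest mtime (ISO dates sort lexically)
    rados -p rbd ls | grep "$prefix" | while read -r obj; do
        rados -p rbd stat "$obj" | awk '{print $3, $4}'
    done | sort | tail -1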
Following last week's article, here is another CRUSH example. This time we want to store the first replica on SSD drives and the second copy on SATA drives.
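In CRUSH map terms this means taking the first replica from one root and the remaining ones from another. A sketch, assuming the map already defines roots named ssd and sata:

    rule ssd_primary {
        ruleset 4
        type replicated
        min_size 1
        max_size 10
        # first replica from the SSD tree
        step take ssd
        step chooseleaf firstn 1 type host
        step emit
        # remaining replicas from the SATA tree
        step take sata
        step chooseleaf firstn -1 type host
        step emit
    }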
A quick CRUSH example on how to store three replicas: two in rack 1 and the third in rack 2.
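A sketch of such a rule, assuming rack buckets named rack1 and rack2 exist in the CRUSH map; compile and inject it with crushtool and ceph osd setcrushmap:

    rule replicated_racks {
        ruleset 5
        type replicated
        min_size 2
        max_size 3
        # two replicas in rack1, on separate hosts
        step take rack1
        step chooseleaf firstn 2 type host
        step emit
        # the third replica in rack2
        step take rack2
        step chooseleaf firstn -1 type host
        step emit
    }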
Removing an OSD, if not done properly, can result in double rebalancing. The best practice is to first set the OSD's CRUSH weight to 0.0.
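A sketch of the full sequence for osd.3 (the id is just an example); draining via the CRUSH weight first means marking the OSD out later moves no data a second time:

    ceph osd crush reweight osd.3 0.0   # drain the OSD, wait for HEALTH_OK
    ceph osd out 3                      # no extra data movement at this point
    sudo systemctl stop ceph-osd@3      # stop the daemon
    ceph osd crush remove osd.3         # remove it from the CRUSH map
    ceph auth del osd.3                 # remove its authentication key
    ceph osd rm 3                       # finally remove the OSD itself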
Ceph support just moved out of the DevStack tree to comply with DevStack's new plugin policy. The code can be found on GitHub. We are now on OpenStack Gerrit as well, which brings all the good things from the OpenStack infra, including CI.
To use it, simply create a localrc file with the following:
enable_plugin ceph https://github.com/openstack/devstack-plugin-ceph
A more complete localrc file can be found on GitHub.
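For reference, a minimal localrc sketch; apart from the enable_plugin line, everything here is just a standard DevStack setting shown for context:

    ADMIN_PASSWORD=secret
    DATABASE_PASSWORD=$ADMIN_PASSWORD
    RABBIT_PASSWORD=$ADMIN_PASSWORD
    SERVICE_PASSWORD=$ADMIN_PASSWORD
    # pull in the Ceph plugin
    enable_plugin ceph https://github.com/openstack/devstack-plugin-ceph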