RBD image bigger than your Ceph cluster
Some experiment with gigantic overprovisioned RBD images.
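As a rough illustration of what such an experiment looks like, here is a minimal sketch that creates a thinly provisioned RBD image far larger than the cluster's raw capacity, using the python-rbd bindings. The pool name, image name, and size are placeholders, not values from the original post.

```python
# Sketch: create a hugely overprovisioned RBD image (assumes python-rados
# and python-rbd are installed and /etc/ceph/ceph.conf is readable).
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')   # hypothetical pool name
    try:
        # 1 PiB image: RBD is thin provisioned, so no space is consumed
        # until data is actually written to the image.
        rbd.RBD().create(ioctx, 'huge-image', 1024 ** 5)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```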
Some figures from a RADOS bench.
The Ceph developer summit is already behind us and wow! So many good things are around the corner! During this online event, we discussed the future of the Firefly release (planned for February 2014). During the last OpenStack summit in Hong Kong, I had the opportunity to discuss a new feature with Sage that might go into Firefly. This was obviously discussed during the CDS too. His plan is to add multi-backend functionality to the filestore. And trust me, this will definitely bring Ceph to another level.
Memory leaks are gone and CPU load has dropped dramatically. Yay!
Curious? Wanna know who has an RBD device mapped?
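One way to answer that question is to list the watchers registered on the image's header object, since a client that maps an image keeps a watch on it. The sketch below assumes a format 1 image (whose header object is named `<image>.rbd`); the pool and image names are placeholders.

```python
# Sketch: show which clients are watching an RBD image header object,
# which reveals who currently has the image mapped.
import subprocess

pool, image = 'rbd', 'myimage'            # hypothetical names
header_object = '{0}.rbd'.format(image)   # format 1 image header object

watchers = subprocess.check_output(
    ['rados', '-p', pool, 'listwatchers', header_object])
print(watchers.decode())
```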
Quick how-to on mapping/unmapping an RBD device during startup and shutdown.
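For flavour, here is a minimal sketch of the map/unmap step itself, wrapping the `rbd map` and `rbd unmap` commands from Python; the pool and image names are placeholders, and the actual post covers wiring this into the boot sequence.

```python
# Sketch: map an RBD image at startup and unmap it at shutdown by calling
# the 'rbd map' / 'rbd unmap' CLI.
import subprocess
import sys

POOL, IMAGE = 'rbd', 'myimage'   # hypothetical names

def start():
    # 'rbd map' prints the block device it attached, e.g. /dev/rbd0
    device = subprocess.check_output(
        ['rbd', 'map', '{0}/{1}'.format(POOL, IMAGE)]).strip()
    print('mapped {0}/{1} on {2}'.format(POOL, IMAGE, device.decode()))

def stop():
    # udev also creates a /dev/rbd/<pool>/<image> symlink we can unmap by name
    subprocess.check_call(
        ['rbd', 'unmap', '/dev/rbd/{0}/{1}'.format(POOL, IMAGE)])

if __name__ == '__main__':
    start() if sys.argv[1:] == ['start'] else stop()
```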
Quick script to evaluate the placement of the objects contained in a RBD image.
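A rough sketch of the idea, under the assumption of a format 1 image whose data objects share the `block_name_prefix` reported by `rbd info`: list the image's objects and ask the cluster which PG and OSDs each one maps to. The pool and image names are placeholders.

```python
# Sketch: for every RADOS object backing an RBD image, print the PG and
# acting set of OSDs it maps to.
import json
import subprocess

pool, image = 'rbd', 'myimage'   # hypothetical names

# All data objects of the image start with this prefix.
info = json.loads(subprocess.check_output(
    ['rbd', 'info', '--format', 'json', '{0}/{1}'.format(pool, image)]))
prefix = info['block_name_prefix']

objects = subprocess.check_output(['rados', '-p', pool, 'ls']).decode().split()
for obj in objects:
    if obj.startswith(prefix):
        # 'ceph osd map' reports the placement group and the OSDs serving it.
        print(subprocess.check_output(
            ['ceph', 'osd', 'map', pool, obj]).decode().strip())
```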
The goal of this little analysis was to determine the overhead generated by Ceph. Another important point was to estimate the deviation Ceph introduces between raw disk IOs and Ceph IOs.
A quick post to tell everybody that I’ll be giving a presentation at the Ceph day in London. Obviously, I’ll be representing the company I work for, eNovance. The event is next Wednesday, October 9, 2013. I am going to talk about Ceph, performance and benchmarking. You can check the details and schedule of the event here. See you there!
Update: slides available through my employer’s blog (eNovance).
Today the CoreOS team released its first OpenStack image. Let’s quickly see how we can take advantage of it.