Ceph: monitor store taking up a lot of space
Under some strange circumstances, the LevelDB monitor store can start taking up a substantial amount of space. Let's quickly see how to work around that.
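As a minimal sketch of the workaround, the commands below check the store size and trigger a compaction; the monitor ID (`mon.0`) and the default store path are assumptions, adjust them to your cluster.

```bash
# Check how much space each monitor's store currently uses
# (default store location assumed).
du -sh /var/lib/ceph/mon/*/store.db

# Ask a running monitor to compact its LevelDB store on the fly
# (replace "0" with your monitor ID).
ceph tell mon.0 compact

# Alternatively, compact the store every time the monitor starts by adding
# this to the [mon] section of ceph.conf:
#   mon compact on start = true
```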
This second edition of the CephDay London looks really promising. You should definitely look at the agenda! Talks range from OpenStack to deep performance studies, with CephFS news along the way!
Check this out on the Eventbrite page.
I hope to see you there! I'm not giving any talks, so for once I'll just be watching :-).
A simple benchmark job to determine if your SSD is suitable to act as a journal device for your OSDs.
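As a sketch of such a benchmark job: the Ceph journal workload is small sequential synchronous writes, so the fio run below tests 4k direct, synchronous writes at queue depth 1. `/dev/sdX` is a placeholder for the SSD under test, and the run destroys any data on it.

```bash
# WARNING: this writes directly to the device and destroys its data.
# /dev/sdX is a placeholder for the SSD you want to evaluate.
fio --name=ssd-journal-test \
    --filename=/dev/sdX \
    --direct=1 --sync=1 \
    --rw=write --bs=4k \
    --iodepth=1 --numjobs=1 \
    --runtime=60 --time_based \
    --group_reporting
```

A drive that sustains a decent number of these synchronous 4k writes is a reasonable journal candidate; consumer SSDs often collapse under this pattern even when their datasheet sequential numbers look great.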
Almost two years ago, I wrote up the results of my experiments with Flashcache. Today's post is a featured one: this howto was written by Andrei Mikhailovsky. Thanks for his contribution to this blog :-).
Features for the seventh Ceph release (Giant) were frozen three weeks ago, so Giant is just around the corner and bugs are currently being fixed. This article is a quick preview of a new feature.
Useful for understanding benchmark results and Ceph's second write penalty (this phenomenon is explained here in section I.1).
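A back-of-the-envelope illustration of that penalty, with made-up numbers: each client write is written twice per replica (once to the journal, once to the data partition), so when journals live on the same disks the aggregate raw bandwidth is divided by two and then by the replica count.

```bash
# Hypothetical numbers, purely for illustration.
DISK_MBPS=110   # raw sequential write throughput of one OSD disk
OSDS=6          # number of OSDs in the cluster
REPLICAS=3      # pool replication size
# Journal double write + replication:
echo "Expected client throughput: $(( DISK_MBPS * OSDS / 2 / REPLICAS )) MB/s"
```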
The use case is simple: I want to use both SSD and SATA disks within the same machine and ultimately create pools that map to either the SSD or the SATA disks. To achieve this, we need to modify the CRUSH map. My example has 2 SATA disks and 2 SSD disks on each host, with 3 hosts in total.
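A sketch of the overall workflow, assuming pre-Luminous syntax (this is a Giant-era setup) and placeholder names for the roots, pools, PG counts and rule IDs:

```bash
# Dump the current CRUSH map and decompile it to a text file.
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# Edit crushmap.txt: declare two roots (e.g. "ssd" and "sata"), put the SSD
# OSDs under one and the SATA OSDs under the other, and add one rule per root.

# Recompile and inject the modified map.
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new

# Create a pool on each tier and point it at the matching rule.
ceph osd pool create ssd-pool 128 128
ceph osd pool set ssd-pool crush_ruleset 1
ceph osd pool create sata-pool 128 128
ceph osd pool set sata-pool crush_ruleset 2
```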
A couple of months ago, Dan Mick posted a nice article introducing RBD support for iSCSI / TGT. In this article, I take a look at it.
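To give an idea of what this looks like, here is a rough sketch assuming tgt was built with the rbd backing store; the IQN, pool and image names are placeholders.

```bash
# Create an iSCSI target (the IQN is a placeholder).
tgtadm --lld iscsi --mode target --op new --tid 1 \
       --targetname iqn.2014-09.com.example:rbd-iscsi

# Attach an RBD image as LUN 1 using tgt's rbd backing store
# ("rbd/myimage" is a placeholder for your pool/image).
tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
       --backing-store rbd/myimage --bstype rbd

# Allow initiators to connect (open to everyone here; restrict in production).
tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL

# Verify the target configuration.
tgtadm --lld iscsi --mode target --op show
```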
Quick tip to enable dynamic subtree partitioning with multiple Ceph MDS servers.
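The gist of it, using the Giant-era command syntax (newer releases use `ceph fs set <fs> max_mds <n>` instead):

```bash
# Allow two active MDS daemons; the cluster then splits the directory tree
# between them (dynamic subtree partitioning).
ceph mds set_max_mds 2

# Check that both MDS daemons show up as active.
ceph mds stat
```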
Ceph has had an admin API for quite some time. This article demonstrates it and gives some hints on monitoring Ceph.
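One common way to poll Ceph for monitoring is through the daemons' admin sockets; here is a small sketch, assuming default socket paths and an OSD named `osd.0`:

```bash
# Query a daemon through its admin socket (default path, adjust if needed).
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump

# Same data with the shorter "daemon" form, run on the host hosting the daemon.
ceph daemon osd.0 perf dump

# Cluster-wide status in JSON, convenient to feed into a monitoring system.
ceph status --format json-pretty
```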