This second edition of Ceph Day London looks really promising.
You should definitely have a look at the agenda! Talks range from OpenStack to deep performance studies, with some CephFS news along the way!
Check out the details on the Eventbrite page.
I hope to see you there! I don't have any talks this time, so for once I'll just be watching :-).
Read On...
A simple benchmark job to determine if your SSD is suitable to act as a journal device for your OSDs.
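The gist of such a test is to measure small synchronous sequential writes, since that is the I/O pattern of the OSD journal. A sketch of that kind of job with fio is shown below; the device name is a placeholder and the test writes to the raw device, so only run it on a disk you can wipe.

```bash
# Rough sketch of a journal-style test: 4K sequential writes with O_DIRECT
# and O_SYNC, queue depth 1. /dev/sdX is a placeholder -- this DESTROYS data.
fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting --name=ssd-journal-test
```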
Read On...
Almost two years ago, I wrote about the results of my experiments with Flashcache.
Today the blog features a guest post: this howto was written by Andrei Mikhailovsky.
Thanks to him for his contribution to this blog :-).
Read On...
Features for the seventh Ceph release (Giant) were frozen three weeks ago.
Thus Giant is just around the corner and bugs are currently being fixed.
This article is a quick preview of a new feature.
Read On...
Useful for understanding benchmark results and Ceph's second write penalty (every write goes to the journal first and then to the data store, so it hits the device twice; this phenomenon is explained here in section I.1).
Read On...
The use case is simple: I want to use both SSD disks and SATA disks within the same machine and ultimately create pools that map either to the SSDs or to the SATA disks.
In order to achieve this, we need to modify the CRUSH map.
My example has 2 SATA disks and 2 SSD disks on each host, with 3 hosts in total.
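The article works by editing the decompiled CRUSH map directly; purely as an illustration of the end result, the same separation can also be sketched from the CLI. All bucket, host and pool names below are made up.

```bash
# Illustrative only: this is the CLI variant of what the article does
# by editing the CRUSH map directly. Names and weights are examples.
ceph osd crush add-bucket ssd root
ceph osd crush add-bucket sata root

# One host bucket per disk type, moved under the matching root
ceph osd crush add-bucket ceph-node1-ssd host
ceph osd crush move ceph-node1-ssd root=ssd
# ... repeat for the SATA side and for the other two hosts ...

# Place each OSD under its host bucket (weight 1.0 as an example)
ceph osd crush set osd.0 1.0 root=ssd host=ceph-node1-ssd

# One simple rule per root, then a pool pinned to each rule
ceph osd crush rule create-simple ssd-rule ssd host
ceph osd crush rule create-simple sata-rule sata host
ceph osd pool create ssd-pool 128 128
ceph osd pool set ssd-pool crush_ruleset 3   # id reported by 'ceph osd crush rule dump ssd-rule'
```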
Read On...
A couple of months ago, Dan Mick posted a nice article introducing RBD support for iSCSI/TGT.
In this article, I take a closer look at it.
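For reference, exposing an RBD image through tgt boils down to a handful of tgtadm calls. The sketch below assumes a tgt build with the rbd backing store; the target IQN, pool and image names are placeholders.

```bash
# Assumes tgt was compiled with RBD support (bs-type "rbd").
# Target IQN, pool and image names are made up for illustration.
tgtadm --lld iscsi --mode target --op new --tid 1 \
       --targetname iqn.2014-04.com.example:rbd-demo
tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
       --bstype rbd --backing-store rbd/demo-image
tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL
```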
Read On...
Quick tip to enable dynamic subtree partitioning with multiple Ceph MDS servers.
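The tip itself is short: run additional MDS daemons and raise the number of active ones so the directory tree gets partitioned between them. A sketch of the relevant command for the releases of that time (the exact syntax has changed across Ceph versions, and the value 2 is just an example):

```bash
# Allow two active MDS daemons; any extra MDSs stay in standby.
ceph mds set_max_mds 2
```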
Read On...
Moving further along the Software Defined Storage principles, Ceph, with its latest stable version, introduces a new mechanism called cache pool tiering.
It brings a really interesting concept that will help us provide scalable distributed caching.
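To give an idea of what the mechanism looks like in practice, here is a hedged sketch of wiring a cache pool in front of a backing pool in writeback mode; the pool names and PG counts are illustrative, not taken from the article.

```bash
# Pool names and PG counts are made up for illustration.
ceph osd pool create cache-pool 128 128

# Attach the cache pool to the backing pool and let it absorb writes
ceph osd tier add cold-pool cache-pool
ceph osd tier cache-mode cache-pool writeback
ceph osd tier set-overlay cold-pool cache-pool

# The tiering agent needs a hit set to decide what to promote and flush
ceph osd pool set cache-pool hit_set_type bloom
```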
Read On...
A simple script to bootstrap a Ceph cluster and start playing with it.
The script heavily relies on:
The final machine will contain:
- 1 Monitor
- 3 OSDs
- 1 MDS
- 1 RADOS Gateway
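Once the script has run, a few standard commands are enough to confirm that everything listed above actually came up (assuming the default cluster name and configuration paths):

```bash
# Quick sanity check of the bootstrapped node (defaults assumed)
ceph -s           # overall health; should report 1 monitor and 3 OSDs up/in
ceph osd tree     # the 3 OSDs and where they sit in the CRUSH map
ceph mds stat     # the single MDS, ideally shown as active
pgrep -l radosgw  # the RADOS Gateway process
```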
Read On...