When you manage a large cluster, you do not always know where your OSDs are located.
Sometimes you have issues with PGs, such as unclean ones,
or with OSDs, such as slow requests.
When you look at your ceph health detail,
you only see which OSDs the PGs are acting on or which OSDs have slow requests.
Given that you might have tons of OSDs spread across a lot of nodes, it is not straightforward to find and restart them.
Read On...
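A minimal sketch of how to track them down, assuming osd.32 is the OSD you are chasing (the id is just a placeholder):

    # List the stuck PGs together with their acting OSDs
    $ ceph pg dump_stuck unclean
    # Ask the cluster where a given OSD lives (host and CRUSH location)
    $ ceph osd find 32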
Infernalis was released a couple of weeks ago, and I have to admit that I am really impressed by the work that has been done.
So I am going to present five really handy things that came with this new release.
Read On...
Quick tip to release the memory that tcmalloc has allocated but is not being used by the Ceph daemon itself.
Read On...
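For reference, this goes through the tcmalloc heap commands exposed by the daemons; a minimal sketch (osd.0 is just an example id):

    # Ask a single OSD to hand unused tcmalloc memory back to the OS
    $ ceph tell osd.0 heap release
    # Or do it for every OSD at once
    $ ceph tell 'osd.*' heap release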
This article simply relays a recent discovery made around Ceph performance.
The finding behind this story is one of the biggest improvements in Ceph performance seen in years.
So I will just highlight and summarize the study in case you do not want to read it in its entirety.
Read On...
Quick and simple test to validate whether the RBD cache is enabled on your client.
Read On...
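One quick way, assuming your client exposes an admin socket (the socket path below is the default client.admin one and may differ on your setup):

    # Dump the running configuration through the client admin socket
    # and look at the rbd cache settings
    $ ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok config show | grep rbd_cache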
Quick tip to determine the location of a file stored on CephFS.
Read On...
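A minimal sketch of the idea, assuming the file lives on a data pool called cephfs_data (the path and pool name are placeholders):

    # Grab the file's inode number and print it in hexadecimal
    $ printf '%x\n' $(stat -c %i /mnt/cephfs/myfile)
    # The file's first object is named <hex-inode>.00000000;
    # map it to its PG and acting OSDs
    $ ceph osd map cephfs_data <hex-inode>.00000000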
Using SSD drives in some parts of your cluster might be useful,
especially under read-oriented workloads.
Ceph has a mechanism called primary affinity, which allows you to put a higher affinity on some of your OSDs so they are more likely to be primary for some PGs.
The idea is to have reads served by the SSDs so clients get faster reads.
Read On...
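A minimal sketch of the commands involved (the OSD ids and values are placeholders; on older releases the monitors must first be told to allow primary affinity):

    # Allow primary affinity to be tuned (needed on older releases)
    $ ceph tell 'mon.*' injectargs '--mon-osd-allow-primary-affinity=true'
    # Make a spinning OSD less likely to be primary (osd.2 is an example)
    $ ceph osd primary-affinity osd.2 0.5
    # Keep full primary affinity on an SSD-backed OSD
    $ ceph osd primary-affinity osd.0 1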
The title is probably weird and misleading, but I could not find a better one :).
The idea here is to dive a little bit into what the kernel client sees for each client that has an RBD device mapped.
In this article, we are focusing on the kernel RBD client.
Read On...
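As a starting point, the kernel client exposes each mapped device under sysfs; a minimal sketch (device id 0 is just an example):

    # List the RBD devices mapped on this client
    $ rbd showmapped
    # Inspect what the kernel knows about a mapped device
    $ cat /sys/bus/rbd/devices/0/name
    $ cat /sys/bus/rbd/devices/0/pool
    $ cat /sys/bus/rbd/devices/0/current_snap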