Ceph: managing CRUSH with the CLI
Getting more familiar with the Ceph CLI with CRUSH.
For this exercise, I am going to:
- Set up two new racks in my existing infrastructure
- Move my current servers into them
- Create a new CRUSH rule that uses both racks
Let’s start by creating two new racks:
$ ceph osd crush add-bucket rack1 rack
$ ceph osd crush add-bucket rack2 rack
As you can see, the racks are empty (and this is normal):
$ ceph osd tree
Now we assign each host to a specific rack:
$ ceph osd crush move test1 rack=rack1
We move both racks into the default root:
$ ceph osd crush move rack1 root=default
$ ceph osd crush move rack2 root=default
Check the final result:
$ ceph osd tree
Then create a new rule for this placement:
$ ceph osd crush rule create-simple racky default rack
{ "rule_id": 3,
  "rule_name": "racky",
  "ruleset": 3,
  "type": 1,
  "min_size": 1,
  "max_size": 10,
  "steps": [
        { "op": "take",
          "item": -1},
        { "op": "chooseleaf_firstn",
          "num": 0,
          "type": "rack"},
        { "op": "emit"}]}
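The steps in this rule read as: take the default root (item -1), choose one leaf (OSD) under each distinct rack, then emit the result. Here is a toy sketch of the placement constraint this encodes; it is deliberately simplified (real CRUSH selects leaves with deterministic hashing, and the second host name below is hypothetical):

```python
# Toy illustration of "take -> chooseleaf_firstn -> emit" across racks.
# NOT the real CRUSH algorithm; it only shows the constraint the rule
# expresses: at most one replica per rack.

# Hypothetical hierarchy mirroring the tree built above
# (test2 and the osd names are made up for illustration).
tree = {
    "default": {                                  # "take" starts here
        "rack1": {"test1": ["osd.0", "osd.1"]},
        "rack2": {"test2": ["osd.2", "osd.3"]},
    }
}

def chooseleaf_firstn(root, num_replicas):
    """Pick one leaf (OSD) under each distinct rack, up to num_replicas."""
    acting = []
    for rack, hosts in root.items():
        if len(acting) == num_replicas:
            break
        # descend to the first host in the rack, take its first OSD
        first_host = next(iter(hosts.values()))
        acting.append(first_host[0])
    return acting  # "emit"

print(chooseleaf_firstn(tree["default"], num_replicas=2))
# Each replica lands in a different rack.
```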
Finally, you can assign a pool to this ruleset:
$ ceph osd pool set rbd crush_ruleset 3
Ceph’s CLI is getting more and more powerful. It is good to see that we no longer need to download the CRUSH map, edit it manually, and re-commit it :).
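For comparison, the manual workflow the CLI now spares us looks like this (it needs a live cluster, so treat it as a sketch):

```
# Dump the binary CRUSH map from the cluster and decompile it
$ ceph osd getcrushmap -o crushmap.bin
$ crushtool -d crushmap.bin -o crushmap.txt
# ... edit crushmap.txt by hand ...
# Recompile the map and inject it back into the cluster
$ crushtool -c crushmap.txt -o crushmap.new
$ ceph osd setcrushmap -i crushmap.new
```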