Active/Passive failover cluster for a MySQL Galera Cluster with HAProxy (custom OCF agent)
As promised, I will use a custom OCF agent to manage my HAProxy cluster.
In this article I assume:
- 2 nodes with HAProxy up and running
- Pacemaker has been installed on each node
- The 2 nodes are connected, but only one is configured with the failover resource (VIP)
- STONITH is disabled
- The quorum policy is disabled
- Auto failback after a node recovery is prevented, with resource stickiness configured like this:
$ sudo crm configure rsc_defaults resource-stickiness=100
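The failover resource (VIP) assumed above is not covered in this article; for reference, a typical VIP primitive can be sketched with the standard ocf:heartbeat:IPaddr2 agent. The IP address, netmask, and NIC below are placeholders, not values from my setup:

```shell
# Hypothetical VIP resource: adjust ip, cidr_netmask, and nic to your network
sudo crm configure primitive failover-ip ocf:heartbeat:IPaddr2 \
    params ip=192.168.0.100 cidr_netmask=24 nic=eth0 \
    op monitor interval=10s
```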
I will only describe the OCF agent usage.
HAProxy OCF agent setup
I’m using the useful GitHub project from Russki:
$ sudo wget -O /usr/lib/ocf/resource.d/heartbeat/haproxy http://github.com/russki/cluster-agents/raw/master/haproxy
Change the file permissions:
$ sudo chmod 755 /usr/lib/ocf/resource.d/heartbeat/haproxy
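Before configuring the resource, it can be worth checking that Pacemaker actually sees the new agent; if the metadata prints without errors, the agent is correctly installed:

```shell
# Print the agent's metadata; it should list the conffile parameter, among others
sudo crm ra info ocf:heartbeat:haproxy
```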
Following the GitHub project's documentation, configure your resource like this:
$ sudo crm configure primitive haproxy ocf:heartbeat:haproxy params conffile=/etc/haproxy/haproxy.cfg op monitor interval=10sec
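You can double-check the primitive you just created by asking crmsh to display it from the CIB (the resource id `haproxy` matches the command above):

```shell
# Show the haproxy primitive as stored in the cluster configuration
sudo crm configure show haproxy
```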
Be sure that the failover resource and HAProxy will stay on the same node:
$ sudo crm configure colocation haproxy-with-failover-ip inf: haproxy failover-ip
Be sure that HAProxy will start after the failover resource:
$ sudo crm configure order haproxy-after-failover-ip mandatory: failover-ip haproxy
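As an alternative to the two separate constraints, a resource group achieves both at once: a group colocates its members on the same node and starts them in listed order. This is a sketch of that variant, not part of the original setup (the group name is an assumption):

```shell
# Equivalent to colocation + order: failover-ip starts first, haproxy follows on the same node
sudo crm configure group haproxy-group failover-ip haproxy
```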
Using crm_mon, you should see an output like this:
ubuntu@haproxy-node01:~$ sudo crm_mon -1
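To test the failover, you can put the active node in standby and watch both resources move to the other node. The node names below are assumptions based on the prompt shown above:

```shell
# Drain haproxy-node01; failover-ip and haproxy should migrate to the other node
sudo crm node standby haproxy-node01
sudo crm_mon -1

# Bring the node back online; thanks to resource-stickiness, the resources
# stay where they are instead of failing back automatically
sudo crm node online haproxy-node01
```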
Enjoy!