How To Set Up A Load-Balanced MySQL Cluster -- Part 1
Version 1.0 Author: Falko Timme Last edited 03/27/2006
This tutorial shows how to configure a MySQL 5 cluster with three nodes: two storage nodes and one management node. The cluster is load-balanced by a high-availability load balancer that itself consists of two nodes; these run the Ultra Monkey package, which provides heartbeat (for checking whether the other node is still alive) and ldirectord (for distributing requests across the nodes of the MySQL cluster).
In this document I use Debian Sarge for all nodes, so the setup might differ a bit for other distributions. The MySQL version I use in this setup is 5.0.19. If you do not want to use MySQL 5, MySQL 4.1 should work as well, although I haven't tested it.
This howto is meant as a practical guide; it does not cover the theoretical background, which is treated in plenty of other documents on the web.
This document comes without warranty of any kind! This is not the only way of setting up such a system; there are many ways of achieving this goal, but this is the way I take. I do not guarantee that it will work for you!
1 My Servers
I use the following Debian servers, which are all in the same network (192.168.0.x in this example):

sql1.example.com: 192.168.0.101 - MySQL cluster node 1
sql2.example.com: 192.168.0.102 - MySQL cluster node 2
loadb1.example.com: 192.168.0.103 - Load Balancer 1 / MySQL cluster management server
loadb2.example.com: 192.168.0.104 - Load Balancer 2
In addition to that we need a virtual IP address: 192.168.0.105. It will be assigned to the MySQL cluster by the load balancer so that applications have a single IP address to access the cluster.
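Name resolution is optional here (the tutorial uses raw IP addresses throughout), but if you want the hostnames above to resolve without DNS, you can add them to /etc/hosts on each machine. A sketch follows; it appends the entries to a scratch copy created with mktemp so it is safe to run anywhere, whereas on a real machine you would edit /etc/hosts itself as root:

```shell
# Append the cluster hosts to a scratch copy of /etc/hosts.
# On a real machine the target would be /etc/hosts (edited as root).
hosts=$(mktemp)
cat >> "$hosts" << 'EOF'
192.168.0.101   sql1.example.com    sql1
192.168.0.102   sql2.example.com    sql2
192.168.0.103   loadb1.example.com  loadb1
192.168.0.104   loadb2.example.com  loadb2
EOF
```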
Although we want to have two MySQL cluster nodes in our MySQL cluster, we still need a third node, the MySQL cluster management server, mainly for one reason: if one of the two cluster nodes fails while the management server is not running, the data on the two cluster nodes can become inconsistent ("split brain"). We also need it for configuring the MySQL cluster.
So normally we would need five machines for our setup:
2 MySQL cluster nodes + 1 cluster management server + 2 Load Balancers = 5
As the MySQL cluster management server does not use many resources, and the system would just sit there doing nothing, we can put our first load balancer on the same machine, which saves us one machine, so we end up with four machines.
2 Set Up The MySQL Cluster Management Server
First we have to download MySQL 5.0.19 (the max version!) and install the cluster management server (ndb_mgmd) and the cluster management client (ndb_mgm - it can be used to monitor what's going on in the cluster). The following steps are carried out on loadb1.example.com (192.168.0.103):
loadb1.example.com:
mkdir /usr/src/mysql-mgm
cd /usr/src/mysql-mgm
wget http://dev.mysql.com/get/Downloads/MySQL-5.0/mysql-max-5.0.19-linux-i686-glibc23.tar.gz/from/http://www.mirrorservice.org/sites/ftp.mysql.com/
tar xvfz mysql-max-5.0.19-linux-i686-glibc23.tar.gz
cd mysql-max-5.0.19-linux-i686-glibc23
mv bin/ndb_mgm /usr/bin
mv bin/ndb_mgmd /usr/bin
chmod 755 /usr/bin/ndb_mg*
cd /usr/src
rm -rf /usr/src/mysql-mgm
Next, we must create the cluster configuration file, /var/lib/mysql-cluster/config.ini:
loadb1.example.com:
mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
vi config.ini
[NDBD DEFAULT]
NoOfReplicas=2
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Section for the cluster management node
[NDB_MGMD]
# IP address of the management node (this system)
HostName=192.168.0.103
# Section for the storage nodes
[NDBD]
# IP address of the first storage node
HostName=192.168.0.101
DataDir=/var/lib/mysql-cluster
[NDBD]
# IP address of the second storage node
HostName=192.168.0.102
DataDir=/var/lib/mysql-cluster
# one [MYSQLD] per storage node
[MYSQLD]
[MYSQLD]
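If you prefer not to type the file into vi by hand, the same config.ini can be written non-interactively with a heredoc (comments omitted for brevity). The sketch below writes to a scratch directory created with mktemp so it can run anywhere; on loadb1 the target directory is /var/lib/mysql-cluster:

```shell
# Write config.ini via a heredoc instead of vi.
# On the real system, $dir would be /var/lib/mysql-cluster.
dir=$(mktemp -d)
cat > "$dir/config.ini" << 'EOF'
[NDBD DEFAULT]
NoOfReplicas=2
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
[NDB_MGMD]
HostName=192.168.0.103
[NDBD]
HostName=192.168.0.101
DataDir=/var/lib/mysql-cluster
[NDBD]
HostName=192.168.0.102
DataDir=/var/lib/mysql-cluster
[MYSQLD]
[MYSQLD]
EOF
```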
Please replace the IP addresses in the file appropriately.
Then we start the cluster management server:
loadb1.example.com:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini
It makes sense to automatically start the management server at system boot time, so we create a very simple init script and the appropriate startup links:
loadb1.example.com:
echo 'ndb_mgmd -f /var/lib/mysql-cluster/config.ini' > /etc/init.d/ndb_mgmd
chmod 755 /etc/init.d/ndb_mgmd
update-rc.d ndb_mgmd defaults
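The one-line init script above is enough for boot-time startup, but a variant with start/stop cases is a little friendlier to work with. This is only a sketch; it writes the script to a scratch directory created with mktemp so the snippet can run anywhere, whereas on loadb1 the target path is /etc/init.d/ndb_mgmd (and `killall` as the stop mechanism is an assumption, not something the tutorial prescribes):

```shell
# Write a minimal start/stop init script for ndb_mgmd.
# On the real system, $dir/ndb_mgmd would be /etc/init.d/ndb_mgmd.
dir=$(mktemp -d)
cat > "$dir/ndb_mgmd" << 'EOF'
#!/bin/sh
# Minimal init script for the MySQL cluster management daemon
case "$1" in
  start)
    ndb_mgmd -f /var/lib/mysql-cluster/config.ini
    ;;
  stop)
    killall ndb_mgmd
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
EOF
chmod 755 "$dir/ndb_mgmd"
```

After installing it to /etc/init.d/ndb_mgmd, the startup links would be created with `update-rc.d ndb_mgmd defaults` exactly as above.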