I will explain how to set up 3 servers, with the following roles:
server-a = management node (IP 10.1.0.10)
server-b = storage and SQL node (IP 10.1.0.11)
server-c = storage and SQL node (IP 10.1.0.12)
You can run the storage and SQL nodes on separate servers, but to keep this guide easy to follow (and so we don’t need 5 machines), I’ll combine those roles and use 3 servers.
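If you find names easier to work with than raw IPs, you can optionally add the three machines to /etc/hosts on each server. This is purely a convenience and isn’t required by anything later in the guide; the names below are just the labels used above:
10.1.0.10 server-a
10.1.0.11 server-b
10.1.0.12 server-c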
All servers will need to have mysql-server installed. If you need help doing this, have a look at
this guide.
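On Debian or Ubuntu this normally just means installing the package with apt, along these lines (exact package names vary between releases, so make sure the build you install includes the NDB cluster binaries such as ndbd, ndb_mgmd and ndb_mgm):
apt-get update
apt-get install mysql-server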
Setting up the management node (10.1.0.10)
Default Debian/Ubuntu installs look for /etc/mysql/ndb_mgmd.cnf for the management node configuration, so we need to create this file with the following contents:
[NDBD DEFAULT]
NoOfReplicas=2
DataDir=/var/lib/mysql-cluster
# Management Node
[NDB_MGMD]
HostName=10.1.0.10
DataDir=/var/lib/mysql-cluster
# Storage Nodes (one for each node)
[NDBD]
HostName=10.1.0.11
DataDir=/var/lib/mysql-cluster
[NDBD]
HostName=10.1.0.12
DataDir=/var/lib/mysql-cluster
# SQL Nodes (one for each node)
[MYSQLD]
HostName=10.1.0.11
[MYSQLD]
HostName=10.1.0.12
The line beginning NoOfReplicas tells the cluster how many copies of the data should be kept. With NoOfReplicas=2 and two storage nodes, every piece of data is held on both nodes, so the cluster can survive the loss of either one.
Now we start the management node, and it will sit waiting for connections from the storage and SQL nodes:
/etc/init.d/mysql-ndb-mgm start
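Before moving on, you can check that the management daemon is actually up and listening by asking it for the cluster status from the shell; at this point it should list the two storage and two SQL node slots as not yet connected:
ndb_mgm -e show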
Setting up the SQL and data nodes (10.1.0.11 and 10.1.0.12)
All we need to do here is make a few changes to /etc/mysql/my.cnf (the MySQL config file).
First, add the following 2 lines (using the IP of your management node) inside the [mysqld] section of the config file:
ndbcluster
ndb-connectstring=10.1.0.10
And near the bottom of the file there is a section for [MYSQL_CLUSTER], which you will need to uncomment, changing its ndb-connectstring line to your management node’s IP.
If you are using separate data and SQL nodes, the [mysqld] part is relevant to the SQL nodes only, and the [MYSQL_CLUSTER] part is relevant to the data nodes only.
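Putting that together, the cluster-related parts of /etc/mysql/my.cnf on 10.1.0.11 and 10.1.0.12 should end up looking roughly like this (only the lines covered above are shown; leave the rest of your my.cnf as it is):
[mysqld]
ndbcluster
ndb-connectstring=10.1.0.10
[MYSQL_CLUSTER]
ndb-connectstring=10.1.0.10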
Before we start the services, we have to create the /var/lib/mysql-cluster directory and set it to be owned by the mysql user:
mkdir /var/lib/mysql-cluster
chown mysql:mysql /var/lib/mysql-cluster
Now we need to start the node services:
/etc/init.d/mysql restart
(which starts the SQL node)
/etc/init.d/mysql-ndb restart
(which starts the data node)
By now, everything should be running, so we connect to the management node (by running ndb_mgm from the command line) and check that the other nodes have connected properly (using the show command):
ndb_mgm> show;
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @10.1.0.11 (Version: 5.0.51, Nodegroup: 0)
id=3 @10.1.0.12 (Version: 5.0.51, Nodegroup: 0, Master)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @10.1.0.10 (Version: 5.0.51)
[mysqld(API)] 2 node(s)
id=4 @10.1.0.11 (Version: 5.0.51)
id=5 @10.1.0.12 (Version: 5.0.51)
This shows that everything has connected properly. If connections are missing, then it’s worth checking /var/log/syslog on the affected server to see if there are any error messages.
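For example, on the affected node something like the following will pull out recent cluster-related messages (the exact log location and wording depend on your distribution):
grep -i ndb /var/log/syslog | tail -n 20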
Using the cluster
It’s worth noting that any databases already on the servers will continue to work as before. Tables only become part of the cluster when their engine type is changed to ‘NDBCLUSTER’ by issuing this command (from a mysql prompt):
alter table tablename engine=NDBCLUSTER;
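You can confirm the change took effect by checking the table’s engine afterwards; the Engine column should now show NDBCLUSTER (or ndbcluster, depending on your version):
show table status like 'tablename';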
But for now we will create a new database and clustered table, and test that the data is clustered. The cluster setup applies to tables only, not databases, so we first need to create the database on both SQL nodes:
create database cluster;
Now, when we create a table inside the cluster database, as long as its engine is NDBCLUSTER, the data will be synced across the data nodes, which we can test by doing the following (from a mysql prompt on either SQL node):
create table clustertest (i int) engine=NDBCLUSTER;
insert into clustertest values (1);
select i from clustertest;
This should return a single row with the value 1. Now connect to the other SQL node and try…
insert into clustertest values (2);
select i from clustertest;
This should return both rows, and it will do so whichever SQL node you connect to, as the table is now stored in the cluster.
If you receive an error like:
ERROR 1015 (HY000): Can't lock file (errno: 4009)
Then it is likely that some of your cluster nodes haven’t started correctly, so it’s worth checking the management interface again.
Shutting down and restarting the cluster
To shut down the data nodes and the management node, all you need to do is enter the command ‘shutdown’ in the management interface.
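If you’d rather not open the interactive prompt, the same thing can be done in one go from the shell on the management node:
ndb_mgm -e shutdown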
To restart, simply run
/etc/init.d/mysql-ndb-mgm start
On the management node, and
/etc/init.d/mysql-ndb start
On the data nodes. The SQL nodes continue running, and can be stopped/started using the standard mysql init script.
That’s it: you should now have a working MySQL cluster. As you have NoOfReplicas=2 in the management config, you should be able to unplug either data node at any time and still have access to all of the clustered tables.
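If you want to convince yourself of that, a quick (and entirely optional) test is to stop the data node on one server and check that the clustered table is still readable from the other, roughly like this:
/etc/init.d/mysql-ndb stop
(on server-b, 10.1.0.11)
select i from cluster.clustertest;
(from a mysql prompt on server-c, 10.1.0.12; this should still return both rows)
/etc/init.d/mysql-ndb start
(to bring the stopped data node back into the cluster)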