Previous Episodes:
- MySQL-Docker operations, Part 1: Getting started with MySQL in Docker
- MySQL-Docker operations, Part 2: Customizing MySQL in Docker
With the material covered in the first two articles, we have all the elements needed to set up replication in Docker. We just need to put the pieces together.
If you want to do everything by hand, it will only take a few minutes. The steps are not complicated. If you have followed the reasoning in the past episodes, you will know what to do.
Or, you can make your life easier by using the ready-made scripts available on GitHub as MySQL replication samples. In fact, what this article does in practice is add comments to a stripped-down version of the deployment script, which should make things clear.
First, we will use two templates for the configuration files. Notice that the server-id value is replaced by a placeholder. We will also use a smaller options file for the client's username and password.
$ cat my-template.cnf
[mysqld]
user = mysql
port = 3306
log-bin = mysql-bin
relay-log = mysql-relay
server-id = _SERVERID_
master-info-repository=table
relay-log-info-repository=table
gtid_mode=ON
enforce-gtid-consistency
$ cat home_my.cnf
[client]
user=root
password=secret
First phase: deploying the containers
Note: Don't try to copy and paste the code below. It is a reduced version, used here only for the sake of commenting it. I have deliberately added line numbers to make copy-and-paste harder. Take the code from GitHub instead.
1 #!/bin/bash
2 DOCKER_TMP=/opt/docker/tmp
3 DOCKER_DATA=/opt/docker/mysql
4 MYSQL_IMAGE=mysql/mysql-server
5 NUM_NODES=3
6
7 mkdir -p $DOCKER_DATA
8 sudo chown -R mysql $DOCKER_DATA
9 sudo chgrp -R mysql $DOCKER_DATA
10
We set some defaults. Then we start creating 3 nodes.
11 for NODE in $( seq 1 $NUM_NODES )
12 do
13 export NODE
14 echo "# $NODE"
15 sed "s/_SERVERID_/${NODE}00/" < my-template.cnf > $DOCKER_TMP/my_$NODE.cnf
Here we replace the placeholder with the real server-id. It will be 100, 200, 300, and so on.
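For node 1, the generated file will contain, among the other settings:

server-id = 100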
16 cp home_my.cnf $DOCKER_TMP/home_my_$NODE.cnf
17 echo "[mysql]" >> $DOCKER_TMP/home_my_$NODE.cnf
18 echo "prompt='node$NODE [\\h] {\\u} (\\d) > '" >> $DOCKER_TMP/home_my_$NODE.cnf
For practical purposes, and to facilitate debugging, we put all the files inside $DOCKER_TMP. We also add a customized prompt to the password file.
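With that prompt, a session on node 1 greets you with something like this (the database name shows as "(none)" until you select one):

node1 [localhost] {root} ((none)) >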
19 DATA_OPTION="-v $DOCKER_DATA/node_$NODE:/var/lib/mysql"
This is the data directory, which is mapped to a directory on the host computer. Notice that this option should only be used on Linux. The script on GitHub knows when to enable this feature.
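The exact check in the GitHub script may differ, but a minimal sketch of such a guard, assuming we only want the option when the host runs Linux, could be:

DATA_OPTION=""
if [ "$(uname -s)" == "Linux" ]
then
    # Enable the host-mapped data directory only on Linux
    DATA_OPTION="-v $DOCKER_DATA/node_$NODE:/var/lib/mysql"
fi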
20 echo ""
21 echo "# Deploying $MYSQL_IMAGE into container mysql-node$NODE"
22 docker run --name mysql-node$NODE \
23 -v $DOCKER_TMP/my_$NODE.cnf:/etc/my.cnf \
24 -v $DOCKER_TMP/home_my_$NODE.cnf:/root/home_my.cnf \
25 -e MYSQL_ROOT_PASSWORD=secret \
26 $DATA_OPTION \
27 -d $MYSQL_IMAGE
28 if [ "$?" != "0" ] ; then exit 1; fi
29 done
30
The container creation uses all the elements seen so far:
- A volume for the data directory (lines 19 and 26);
- A customized options file (line 23);
- A password file for the user inside the container (line 24).
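At this point, a quick sanity check (not part of the script) is to list the running containers:

$ docker ps | grep mysql-node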
Second phase: checking that the containers are ready for use
Next, we check if the containers are working.
31 function is_ready
32 {
33 NODE=$1
34 MYSQL="docker exec -it mysql-node$NODE mysql --defaults-file=/root/home_my.cnf "
35 # 'docker exec -it' leaves a trailing carriage return in the result
36 READY=$($MYSQL -BN -e 'select 12345' | tr -d '\r')
37 if [ "$READY" == "12345" ]
38 then
39 echo OK
40 fi
41 }
42
We define an easy-to-call function, which returns "OK" when the container is running properly. Notice that it uses the password file in the temporary location to get a clean result from the mysql client. Also notice that we filter the result to remove the extra carriage return, as explained in part 2.
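To try it by hand, paste the function into your shell and call it with a node number; it prints OK only when the node answers:

$ is_ready 1
OK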
43 echo "# Waiting for nodes to be ready"
44 sleep 10
45
46 for NODE in $( seq 1 $NUM_NODES )
47 do
48 MAX_ATTEMPTS=30
49 ATTEMPTS=0
50 node_ready=''
51 echo "# Checking container mysql-node$NODE"
52 while [ "$node_ready" != "OK" ]
53 do
54 ATTEMPTS=$(($ATTEMPTS+1))
55 if [[ $ATTEMPTS -gt $MAX_ATTEMPTS ]]
56 then
57 echo "## Maximum number of attempts exceeded "
58 exit 1
59 fi
60 node_ready=$(is_ready $NODE)
61 echo "# NODE $NODE - $ATTEMPTS - $node_ready"
62 sleep 1
63 done
64 echo ''
65 done
This loop is simpler than it looks. It connects to the database server of every node, to see if it has already been initialized. It aborts if the server does not answer within 30 attempts.
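As an aside, a similar readiness check could be written with mysqladmin ping, which exits with a success status when the server accepts connections. This is only a sketch of an alternative, not part of the GitHub script:

function is_ready_ping
{
    NODE=$1
    # 'mysqladmin ping' returns success when the server is up
    if docker exec mysql-node$NODE \
        mysqladmin --defaults-file=/root/home_my.cnf ping > /dev/null 2>&1
    then
        echo OK
    fi
}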
Third phase: customizing the containers
67 for NODE in $( seq 1 $NUM_NODES )
68 do
69 echo '#!/bin/bash' > n$NODE
70 echo "docker exec -it mysql-node$NODE mysql \"\$@\"" >> n$NODE
71 chmod +x n$NODE
For every node, we create a simple script (n1, n2, n3, ...) that invokes the mysql client inside the corresponding container.
72 docker exec -it mysql-node$NODE cp /root/home_my.cnf /root/.my.cnf
73 done
74
75 ./set-replication.sh
In each node, we copy the temporary password and prompt file to its intended location. From now on, you can invoke 'mysql' in each container without a password.
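A quick check, assuming the containers are up, should produce something like this:

$ ./n1 -e 'select current_user()'
+----------------+
| current_user() |
+----------------+
| root@localhost |
+----------------+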
Fourth phase: configuring the master
To set up replication, we have a separate script (just a matter of convenience), which starts by setting up the master.
1 #!/bin/bash
2 MASTER_NODE=1
3 NUM_NODES=3
4
5 MASTER_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress}}' mysql-node$MASTER_NODE)
6 echo "master IP: $MASTER_IP"
7 MASTER_PORT=3306
8
We need to determine the IP address of the master. Although we can connect to the master using docker exec, the slaves must connect to the master via TCP/IP, as they are in different containers. We have seen the docker inspect command in the first article.
9 MASTER="docker exec -it mysql-node$MASTER_NODE mysql"
10 $MASTER -e 'select @@hostname as MASTER, @@server_id, @@server_uuid'
11
12 echo "# Creating replication user in the master"
13 $MASTER -ve 'create user rdocker identified by "rdocker"'
14 $MASTER -ve 'grant replication slave on *.* to rdocker'
We need to create a user for replication, as we don't want to use 'root' for that.
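To verify both the grants and the fact that the master is reachable via TCP/IP from another container, you could run something like the following (not part of the script):

$ MASTER_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress}}' mysql-node1)
$ docker exec -it mysql-node2 mysql -h $MASTER_IP -u rdocker -prdocker \
    -e 'show grants for current_user()'

If the user was created correctly, this prints the REPLICATION SLAVE grant.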
15
16 $MASTER -e 'reset master'
17
We reset the master, as we don't want any side effects from the installation to trickle over to the slaves. This command makes sure that GTID starts from 1, and that there are no unwanted events in the binary log. If we didn't do this, we would need to get the binary log name and position from the master, and start replication with two additional parameters, as in the sketch below.
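For reference, here is roughly what that non-GTID alternative would look like. This is only a sketch: the log file name and position are examples, to be replaced with the values reported by SHOW MASTER STATUS, and the IP address is the one obtained at line 5:

$ ./n1 -e 'show master status'
$ ./n2 -e "change master to master_host='172.17.0.45', master_port=3306, \
    master_user='rdocker', master_password='rdocker', \
    master_log_file='mysql-bin.000001', master_log_pos=154"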
Fifth phase: configuring the slaves
Now for the real replication settings. We tell each slave who the master is.
18 for SLAVE_NODE in $(seq 2 $NUM_NODES)
19 do
20 SLAVE_PORT=3306
21 SLAVE="docker exec -it mysql-node$SLAVE_NODE mysql "
22
23 echo "# Setting up replication"
24 $SLAVE -e 'reset master'
For the same reason stated above, and to improve clarity in SHOW SLAVE STATUS, we clean up the slave binary logs.
25 $SLAVE -e "select @@hostname as SLAVE_$SLAVE_NODE, @@server_id, @@server_uuid"
26
27 $SLAVE -ve "change master to master_host='$MASTER_IP', \
28 master_port=$MASTER_PORT, \
29 master_user='rdocker', \
30 master_password='rdocker', \
31 MASTER_AUTO_POSITION=1"
32 $SLAVE -ve 'START SLAVE'
33 $SLAVE -e 'SHOW SLAVE STATUS\G' | grep 'Running:'
34 done
The setup command uses the master IP that we got at line 5, and the newly created replication user. The last two commands start replication and show whether it is running.
Sixth phase: checking that replication is running correctly
To check if replication is functioning properly, we use one of the methods shown in MySQL Replication monitoring 101: the sentinel data. Remember that we have a script named n1 for the master, and n2/n3 for the slaves.
$ ./n1 test -e 'create table t1 (i int not null primary key, msg varchar(50), ts timestamp)'
$ ./n1 test -e "insert into t1 values (1, 'hello docker', null)"
First, we insert something in the master.
$ for N in 1 2 3; do ./n$N -e 'select @@server_id,t1.* from t1' test; done
+-------------+---+--------------+---------------------+
| @@server_id | i | msg | ts |
+-------------+---+--------------+---------------------+
| 100 | 1 | hello docker | 2015-11-01 04:59:21 |
+-------------+---+--------------+---------------------+
+-------------+---+--------------+---------------------+
| @@server_id | i | msg | ts |
+-------------+---+--------------+---------------------+
| 200 | 1 | hello docker | 2015-11-01 04:59:21 |
+-------------+---+--------------+---------------------+
+-------------+---+--------------+---------------------+
| @@server_id | i | msg | ts |
+-------------+---+--------------+---------------------+
| 300 | 1 | hello docker | 2015-11-01 04:59:21 |
+-------------+---+--------------+---------------------+
Then, we retrieve the information from all nodes. As you can see, replication is working. We can also check that GTID is being used properly:
$ for N in 1 2 3; do ./n$N -e 'select @@server_id, @@global.gtid_executed' ; done
+-------------+------------------------------------------+
| @@server_id | @@global.gtid_executed |
+-------------+------------------------------------------+
| 100 | 7561a381-8054-11e5-9dbe-0242ac11001a:1-2 |
+-------------+------------------------------------------+
+-------------+------------------------------------------+
| @@server_id | @@global.gtid_executed |
+-------------+------------------------------------------+
| 200 | 7561a381-8054-11e5-9dbe-0242ac11001a:1-2 |
+-------------+------------------------------------------+
+-------------+------------------------------------------+
| @@server_id | @@global.gtid_executed |
+-------------+------------------------------------------+
| 300 | 7561a381-8054-11e5-9dbe-0242ac11001a:1-2 |
+-------------+------------------------------------------+
Seventh phase: removing the containers
When you have many containers to deal with, it is useful to remember that you can get the list of all containers with docker ps -qa. By combining this informational command with docker stop and docker rm, we can remove the containers very quickly.
$ docker stop $(docker ps -qa)
3ef64c4584ec
a36037e2df46
995a6192c450
$ docker rm $(docker ps -qa)
3ef64c4584ec
a36037e2df46
995a6192c450
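As a shortcut, you can also stop and remove them in one go. Use this with care, as -f kills running containers:

$ docker rm -f $(docker ps -qa)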
Caution with many containers
When you deploy many containers, you should remember that they consume storage on your server, some of it even after the containers have been removed. If you don't use an external volume for the data directory, a volume is created for you in Docker's own space (/var/lib/docker/volumes).
If you are using a VM for Docker emulation (as is the case on OS X and Windows), the available storage will shrink quickly. Even when running on Linux, you may find the available storage shrinking if you get carried away creating and destroying containers.
Don't forget to monitor storage and clean up unused volumes.
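A minimal cleanup sketch, assuming Docker 1.9 or later, where the docker volume commands are available. Double-check what you are about to remove, because the data in those volumes is lost:

# List volumes not referenced by any container
$ docker volume ls -qf dangling=true
# Remove them
$ docker volume rm $(docker volume ls -qf dangling=true)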
A final example: deploying 10 nodes
The scripts on GitHub (mysql replication samples) allow you to create many containers in a very short time. Here is a sample run:
$ time ./deploy-nodes.sh 10
# Docker version ok
# 1
# Deploying mysql/mysql-server into container mysql-node1
272e68f6f93a2542e5c1c60a4b71d5e191dd371d06234738b407d96a06a9037b
# 2
# Deploying mysql/mysql-server into container mysql-node2
aa1b8dcb42ba545d82ec59daef1135fd4f3eef49971d8e3eed0fa3fe9f4d2033
# 3
# Deploying mysql/mysql-server into container mysql-node3
c8b3b29498aa1140a67c109aa30a0877413a3562324fe194a59b9690cd65b2c3
# 4
# Deploying mysql/mysql-server into container mysql-node4
325de5a756c528cd9ad7569780f27c267f039587b02b8e959bab885b9932aa14
# 5
# Deploying mysql/mysql-server into container mysql-node5
7271c13282c62a08e442878ad6db7dc568879fdccfeea80d8c44d2acc3839f9a
# 6
# Deploying mysql/mysql-server into container mysql-node6
d72650b2e33ab0691d230a9f2581b660ac722ceb5a82838f3d198b6d24701283
# 7
# Deploying mysql/mysql-server into container mysql-node7
75ba2d06bddf59d5307b3cade7ba38522ca30db02791096e654867aa5ed18ce7
# 8
# Deploying mysql/mysql-server into container mysql-node8
a9afdcaa17ba904ce5152aeca395d54bdf30b1af1aabdf427431b4f18634813c
# 9
# Deploying mysql/mysql-server into container mysql-node9
aacadd581dfb2c5cd93e53553f323fd8901521144bcc29e71e148463a570a92b
# 10
# Deploying mysql/mysql-server into container mysql-node10
dcdc62d37c947be35a916b6454f6d56a2640e4a5f09c3a6a9a876f286a0400aa
# Waiting for nodes to be ready
# Sleeping 10 seconds ...
....5....10
# Checking container mysql-node1
# NODE 1 - 1 -
# NODE 1 - 2 -
# NODE 1 - 3 - OK
# Checking container mysql-node2
# NODE 2 - 1 - OK
# Checking container mysql-node3
# NODE 3 - 1 - OK
# Checking container mysql-node4
# NODE 4 - 1 - OK
# Checking container mysql-node5
# NODE 5 - 1 - OK
# Checking container mysql-node6
# NODE 6 - 1 - OK
# Checking container mysql-node7
# NODE 7 - 1 - OK
# Checking container mysql-node8
# NODE 8 - 1 - OK
# Checking container mysql-node9
# NODE 9 - 1 - OK
# Checking container mysql-node10
# NODE 10 - 1 - OK
master IP: 172.17.0.45
+--------------+-------------+--------------------------------------+
| MASTER | @@server_id | @@server_uuid |
+--------------+-------------+--------------------------------------+
| 272e68f6f93a | 100 | a1dca326-8057-11e5-b9be-0242ac11002d |
+--------------+-------------+--------------------------------------+
# Creating replication user in the master
--------------
create user rdocker identified by "rdocker"
--------------
--------------
grant replication slave on *.* to rdocker
--------------
--------------
grant select on performance_schema.global_variables to rdocker
--------------
--------------
grant select on performance_schema.session_variables to rdocker
--------------
slave: 172.17.0.46
# Setting up replication
+--------------+-------------+--------------------------------------+
| SLAVE_2 | @@server_id | @@server_uuid |
+--------------+-------------+--------------------------------------+
| aa1b8dcb42ba | 200 | a21a28ba-8057-11e5-b9f0-0242ac11002e |
+--------------+-------------+--------------------------------------+
--------------
change master to master_host='172.17.0.45', master_port=3306, master_user='rdocker', master_password='rdocker', MASTER_AUTO_POSITION=1
--------------
--------------
START SLAVE
--------------
Slave_IO_Running: Connecting
Slave_SQL_Running: Yes
slave: 172.17.0.47
# Setting up replication
+--------------+-------------+--------------------------------------+
| SLAVE_3 | @@server_id | @@server_uuid |
+--------------+-------------+--------------------------------------+
| c8b3b29498aa | 300 | a2580098-8057-11e5-b9ac-0242ac11002f |
+--------------+-------------+--------------------------------------+
--------------
change master to master_host='172.17.0.45', master_port=3306, master_user='rdocker', master_password='rdocker', MASTER_AUTO_POSITION=1
--------------
--------------
START SLAVE
--------------
Slave_IO_Running: Connecting
Slave_SQL_Running: Yes
slave: 172.17.0.48
# Setting up replication
+--------------+-------------+--------------------------------------+
| SLAVE_4 | @@server_id | @@server_uuid |
+--------------+-------------+--------------------------------------+
| 325de5a756c5 | 400 | a2963b29-8057-11e5-b93a-0242ac110030 |
+--------------+-------------+--------------------------------------+
--------------
change master to master_host='172.17.0.45', master_port=3306, master_user='rdocker', master_password='rdocker', MASTER_AUTO_POSITION=1
--------------
--------------
START SLAVE
--------------
Slave_IO_Running: Connecting
Slave_SQL_Running: Yes
slave: 172.17.0.49
# Setting up replication
+--------------+-------------+--------------------------------------+
| SLAVE_5 | @@server_id | @@server_uuid |
+--------------+-------------+--------------------------------------+
| 7271c13282c6 | 500 | a2d750be-8057-11e5-b9f8-0242ac110031 |
+--------------+-------------+--------------------------------------+
--------------
change master to master_host='172.17.0.45', master_port=3306, master_user='rdocker', master_password='rdocker', MASTER_AUTO_POSITION=1
--------------
--------------
START SLAVE
--------------
Slave_IO_Running: Connecting
Slave_SQL_Running: Yes
slave: 172.17.0.50
# Setting up replication
+--------------+-------------+--------------------------------------+
| SLAVE_6 | @@server_id | @@server_uuid |
+--------------+-------------+--------------------------------------+
| d72650b2e33a | 600 | a32ceec7-8057-11e5-b959-0242ac110032 |
+--------------+-------------+--------------------------------------+
--------------
change master to master_host='172.17.0.45', master_port=3306, master_user='rdocker', master_password='rdocker', MASTER_AUTO_POSITION=1
--------------
--------------
START SLAVE
--------------
Slave_IO_Running: Connecting
Slave_SQL_Running: Yes
slave: 172.17.0.51
# Setting up replication
+--------------+-------------+--------------------------------------+
| SLAVE_7 | @@server_id | @@server_uuid |
+--------------+-------------+--------------------------------------+
| 75ba2d06bddf | 700 | a35afddb-8057-11e5-b8b2-0242ac110033 |
+--------------+-------------+--------------------------------------+
--------------
change master to master_host='172.17.0.45', master_port=3306, master_user='rdocker', master_password='rdocker', MASTER_AUTO_POSITION=1
--------------
--------------
START SLAVE
--------------
Slave_IO_Running: Connecting
Slave_SQL_Running: Yes
slave: 172.17.0.52
# Setting up replication
+--------------+-------------+--------------------------------------+
| SLAVE_8 | @@server_id | @@server_uuid |
+--------------+-------------+--------------------------------------+
| a9afdcaa17ba | 800 | a3a6525e-8057-11e5-ba7b-0242ac110034 |
+--------------+-------------+--------------------------------------+
--------------
change master to master_host='172.17.0.45', master_port=3306, master_user='rdocker', master_password='rdocker', MASTER_AUTO_POSITION=1
--------------
--------------
START SLAVE
--------------
Slave_IO_Running: Connecting
Slave_SQL_Running: Yes
slave: 172.17.0.53
# Setting up replication
+--------------+-------------+--------------------------------------+
| SLAVE_9 | @@server_id | @@server_uuid |
+--------------+-------------+--------------------------------------+
| aacadd581dfb | 900 | a3f8e358-8057-11e5-b9cd-0242ac110035 |
+--------------+-------------+--------------------------------------+
--------------
change master to master_host='172.17.0.45', master_port=3306, master_user='rdocker', master_password='rdocker', MASTER_AUTO_POSITION=1
--------------
--------------
START SLAVE
--------------
Slave_IO_Running: Connecting
Slave_SQL_Running: Yes
slave: 172.17.0.54
# Setting up replication
+--------------+-------------+--------------------------------------+
| SLAVE_10 | @@server_id | @@server_uuid |
+--------------+-------------+--------------------------------------+
| dcdc62d37c94 | 1000 | a4135f4d-8057-11e5-ba26-0242ac110036 |
+--------------+-------------+--------------------------------------+
--------------
change master to master_host='172.17.0.45', master_port=3306, master_user='rdocker', master_password='rdocker', MASTER_AUTO_POSITION=1
--------------
--------------
START SLAVE
--------------
Slave_IO_Running: Connecting
Slave_SQL_Running: Yes
real 0m28.539s
user 0m1.999s
sys 0m0.944s
The total time was less than 30 seconds, of which about 13 were spent waiting for the containers to come online. It's a remarkable achievement, which beats traditional deployment schemes hands down.
What's next
Now that we have a practical idea of what it takes to deploy containers, we can take a breath and face some philosophical considerations about the contenders in the virtualization scene. In Part 4, we will see some pros and cons of using virtual machines, containers, and sandboxes.