In the previous post about dbdeployer recipes we saw the basics of using the cookbook command and the simpler tutorials that the recipes offer.
Here we will see some more advanced techniques and more demanding examples.
We saw that the recipe for a single deployment would get a NOTFOUND
when no versions were available, or the highest MySQL version when one was found.
$ dbdeployer cookbook show single | grep version=
version=$1
[ -z "$version" ] && version=8.0.16
But what if we want the latest Percona Server or MariaDB for this recipe? One solution would be to run the script with an argument, but we can ask dbdeployer to find the most recent version for a given flavor and use it in our recipe:
$ dbdeployer cookbook show single --flavor=percona | grep version=
version=$1
[ -z "$version" ] && version=ps8.0.15
$ dbdeployer cookbook show single --flavor=pxc | grep version=
version=$1
[ -z "$version" ] && version=pxc5.7.25
$ dbdeployer cookbook show single --flavor=mariadb | grep version=
version=$1
[ -z "$version" ] && version=ma10.4.3
This works for all the recipes that don’t require a given flavor. When one is indicated (see dbdeployer cookbook list) you can override it using --flavor, but do that at your own risk. Running the ndb recipe with the pxc flavor won’t produce anything usable.
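For instance (a sketch: the ndb recipe name is taken from the warning above, and the combination shown is deliberately the broken one):
# See which recipes are bound to a flavor before overriding:
$ dbdeployer cookbook list
# Override at your own risk (this combination won't produce anything usable):
$ dbdeployer cookbook show ndb --flavor=pxc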
Replication between sandboxes
When I proposed dbdeployer support for NDB, the immediate reaction was that this was good to test cluster-to-cluster replication. Although I set up plenty of such topologies in one of my previous jobs, I had limited experience replicating between single or composite sandboxes. Thus, I started thinking about how to do it. In the old MySQL-Sandbox, I had an option --slaveof that allowed a single sandbox to replicate from an existing one. I did not implement the same thing in dbdeployer, because that solution looked limited, and only useful in a few scenarios.
I wanted something more dynamic, and initially I thought of creating a grandiose scheme, involving custom templates and user-defined fillers. While I may end up doing that some day, I quickly realized that it was overkill for this purpose, and that the sandboxes already had all the information needed to replicate from and to every other sandbox. I just had to expose the data in such a way that it could be used to plug one sandbox into another.
Now every sandbox has a script named replicate_from, and a companion script called metadata. Using a combination of the two (in fact, replicate_from on the would-be replica calls metadata from the donor) we can quickly define the replication command needed for most situations.
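The call pattern is always the same, and we will see it at work in the examples below (sandbox names here are placeholders):
# On the would-be replica: fetch the donor's coordinates through its
# metadata script and run the appropriate CHANGE MASTER TO statement.
$ $HOME/sandboxes/REPLICA_SANDBOX/replicate_from DONOR_SANDBOX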
Replication between single sandboxes
Before we tackle the most complex one, let’s demonstrate that the system works with a simple case.
There is a recipe named replication_between_single that creates a file named, aptly, ./recipes/replication-between-single.sh.
If you run it, you will see something similar to the following:
$ ./recipes/replication-between-single.sh 5.7.25
+ dbdeployer deploy single 5.7.25 --master --gtid --sandbox-directory=msb_5_7_25_1 --port-as-server-id
Database installed in $HOME/sandboxes/msb_5_7_25_1
run 'dbdeployer usage single' for basic instructions'
. sandbox server started
0
+ dbdeployer deploy single 5.7.25 --master --gtid --sandbox-directory=msb_5_7_25_2 --port-as-server-id
Database installed in $HOME/sandboxes/msb_5_7_25_2
run 'dbdeployer usage single' for basic instructions'
. sandbox server started
0
+ dbdeployer sandboxes --full-info
.--------------.--------.---------.---------------.--------.-------.--------.
| name | type | version | ports | flavor | nodes | locked |
+--------------+--------+---------+---------------+--------+-------+--------+
| msb_5_7_25_1 | single | 5.7.25 | [5725 ] | mysql | 0 | |
| msb_5_7_25_2 | single | 5.7.25 | [5726 ] | mysql | 0 | |
'--------------'--------'---------'---------------'--------'-------'--------'
0
+ $HOME/sandboxes/msb_5_7_25_1/replicate_from msb_5_7_25_2
Connecting to $HOME/sandboxes/msb_5_7_25_2
--------------
CHANGE MASTER TO master_host="127.0.0.1",
master_port=5726,
master_user="rsandbox",
master_password="rsandbox"
, master_log_file="mysql-bin.000001", master_log_pos=4089
--------------
--------------
start slave
--------------
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 4089
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Exec_Master_Log_Pos: 4089
Retrieved_Gtid_Set:
Executed_Gtid_Set: 00005725-0000-0000-0000-000000005725:1-16
Auto_Position: 0
0
# Inserting data in msb_5_7_25_2
+ $HOME/sandboxes/msb_5_7_25_2/use -e 'create table if not exists test.t1 (id int not null primary key, server_id int )'
+ $HOME/sandboxes/msb_5_7_25_2/use -e 'insert into test.t1 values (1, @@server_id)'
# Retrieving data from msb_5_7_25_1
+ $HOME/sandboxes/msb_5_7_25_1/use -e 'select *, @@port from test.t1'
+----+-----------+--------+
| id | server_id | @@port |
+----+-----------+--------+
| 1 | 5726 | 5725 |
+----+-----------+--------+
The script deploys two sandboxes of the chosen version, using different directory names (dbdeployer takes care of choosing a free port) and then starts replication between the two using $SANDBOX1/replicate_from $SANDBOX2. A quick test then shows that the data created in one sandbox can be retrieved from the other.
Replication between group replication clusters
The method used to replicate between two group replication clusters is similar to the one seen for single sandboxes. The replicate_from script in the group’s top directory delegates the replication task to its first node, which points to the second group.
$ ./recipes/replication-between-groups.sh 5.7.25
+ dbdeployer deploy replication 5.7.25 --topology=group --concurrent --port-as-server-id --sandbox-directory=group_5_7_25_1
[...]
+ dbdeployer deploy replication 5.7.25 --topology=group --concurrent --port-as-server-id --sandbox-directory=group_5_7_25_2
[...]
+ dbdeployer sandboxes --full-info
.----------------.---------------------.---------.----------------------------------------.--------.-------.--------.
| name | type | version | ports | flavor | nodes | locked |
+----------------+---------------------+---------+----------------------------------------+--------+-------+--------+
| group_5_7_25_1 | group-multi-primary | 5.7.25 | [20226 20351 20227 20352 20228 20353 ] | mysql | 3 | |
| group_5_7_25_2 | group-multi-primary | 5.7.25 | [20229 20354 20230 20355 20231 20356 ] | mysql | 3 | |
'----------------'---------------------'---------'----------------------------------------'--------'-------'--------'
0
+ $HOME/sandboxes/group_5_7_25_1/replicate_from group_5_7_25_2
Connecting to $HOME/sandboxes/group_5_7_25_2/node1
--------------
CHANGE MASTER TO master_host="127.0.0.1",
master_port=20229,
master_user="rsandbox",
master_password="rsandbox"
, master_log_file="mysql-bin.000001", master_log_pos=1082
--------------
--------------
start slave
--------------
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 1082
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Exec_Master_Log_Pos: 1082
Retrieved_Gtid_Set:
Executed_Gtid_Set: 00020225-bbbb-cccc-dddd-eeeeeeeeeeee:1-3
Auto_Position: 0
0
# Inserting data in group_5_7_25_2 node1
+ $HOME/sandboxes/group_5_7_25_2/n1 -e 'create table if not exists test.t1 (id int not null primary key, server_id int )'
+ $HOME/sandboxes/group_5_7_25_2/n1 -e 'insert into test.t1 values (1, @@server_id)'
# Retrieving data from one of group_5_7_25_1 nodes
# At this point, the data was replicated twice
+ $HOME/sandboxes/group_5_7_25_1/n2 -e 'select *, @@port from test.t1'
+----+-----------+--------+
| id | server_id | @@port |
+----+-----------+--------+
| 1 | 20229 | 20227 |
+----+-----------+--------+
The interesting thing about this recipe is that the sandboxes are created using the option --port-as-server-id. While in the replication between single sandboxes it was used only as an excess of caution, in this recipe, and in all the recipes involving compound sandboxes, it is a necessity, as replication would fail if the primary and replica servers had the same server_id.
All the work is done by the replicate_from script, which knows how to check whether the target is a single sandbox or a composite one, and where to find the primary server. The same method applies to several more recipes.
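As a quick sanity check, we can confirm that no two servers share a server_id (a sketch using the per-node shortcut scripts that the group sandboxes provide):
# With --port-as-server-id, each server_id equals the node's port,
# so the values are guaranteed to be unique across both groups.
$ ~/sandboxes/group_5_7_25_1/n1 -e 'select @@server_id, @@port'
$ ~/sandboxes/group_5_7_25_2/n1 -e 'select @@server_id, @@port'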
Replication between different things
I won’t reproduce the output of all the recipes here; I will just mention what each recipe needs in order to ensure a positive outcome.
- Replication between NDB clusters. Nothing special here, except making sure to use a MySQL Cluster tarball. If you don’t, dbdeployer will detect it and refuse the installation. For the rest, it’s like replication between groups.
- Replication between master/slave deployments. This is a bit trickier, because the replicated data comes to a master, and if we want to propagate it to its slaves we need to activate log-slave-updates. The recipe shows how to do it (see also the sketch after this list).
- Replication between group and master/slave. In addition to the trick mentioned in the previous recipe, we need to make sure that the master/slave deployment is using GTID.
- Replication between master/slave and group. See the previous one.
- Replication between group and single (and vice versa). We just need to make sure the single sandbox has GTID enabled.
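Here is a minimal sketch of those requirements; it assumes dbdeployer’s --my-cnf-options flag and uses placeholder directory names:
# Deploy a master/slave topology with GTID enabled and slaves that
# log replicated events (log-slave-updates), so data can flow through it:
$ dbdeployer deploy replication 5.7.25 --gtid \
    --my-cnf-options=log-slave-updates \
    --sandbox-directory=msb_ms_source
# Assuming a group deployment exists in group_5_7_25_1, make the group
# replicate from the master/slave deployment:
$ ~/sandboxes/group_5_7_25_1/replicate_from msb_ms_source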
Replication between different versions
This is a simple recipe that came from a feature request. All you need to do is make sure that the version on the master is lower than the one on the slaves. The recipe script, replication-multi-versions.sh, looks for tarballs of 5.6, 5.7, and 8.0, but you can start it with any three versions you’d like. For example:
./recipes/replication-multi-versions.sh 5.7.23 5.7.24 5.7.25
The first version will be used as the master.
Circular replication
I didn’t want to do this, as I consider ring replication to be weak and difficult to handle. I stated as much in the feature request and in the list of dbdeployer features. But then I saw that, with the latest enhancements, it was so easy that I had to at least make a recipe for it. And there you have it: recipes/circular-replication.sh does what it promises, but the burden of maintenance is still on the user’s shoulders. I suggest looking at it, and then forgetting about it.
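To try it anyway, the pattern is the same as for the other recipes (a sketch: the recipe name is an assumption based on the naming convention seen earlier, and the version argument is optional as in the other scripts):
# Generate the recipe script, then run it with the version you prefer.
$ dbdeployer cookbook create circular_replication
$ ./recipes/circular-replication.sh 5.7.25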
Upgrade from MySQL 5.5 to 8.0 (through 5.6 and 5.7)
This is one of the most advanced recipes. To enjoy it, you need to have expanded tarballs from 5.5, 5.6, 5.7, and 8.0.
Provided that you do, running this script will do the following:
- deploy MySQL 5.5
- create a table upgrade_log and insert some data
- deploy MySQL 5.6
- run mysql_upgrade (through dbdeployer)
- add data to the log table
- deploy MySQL 5.7
- run mysql_upgrade again
- add data to the log table
- deploy MySQL 8.0
- run mysql_upgrade for the last time
- show the data from the table
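In essence, each hop of the chain boils down to four commands (a condensed sketch of what the transcript below shows for the first hop):
$ dbdeployer deploy single 5.5.53 --master
$ dbdeployer deploy single 5.6.41 --master
# moves the 5.5 data directory into the 5.6 sandbox and runs mysql_upgrade
$ dbdeployer admin upgrade msb_5_5_53 msb_5_6_41
# the old sandbox is no longer operational and can be removed
$ dbdeployer delete msb_5_5_53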
Here’s a full transcript of the operation. It’s interesting to see how the upgrade procedure has changed from older versions to current ones.
$ ./recipes/upgrade.sh
# ****************************************************************************
# Upgrading from 5.5.53 to 5.6.41
# ****************************************************************************
+ dbdeployer deploy single 5.5.53 --master
Database installed in $HOME/sandboxes/msb_5_5_53
run 'dbdeployer usage single' for basic instructions'
.. sandbox server started
0
+ dbdeployer deploy single 5.6.41 --master
Database installed in $HOME/sandboxes/msb_5_6_41
run 'dbdeployer usage single' for basic instructions'
. sandbox server started
0
+ $HOME/sandboxes/msb_5_5_53/use -e 'CREATE TABLE IF NOT EXISTS test.upgrade_log(id int not null auto_increment primary key, server_id int, vers varchar(50), urole varchar(20), ts timestamp)'
+ $HOME/sandboxes/msb_5_5_53/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''original'\'')'
+ dbdeployer admin upgrade msb_5_5_53 msb_5_6_41
stop $HOME/sandboxes/msb_5_5_53
stop $HOME/sandboxes/msb_5_6_41
Data directory msb_5_5_53/data moved to msb_5_6_41/data
. sandbox server started
Looking for 'mysql' as: $HOME/opt/mysql/5.6.41/bin/mysql
Looking for 'mysqlcheck' as: $HOME/opt/mysql/5.6.41/bin/mysqlcheck
Running 'mysqlcheck' with connection arguments: '--port=5641' '--socket=/var/folders/rz/cn7hvgzd1dl5y23l378dsf_c0000gn/T/mysql_sandbox5641.sock'
Running 'mysqlcheck' with connection arguments: '--port=5641' '--socket=/var/folders/rz/cn7hvgzd1dl5y23l378dsf_c0000gn/T/mysql_sandbox5641.sock'
mysql.columns_priv OK
mysql.db OK
mysql.event OK
mysql.func OK
mysql.general_log OK
mysql.help_category OK
mysql.help_keyword OK
mysql.help_relation OK
mysql.help_topic OK
mysql.host OK
mysql.ndb_binlog_index OK
mysql.plugin OK
mysql.proc OK
mysql.procs_priv OK
mysql.proxies_priv OK
mysql.servers OK
mysql.slow_log OK
mysql.tables_priv OK
mysql.time_zone OK
mysql.time_zone_leap_second OK
mysql.time_zone_name OK
mysql.time_zone_transition OK
mysql.time_zone_transition_type OK
mysql.user OK
Running 'mysql_fix_privilege_tables'...
Running 'mysqlcheck' with connection arguments: '--port=5641' '--socket=/var/folders/rz/cn7hvgzd1dl5y23l378dsf_c0000gn/T/mysql_sandbox5641.sock'
Running 'mysqlcheck' with connection arguments: '--port=5641' '--socket=/var/folders/rz/cn7hvgzd1dl5y23l378dsf_c0000gn/T/mysql_sandbox5641.sock'
test.upgrade_log OK
OK
The data directory from msb_5_6_41/data is preserved in msb_5_6_41/data-msb_5_6_41
The data directory from msb_5_5_53/data is now used in msb_5_6_41/data
msb_5_5_53 is not operational and can be deleted
+ dbdeployer delete msb_5_5_53
List of deployed sandboxes:
$HOME/sandboxes/msb_5_5_53
Running $HOME/sandboxes/msb_5_5_53/stop
Running rm -rf $HOME/sandboxes/msb_5_5_53
Directory $HOME/sandboxes/msb_5_5_53 deleted
+ $HOME/sandboxes/msb_5_6_41/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''upgraded'\'')'
+ $HOME/sandboxes/msb_5_6_41/use -e 'SELECT * FROM test.upgrade_log'
+----+-----------+------------+----------+---------------------+
| id | server_id | vers | urole | ts |
+----+-----------+------------+----------+---------------------+
| 1 | 5553 | 5.5.53-log | original | 2019-04-01 20:27:38 |
| 2 | 5641 | 5.6.41-log | upgraded | 2019-04-01 20:27:46 |
+----+-----------+------------+----------+---------------------+
# ****************************************************************************
# The upgraded database is now upgrading from 5.6.41 to 5.7.25
# ****************************************************************************
+ dbdeployer deploy single 5.7.25 --master
Database installed in $HOME/sandboxes/msb_5_7_25
run 'dbdeployer usage single' for basic instructions'
. sandbox server started
0
+ $HOME/sandboxes/msb_5_6_41/use -e 'CREATE TABLE IF NOT EXISTS test.upgrade_log(id int not null auto_increment primary key, server_id int, vers varchar(50), urole varchar(20), ts timestamp)'
+ $HOME/sandboxes/msb_5_6_41/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''original'\'')'
+ dbdeployer admin upgrade msb_5_6_41 msb_5_7_25
stop $HOME/sandboxes/msb_5_6_41
stop $HOME/sandboxes/msb_5_7_25
Data directory msb_5_6_41/data moved to msb_5_7_25/data
.. sandbox server started
Checking if update is needed.
Checking server version.
Running queries to upgrade MySQL server.
Checking system database.
mysql.columns_priv OK
mysql.db OK
mysql.engine_cost OK
mysql.event OK
mysql.func OK
mysql.general_log OK
mysql.gtid_executed OK
mysql.help_category OK
mysql.help_keyword OK
mysql.help_relation OK
mysql.help_topic OK
mysql.host OK
mysql.innodb_index_stats OK
mysql.innodb_table_stats OK
mysql.ndb_binlog_index OK
mysql.plugin OK
mysql.proc OK
mysql.procs_priv OK
mysql.proxies_priv OK
mysql.server_cost OK
mysql.servers OK
mysql.slave_master_info OK
mysql.slave_relay_log_info OK
mysql.slave_worker_info OK
mysql.slow_log OK
mysql.tables_priv OK
mysql.time_zone OK
mysql.time_zone_leap_second OK
mysql.time_zone_name OK
mysql.time_zone_transition OK
mysql.time_zone_transition_type OK
mysql.user OK
Upgrading the sys schema.
Checking databases.
sys.sys_config OK
test.upgrade_log
error : Table rebuild required. Please do "ALTER TABLE `upgrade_log` FORCE" or dump/reload to fix it!
Repairing tables
`test`.`upgrade_log`
Running : ALTER TABLE `test`.`upgrade_log` FORCE
status : OK
Upgrade process completed successfully.
Checking if update is needed.
The data directory from msb_5_7_25/data is preserved in msb_5_7_25/data-msb_5_7_25
The data directory from msb_5_6_41/data is now used in msb_5_7_25/data
msb_5_6_41 is not operational and can be deleted
+ dbdeployer delete msb_5_6_41
List of deployed sandboxes:
$HOME/sandboxes/msb_5_6_41
Running $HOME/sandboxes/msb_5_6_41/stop
Running rm -rf $HOME/sandboxes/msb_5_6_41
Directory $HOME/sandboxes/msb_5_6_41 deleted
+ $HOME/sandboxes/msb_5_7_25/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''upgraded'\'')'
+ $HOME/sandboxes/msb_5_7_25/use -e 'SELECT * FROM test.upgrade_log'
+----+-----------+------------+----------+---------------------+
| id | server_id | vers | urole | ts |
+----+-----------+------------+----------+---------------------+
| 1 | 5553 | 5.5.53-log | original | 2019-04-01 20:27:38 |
| 2 | 5641 | 5.6.41-log | upgraded | 2019-04-01 20:27:46 |
| 3 | 5641 | 5.6.41-log | original | 2019-04-01 20:27:51 |
| 4 | 5725 | 5.7.25-log | upgraded | 2019-04-01 20:28:01 |
+----+-----------+------------+----------+---------------------+
# ****************************************************************************
# The further upgraded database is now upgrading from 5.7.25 to 8.0.15
# ****************************************************************************
+ dbdeployer deploy single 8.0.15 --master
Database installed in $HOME/sandboxes/msb_8_0_15
run 'dbdeployer usage single' for basic instructions'
.. sandbox server started
0
+ $HOME/sandboxes/msb_5_7_25/use -e 'CREATE TABLE IF NOT EXISTS test.upgrade_log(id int not null auto_increment primary key, server_id int, vers varchar(50), urole varchar(20), ts timestamp)'
+ $HOME/sandboxes/msb_5_7_25/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''original'\'')'
+ dbdeployer admin upgrade msb_5_7_25 msb_8_0_15
stop $HOME/sandboxes/msb_5_7_25
Attempting normal termination --- kill -15 10357
stop $HOME/sandboxes/msb_8_0_15
Data directory msb_5_7_25/data moved to msb_8_0_15/data
... sandbox server started
Checking if update is needed.
Checking server version.
Running queries to upgrade MySQL server.
Upgrading system table data.
Checking system database.
mysql.columns_priv OK
mysql.component OK
mysql.db OK
mysql.default_roles OK
mysql.engine_cost OK
mysql.func OK
mysql.general_log OK
mysql.global_grants OK
mysql.gtid_executed OK
mysql.help_category OK
mysql.help_keyword OK
mysql.help_relation OK
mysql.help_topic OK
mysql.host OK
mysql.innodb_index_stats OK
mysql.innodb_table_stats OK
mysql.ndb_binlog_index OK
mysql.password_history OK
mysql.plugin OK
mysql.procs_priv OK
mysql.proxies_priv OK
mysql.role_edges OK
mysql.server_cost OK
mysql.servers OK
mysql.slave_master_info OK
mysql.slave_relay_log_info OK
mysql.slave_worker_info OK
mysql.slow_log OK
mysql.tables_priv OK
mysql.time_zone OK
mysql.time_zone_leap_second OK
mysql.time_zone_name OK
mysql.time_zone_transition OK
mysql.time_zone_transition_type OK
mysql.user OK
Found outdated sys schema version 1.5.1.
Upgrading the sys schema.
Checking databases.
sys.sys_config OK
test.upgrade_log OK
Upgrade process completed successfully.
Checking if update is needed.
The data directory from msb_8_0_15/data is preserved in msb_8_0_15/data-msb_8_0_15
The data directory from msb_5_7_25/data is now used in msb_8_0_15/data
msb_5_7_25 is not operational and can be deleted
+ dbdeployer delete msb_5_7_25
List of deployed sandboxes:
$HOME/sandboxes/msb_5_7_25
Running $HOME/sandboxes/msb_5_7_25/stop
Running rm -rf $HOME/sandboxes/msb_5_7_25
Directory $HOME/sandboxes/msb_5_7_25 deleted
+ $HOME/sandboxes/msb_8_0_15/use -e 'INSERT INTO test.upgrade_log (server_id, vers, urole) VALUES (@@server_id, @@version, '\''upgraded'\'')'
+ $HOME/sandboxes/msb_8_0_15/use -e 'SELECT * FROM test.upgrade_log'
+----+-----------+------------+----------+---------------------+
| id | server_id | vers | urole | ts |
+----+-----------+------------+----------+---------------------+
| 1 | 5553 | 5.5.53-log | original | 2019-04-01 20:27:38 |
| 2 | 5641 | 5.6.41-log | upgraded | 2019-04-01 20:27:46 |
| 3 | 5641 | 5.6.41-log | original | 2019-04-01 20:27:51 |
| 4 | 5725 | 5.7.25-log | upgraded | 2019-04-01 20:28:01 |
| 5 | 5725 | 5.7.25-log | original | 2019-04-01 20:28:07 |
| 6 | 8015 | 8.0.15 | upgraded | 2019-04-01 20:28:20 |
+----+-----------+------------+----------+---------------------+
What else can we do?
The replication recipes seen so far use the same principles. The method used in these recipes doesn’t work for all-masters and fan-in replication, because mixing named channels and nameless ones is not allowed. Also, there are things that don’t respond to replication commands at all, like TiDB. But it should be easy to enhance the current scripts (or to add more specialized ones) to cover these exceptions as well. Given the recent wave of collaboration, I expect that will happen relatively soon.
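For reference, fan-in and all-masters topologies rely on named replication channels, while replicate_from issues a nameless CHANGE MASTER TO; as noted above, the two cannot be mixed. The named form looks like this (a sketch with a placeholder sandbox name and coordinates):
# Named channels keep several replication streams apart on the same server.
$ ~/sandboxes/fan_in_msb/use -e "CHANGE MASTER TO master_host='127.0.0.1', master_port=5726, master_user='rsandbox', master_password='rsandbox' FOR CHANNEL 'node2'"
$ ~/sandboxes/fan_in_msb/use -e "START SLAVE FOR CHANNEL 'node2'"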