Tuesday, March 19, 2019

dbdeployer community: Part 3 - MySQL Cluster (NDB)

I remember wanting to create MySQL Cluster sandboxes several years ago. At the time, however, MySQL-Sandbox technology was not flexible enough to allow an easy inclusion, and the cluster software itself was not as easy to install as it is today. Thus, I kept postponing the implementation, until I started working with dbdeployer.

I included the skeleton of support for MySQL Cluster from the beginning (by keeping a range of ports dedicated to this technology), but I didn’t do anything until June 2018, when I made my intentions to add support for NDB in dbdeployer public with issue #20 (Add support for MySQL Cluster). The issue contained just a bare idea, and I needed help from someone, as my expertise with NDB was limited and outdated.

Help came in November, when Daniël van Eeden started giving me bits of information on how to put together a cluster sandbox. I still resisted starting the implementation, because by then I had realised that my method of checking the database server version to know whether it supported a given feature was inadequate to support anything other than vanilla MySQL or fully compliant forks.

The game changer was the cooperation with TiDB, which opened the way for supporting Percona XtraDB Cluster. Even though these technologies are quite different from MySQL Cluster, they forced me to improve dbdeployer’s code, making it more flexible and easier to enhance.

When I finally decided to start working on NDB, it took me only a few days to implement it, because I had all the pieces ready for this technology to become part of dbdeployer.

Following Daniël’s instructions, I had a prototype working, which I submitted to the #dbdeployer channel on the MySQL Community Slack. In that channel, I got help again from Daniël van Eeden, and then Frédéric Descamps summoned two more experts (Ted Wennmark and Bernd Ocklin), who gave me feedback, advice, and some quick lessons on how the cluster should work, which allowed me to publish a release (dbdeployer 1.23.0) this past weekend.

The implementation has some peculiarities for users of both dbdeployer and MySQL Cluster. For those used to dbdeployer, the biggest change is that we are deploying two entities, of which the main one is an NDB cluster, with its own directories and processes, while the MySQL servers are just the visible part of the cluster and are, in fact, only cluster clients. Still, the cluster works smoothly within the dbdeployer paradigm: the cluster is deployed (like group replication or PXC) as a replication topology, and as such we can run the standard replication test and expect to get the same result that we would see when checking another multi-source deployment.

For people used to NDB, though, seeing NDB as “replication” feels odd, because the cluster is seen as a distinct entity, and replication is when we transfer data between two clusters. If we were developing a dedicated tool for NDB clusters, this is probably what we would have done, but since we want dbdeployer integration, we must play by the general rules of the tool, where “single” is a stand-alone server instance, and we can’t have dbdeployer deploy single --topology=ndb, because a single instance doesn’t have a topology, which is a property of a group of entities. Therefore, the price to pay for dbdeployer support is accepting to see a MySQL Cluster deployment as replication.

Now that we have covered the philosophical angle, it’s time to show an example. Unlike PXC, which requires Linux, MySQL Cluster can also run on MacOS, which makes my testing much easier.

The first step to run a cluster in dbdeployer is to download a tarball from dev.mysql.com/downloads/cluster, and then expand it in our usual directory ($HOME/opt/mysql):

$ dbdeployer unpack --prefix=ndb --flavor=ndb \  
    ~/Downloads/mysql-cluster-gpl-7.6.9-macos10.14-x86_64.tar.gz  
Unpacking tarball $HOME/Downloads/mysql-cluster-gpl-7.6.9-macos10.14-x86_64.tar.gz to $HOME/opt/mysql/ndb7.6.9  
[...]  
Renaming directory $HOME/opt/mysql/mysql-cluster-gpl-7.6.9-macos10.14-x86_64 to $HOME/opt/mysql/ndb7.6.9

We can repeat the same operation for MySQL Cluster 8.0.14, and in the end we will have two expanded tarballs, named ndb7.6.9 and ndb8.0.14. With these we can install a few clusters on the same host:

$ dbdeployer deploy replication ndb7.6 --topology=ndb --concurrent  
# ndb7.6 => ndb7.6.9  
$HOME/sandboxes/ndb_msb_ndb7_6_9/initialize_nodes  
MySQL Cluster Management Server mysql-5.7.25 ndb-7.6.9  
2019-03-18 23:47:15 [ndbd] INFO     -- Angel connected to 'localhost:20900'  
2019-03-18 23:47:16 [ndbd] INFO     -- Angel allocated nodeid: 2  
2019-03-18 23:47:16 [ndbd] INFO     -- Angel connected to 'localhost:20900'  
2019-03-18 23:47:16 [ndbd] INFO     -- Angel allocated nodeid: 3  
executing 'start' on node 1  
................ sandbox server started  
executing 'start' on node 2  
.. sandbox server started  
executing 'start' on node 3  
.. sandbox server started  
NDB cluster directory installed in $HOME/sandboxes/ndb_msb_ndb7_6_9  
run 'dbdeployer usage multiple' for basic instructions'


$ dbdeployer deploy replication ndb8.0 --topology=ndb --concurrent  
# ndb8.0 => ndb8.0.14  
$HOME/sandboxes/ndb_msb_ndb8_0_14/initialize_nodes  
MySQL Cluster Management Server mysql-8.0.14 ndb-8.0.14-dmr  
2019-03-18 23:45:53 [ndbd] INFO     -- Angel connected to 'localhost:21400'  
2019-03-18 23:45:53 [ndbd] INFO     -- Angel allocated nodeid: 2  
2019-03-18 23:45:53 [ndbd] INFO     -- Angel connected to 'localhost:21400'  
2019-03-18 23:45:53 [ndbd] INFO     -- Angel allocated nodeid: 3  
executing 'start' on node 1  
........ sandbox server started  
executing 'start' on node 2  
... sandbox server started  
executing 'start' on node 3  
.. sandbox server started  
NDB cluster directory installed in $HOME/sandboxes/ndb_msb_ndb8_0_14  
run 'dbdeployer usage multiple' for basic instructions'

If we look at the sandbox directories, we will see a few more subdirectories than we usually have with other topologies. For example:

 ndb_conf   # cluster configuration  
 ndbnode1   # management node (1)  
 ndbnode2   # data node (2)  
 ndbnode3   # data node (3)  
 node1      # MySQL node 1  
 node2      # MySQL node 2  
 node3      # MySQL node 3

The clusters are well framed into dbdeployer’s architecture, and they respond to standard commands like any other sandbox:

$ dbdeployer sandboxes --full-info  
.-------------------.------.-----------.----------------------------------------------.--------.-------.--------.  
|       name        | type |  version  |                    ports                     | flavor | nodes | locked |  
+-------------------+------+-----------+----------------------------------------------+--------+-------+--------+  
| ndb_msb_ndb7_6_9  | ndb  | ndb7.6.9  | [20900 27510 27511 27512 ]                   | ndb    |     3 |        |  
| ndb_msb_ndb8_0_14 | ndb  | ndb8.0.14 | [21400 28415 38415 28416 38416 28417 38417 ] | ndb    |     3 |        |  
'-------------------'------'-----------'----------------------------------------------'--------'-------'--------'

$ dbdeployer global status  
# Running "status_all" on ndb_msb_ndb7_6_9  
MULTIPLE  /Users/gmax/sandboxes/ndb_msb_ndb7_6_9  
node1 : node1 on  -  port   27510 (27510)  
node2 : node2 on  -  port   27511 (27511)  
node3 : node3 on  -  port   27512 (27512)

# Running "status_all" on ndb_msb_ndb8_0_14  
MULTIPLE  /Users/gmax/sandboxes/ndb_msb_ndb8_0_14  
node1 : node1 on  -  port   28415 (28415)  
node2 : node2 on  -  port   28416 (28416)  
node3 : node3 on  -  port   28417 (28417)

$ dbdeployer global test-replication  
# Running "test_replication" on ndb_msb_ndb7_6_9  
# master 1  
# master 2  
# master 3  
# slave 1  
ok - '3' == '3' - Slaves received tables from all masters  
# slave 2  
ok - '3' == '3' - Slaves received tables from all masters  
# slave 3  
ok - '3' == '3' - Slaves received tables from all masters  
# pass: 3  
# fail: 0

# Running "test_replication" on ndb_msb_ndb8_0_14  
# master 1  
# master 2  
# master 3  
# slave 1  
ok - '3' == '3' - Slaves received tables from all masters  
# slave 2  
ok - '3' == '3' - Slaves received tables from all masters  
# slave 3  
ok - '3' == '3' - Slaves received tables from all masters  
# pass: 3  
# fail: 0

Like other topologies, the NDB cluster also has a script that shows the status of the nodes:

$ ~/sandboxes/ndb_msb_ndb7_6_9/check_nodes  
+---------+-----------+---------------+--------+---------+-------------+-------------------+  
| node_id | node_type | node_hostname | uptime | status  | start_phase | config_generation |  
+---------+-----------+---------------+--------+---------+-------------+-------------------+  
|       2 | NDB       | localhost     |     58 | STARTED |           0 |                 1 |  
|       3 | NDB       | localhost     |     58 | STARTED |           0 |                 1 |  
|       1 | MGM       | localhost     |   NULL | NULL    |        NULL |              NULL |  
|       4 | API       | localhost     |   NULL | NULL    |        NULL |              NULL |  
|       5 | API       | localhost     |   NULL | NULL    |        NULL |              NULL |  
|       6 | API       | localhost     |   NULL | NULL    |        NULL |              NULL |  
|       7 | API       | localhost     |   NULL | NULL    |        NULL |              NULL |  
+---------+-----------+---------------+--------+---------+-------------+-------------------+  
Connected to Management Server at: localhost:20900  
Cluster Configuration  
---------------------  
[ndbd(NDB)] 2 node(s)  
id=2    @127.0.0.1  (mysql-5.7.25 ndb-7.6.9, Nodegroup: 0, *)  
id=3    @127.0.0.1  (mysql-5.7.25 ndb-7.6.9, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)  
id=1    @127.0.0.1  (mysql-5.7.25 ndb-7.6.9)

[mysqld(API)]   4 node(s)  
id=4    @127.0.0.1  (mysql-5.7.25 ndb-7.6.9)  
id=5    @127.0.0.1  (mysql-5.7.25 ndb-7.6.9)  
id=6    @127.0.0.1  (mysql-5.7.25 ndb-7.6.9)  
id=7 (not connected, accepting connect from localhost)

It is possible that we will need more iterations to make the deployment more robust. When testing it, keep in mind that this deployment is only for testing, and it probably won’t have the performance that you may find in a well-deployed production cluster. Still, compared to other topologies, the replication tests performed faster than I expected.

Friday, March 08, 2019

dbdeployer community - Part 2: Percona XtraDB Cluster

This was not on the radar. I have never been proficient in Galera clusters and related technologies, and thus I hadn’t given much thought to Percona XtraDB Cluster (PXC), until Alkin approached me at FOSDEM and proposed to extend dbdeployer features to support PXC. He mentioned that many support engineers at Percona use dbdeployer on a daily basis, and that the addition of PXC would be welcome.

I could not follow up much during the conference, but we agreed on making a proof-of-concept in an indirect way: if several nodes of PXC can run in the same host using shell scripts, dbdeployer could reproduce that behavior.

A few weeks later, when dbdeployer had already been enhanced with flavors and capabilities, I got the script that can deploy several nodes in the same host. It’s a simplification of the ones used in Percona PXC tests, which got me started.

I followed a method similar to the one I used for MySQL Group Replication. The technology is similar, although the MySQL team used a different approach for the installation. The basic principle is that the cluster needs two ports per node: in addition to the regular MySQL port, there is a communication port (the SST, or State Snapshot Transfer, port) that is needed to exchange cluster data. Using this information, and following the sample in the script, I could produce a prototype that surprisingly worked on the first try!

The cluster did deploy, and the replication test, which comes free of charge when you implement a replication-type sandbox using standard templates, worked flawlessly.

Then I hooked the deployment method into dbdeployer’s concurrency engine, which is able to deploy several nodes at once. Here I hit the first problem. In PXC, the nodes are not equal at startup. The first node needs to be initialised without the other nodes’ addresses, and it becomes the reference for the other nodes to join the cluster. If I provided complete references for all nodes (as I do for MySQL Group Replication), it didn’t work.

After some talk with Percona engineers on Slack, I figured out that the nodes can be deployed together, and the second and third nodes will just wait for the first one to come online and then join. That worked in principle, i.e. when I deployed sequentially, but not when the nodes were deployed all at once. Fortunately, dbdeployer has several ways of enabling debugging output, and after a few unsuccessful attempts I found the reason: PXC initialisation happens using rsync on port 4444. When the nodes are started sequentially, the receiving node takes control of port 4444 without conflicts, gets the job done, and releases the port. When we deploy all nodes at once, there is a race for possession of the synchronisation port, and a random node will win it, leaving the others waiting forever.

Thus, I modified the installation to allocate a different rsync port for each node, and after that the concurrent installation worked as well.

The last obstacle was the discovery that there is yet another port (IST, or Incremental State Transfer port), which is always one number bigger than the SST port. Thus, if the SST port is, say, 5555, the IST port is set to 5556. This means that, unlike other dbdeployer clusters, I can’t set port numbers incrementally, but need to set them at intervals. I did that, and the cluster came with a default allocation of four ports per node (MySQL, rsync, SST, IST). If we also enable MySQLX, which comes included since PXC binaries are based on MySQL 5.7, we get 5 ports per node, and a majestic 15 ports for a three-node cluster.
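To make the port arithmetic concrete, here is a small shell sketch. This is not dbdeployer’s actual code, and the mapping of which port serves SST, IST, or rsync is my reading of the sandbox description shown later in this post; the base values are chosen to match that example:

base=26225      # MySQL ports will be base+1, base+2, base+3
sst=26351       # SST/IST pairs start here
rsync=26363     # per-node rsync ports start here
for node in 1 2 3
do
    mysql_port=$((base + node))
    sst_port=$((sst + 2 * node - 1))
    ist_port=$((sst_port + 1))       # IST is always SST + 1
    rsync_port=$((rsync + node))
    echo "node $node: mysql=$mysql_port sst=$sst_port ist=$ist_port rsync=$rsync_port"
done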

Anyway, the support for Percona XtraDB Cluster is available in dbdeployer 1.21.0. Let’s see a sample session to use the new functionality.

$ dbdeployer --version
dbdeployer version 1.21.0

$ dbdeployer unpack --prefix=pxc ~/downloads/Percona-XtraDB-Cluster-5.7.25-rel28-31.35.1.Linux.x86_64.ssl100.tar.gz  
[...]  
Renaming directory $HOME/opt/mysql/Percona-XtraDB-Cluster-5.7.25-rel28-31.35.1.Linux.x86_64.ssl100 to $HOME/opt/mysql/pxc5.7.25

Before trying the cluster, it is good practice to make sure that your system can deploy a single node.

$ dbdeployer deploy single pxc5.7.25  
Database installed in $HOME/sandboxes/msb_pxc5_7_25  
run 'dbdeployer usage single' for basic instructions'  
. sandbox server started

$ $HOME/sandboxes/msb_pxc5_7_25/test_sb  
[...]  
# Tests :    11  
# PASS  :    11  
# fail  :     0

$ dbdeployer delete msb_pxc5_7_25  
[...]

And now for the real test:

$ dbdeployer deploy replication --topology=pxc pxc5.7.25  
Installing and starting node 1  
. sandbox server started  
Installing and starting node 2  
...... sandbox server started  
Installing and starting node 3  
..... sandbox server started  
Replication directory installed in $HOME/sandboxes/pxc_msb_pxc5_7_25  
run 'dbdeployer usage multiple' for basic instructions'

We should now see all the allocated ports.

$ dbdeployer sandboxes --header  
            name                       type             version                                      ports  
---------------------------- ------------------------ ----------- ----------------------------------------------------------------------------  
 pxc_msb_pxc5_7_25        :   Percona-Xtradb-Cluster   pxc5.7.25   [26226 26352 26353 26364 26227 26354 26355 26365 26228 26356 26357 26366 ]

If we want more detail, we can look at the sandbox description file:

$ cat $HOME/sandboxes/pxc_msb_pxc5_7_25/sbdescription.json  
{
    "basedir": "$HOME/opt/mysql/pxc5.7.25",  
    "type": "Percona-Xtradb-Cluster",  
    "version": "pxc5.7.25",  
    "flavor": "pxc",  
    "port": [  
        26226,  
        26352,  
        26353,  
        26364,  
        26227,  
        26354,  
        26355,  
        26365,  
        26228,  
        26356,  
        26357,  
        26366  
    ],  
    "nodes": 3,  
    "node_num": 0,  
    "dbdeployer-version": "1.21.0",  
    "timestamp": "Thu Mar  7 17:20:03 CET 2019",  
    "command-line": "dbdeployer deploy replication --topology=pxc pxc5.7.25"  
 }
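Since the description file is plain JSON, standard tools can query it. For example, with jq (assuming it is installed), we can verify the four-ports-per-node arithmetic:

$ jq '.port | length' $HOME/sandboxes/pxc_msb_pxc5_7_25/sbdescription.json
12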

Now we can run the replication test. Given that we have a cluster where all nodes are masters, the test will create a table in each node, and read the result in each slave (again, each node):

$ $HOME/sandboxes/pxc_msb_pxc5_7_25/test_replication  
# master 1  
# master 2  
# master 3  
# slave 1  
ok - '3' == '3' - Slaves received tables from all masters  
# slave 2  
ok - '3' == '3' - Slaves received tables from all masters  
# slave 3  
ok - '3' == '3' - Slaves received tables from all masters  
# pass: 3  
# fail: 0

It’s a simple test, but it tells us that the cluster is fully functional.

Thanks to Alkin and other Percona engineers who have tested the prototype in real time.

Wednesday, March 06, 2019

dbdeployer community - Part 1: TiDB

After a conference, when I take stock of what I have learned, I usually realise that the best achievements are the result of interacting with other attendees during the breaks, rather than simply listening to the lectures. It might be because I closely follow the blogosphere and thus the lectures hold few surprises for me, or perhaps because many geeks take the conference as an excuse to refresh dormant friendships, catch up on technical gossip, and ask their friends questions that were too sensitive to be discussed over Twitter and have been waiting for the chance of an in-person meeting to see the light of day.

I surely had some such questions, and I took advantage of the conference to ask them. As often happens, I got satisfactory responses, but the latest FOSDEM conference was different from usual, because the best experience came from the questions that others asked me.

As it turned out, others were waiting for a chance to discuss things over coffee or food, and I saw that my pet project (dbdeployer) is a lot more popular than I thought, and is being used silently in several environments. This should not be surprising if you read bug reports at bugs.mysql.com, where the use of sandboxes to reproduce user issues is common. Anyway, I got some praise, some requests, a few ideas for improvements, advance notice of an incoming graphical interface, and a few concrete collaboration proposals.

One such proposal came from Morgan Tocker, who suggested enhancing dbdeployer to support TiDB. At first, it seemed uninteresting, as TiDB is designed to be distributed, and installing just one component didn’t immediately look useful. However, Morgan pointed out that it could be used as a tool to test compatibility with existing applications, and as such it could gain much more value than I initially thought. We decided to try a quick hackathon to make a proof of concept.

It was a great pleasure to figure out, in just over one hour of close interaction, that dbdeployer design for flexibility was up to the task. We managed to make TiDB work with dbdeployer simply by exporting, editing, and re-loading a few templates.

The exercise showed strengths and limitations in both projects. We agreed that dbdeployer had to lose some assumptions (such as “I am working with a MySQL server”) and become able to recognise which flavor of MySQL-lookalike we are dealing with. At the same time, we noted that TiDB is not compatible when it comes to deployment and bootstrap: it is so simple and straightforward that its initialisation doesn’t fit in the complex operation that is a MySQL server warm-up.

Pleased with the initial success, we kept in touch and, after dbdeployer acquired the ability to tell one flavor from another, we put together the various pieces to make dbdeployer recognise and install TiDB. We found and fixed several bugs in both projects, and finally released dbdeployer 1.19.0, which can use a TiDB server transparently.

What does transparently mean? It means that tests for TiDB deployment can run alongside tests for other MySQL servers, and the sandbox scripts (such as start, stop, use, status, and test_sb) work as expected and produce compatible output. Thus, there is a TiDB test running together with another dozen MySQL versions.
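For example, after the deployment shown below, these commands work on a TiDB sandbox just as they would on a MySQL one:

$ ~/sandboxes/msb_tidb3_0_0/status
$ ~/sandboxes/msb_tidb3_0_0/test_sb
$ ~/sandboxes/msb_tidb3_0_0/stop
$ ~/sandboxes/msb_tidb3_0_0/start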

Now, if you want, you can evaluate TiDB on your computer without installing the full stack. It won’t be as fast as the real thing: what is installed as a single node is a slower emulation of the real database, but it is enough to give you an idea of which queries you can and cannot run in TiDB, and perhaps to see if your application could run on TiDB at all.

The collaboration with TiDB was especially useful because the changes needed to smooth the TiDB integration have made dbdeployer better suited to add support for more not-quite-MySQL servers, such as the one that we’ll see in the next post.

But before reaching that point, here’s an example of TiDB deployment on Linux:

$ wget https://download.pingcap.org/tidb-master-linux-amd64.tar.gz  
[...]  
2019-02-24 04:46:26 (2.26 MB/s) - 'tidb-master-linux-amd64.tar.gz' saved [16304317/16304317]

$ dbdeployer unpack tidb-master-linux-amd64.tar.gz --unpack-version=3.0.0 --prefix=tidb  
Unpacking tarball tidb-master-linux-amd64.tar.gz to $HOME/opt/mysql/tidb3.0.0  
1
Renaming directory /home/msandbox/opt/mysql/tidb-master-linux-amd64 to /home/msandbox/opt/mysql/tidb3.0.0

TiDB tarballs don't come with a client. We need to use one from MySQL 5.7. Rather than downloading the huge tarball from the MySQL site, we can get a smaller one from a GitHub repository, using dbdeployer itself (NB: this reduced tarball is only available for Linux):


$ dbdeployer remote list  
Files available in https://raw.githubusercontent.com/datacharmer/mysql-docker-minimal/master/dbdata/available.json  
5.7 -> [mysql-5.7.24 mysql-5.7.25]  
8.0 -> [mysql-8.0.13 mysql-8.0.15]  
4.1 -> [mysql-4.1.22]  
5.0 -> [mysql-5.0.15 mysql-5.0.96]  
5.1 -> [mysql-5.1.72]  
5.5 -> [mysql-5.5.61 mysql-5.5.62]  
5.6 -> [mysql-5.6.41 mysql-5.6.43]

$ dbdeployer remote get mysql-5.7.25  
File /home/msandbox/mysql-5.7.25.tar.xz downloaded

$ dbdeployer unpack mysql-5.7.25.tar.xz  
[...]  
Renaming directory /home/msandbox/opt/mysql/mysql-5.7.25 to /home/msandbox/opt/mysql/5.7.25

Now we are ready to install TiDB:

$ dbdeployer deploy single tidb3.0.0 --client-from=5.7.25  
Creating directory /home/msandbox/sandboxes  
Database installed in $HOME/sandboxes/msb_tidb3_0_0  
run 'dbdeployer usage single' for basic instructions'  
. sandbox server started

Once installed, a TiDB sandbox behaves like a MySQL sandbox.

$ $HOME/sandboxes/msb_tidb3_0_0/use  
Welcome to the MySQL monitor.  Commands end with ; or \g.  
Your MySQL connection id is 2  
Server version: 5.7.10-TiDB-v3.0.0-beta-111-g266ff4b6f MySQL Community Server (Apache License 2.0)

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its  
affiliates. Other names may be trademarks of their respective  
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql [localhost:3000] {msandbox} ((none)) >

Friday, April 20, 2018

MySQL adjustment bureau


When maintaining any piece of software, we usually deal with two kinds of actions:

  • bug fixing,
  • new features.

Bugs and features

A bug happens when there is an error in the software, which does not behave according to the documentation or the specifications. In short, it's a breach of contract between the software maintainer and the users. The promise, i.e. the software API that was published at every major version, is broken, and the software must be reconciled with the expectations and fixed, so that it behaves again as the documentation says. When we fix a bug in this way, we increment the revision number of the software version (e.g. 1.0.0 to 1.0.1; see semantic versioning).

New features, in turn, can be of two types:

  • Backward-compatible enhancements, which add value to the software without breaking the existing functionality. This is the kind of change that requires an increment of the minor indicator in the version (for example: 1.1.15 to 1.2.0).
  • Incompatible changes that break the existing behavior and require users to change their workflow. This kind of change requires bumping up the major number in the version (as in 2.1.3 to 3.0.0); see the sketch after this list.
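As a toy illustration of these rules (a generic sketch, not code from any project):

bump() {
    local version=$1 part=$2
    IFS=. read -r major minor revision <<< "$version"
    case $part in
        major)    echo "$((major + 1)).0.0" ;;               # incompatible change
        minor)    echo "$major.$((minor + 1)).0" ;;          # compatible feature
        revision) echo "$major.$minor.$((revision + 1))" ;;  # bug fix
    esac
}
bump 1.0.0 revision   # -> 1.0.1
bump 1.1.15 minor     # -> 1.2.0
bump 2.1.3 major      # -> 3.0.0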

Not a bug, nor a feature, but an adjustment.

The above concepts seem simple enough: you either fix something that's broken or add new functionality.

However, when maintaining a tool whose purpose is to help users deal with another piece of software (as is the case with dbdeployer, which helps users deploy MySQL databases), there is yet another category of changes that doesn't fall into the standard categories: it's what happens when the software being helped (MySQL) changes its behavior in a way that would break the normal functioning of the helping tool, giving the maintainer a difficult choice:

  • shall I modify the tool's interface to adapt to the new behavior, breaking existing procedures?
  • or shall I adapt the tool's functioning behind the scenes to keep the interface unchanged?

My philosophy with dbdeployer (and MySQL-Sandbox before it) is to preserve the tool's interface, so that users don't have to change existing procedures. I call this kind of change an adjustment, because it is not a bug, as it is not the consequence of a coding error, and not a feature, as the intervention is not a conscious decision to add new functionality, but an emergency operation to preserve the status quo. You can think of this category as a capricious change in specifications, which so often happens to software developers, with the difference that the one changing the specs is not the user, but a third party who doesn't know, or care, about our goal of preserving the API's integrity.

For example, from MySQL 8.0.3 to 8.0.4 there was a change in the default authentication plugin. Instead of mysql_native_password, MySQL 8.0.4 uses caching_sha2_password. The immediate side effect for MySQL-Sandbox and dbdeployer was that replication didn't work out of the box. A possible solution would have been to force the old authentication plugin, but this would not allow users to test the new one. Since the main reason to use a tool like dbdeployer is to experiment with new releases safely, I had to keep the default behavior. Thus, I left the default plugin in place, and changed the way replication works. It's an ugly workaround, actually, but it allows users to see the new behavior without losing existing functionality.
To complete the adjustment, I added a new option, --native-auth-plugin, which deploys using the old mysql_native_password. In total, the adjustment consists of a behind-the-scenes change, almost undetectable by users, and a new option to keep using the familiar authentication if users want it.
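For example, with a replication deployment like the ones shown elsewhere in this archive, the two behaviors would be selected like this:

# default: caching_sha2_password, with the adjusted replication setup
$ dbdeployer deploy replication 8.0.4

# opt-in: deploy using the old mysql_native_password
$ dbdeployer deploy replication 8.0.4 --native-auth-plugin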

From the point of view of semantic versioning, this kind of change is a backward-compatible modification of the API, which warrants an increase of the minor number of the version.

Another example: when MySQL went from 8.0.4 to 8.0.11, it introduced a deal-breaking change: the X Plugin is now loaded by default. This is convenient for users of MySQL as a document store, as they don't need to enable the plugin manually, but bad news for everyone else, as the server opens a port and a socket that many users may not choose to open voluntarily. What's worse, when installing several sandboxes of version 8.0.11 on the same host (for example in replication), one will succeed in reserving the plugin port and socket, while the others will have their error logs populated with surprising errors about a socket already being in use.

The solution is similar to the previous one. When dbdeployer detects MySQL 8.0.11 or newer, it adds options to customize the mysqlx plugin port and socket, thus allowing a frictionless deployment where the new functionality is available to the brave experimenters. At the same time, I added a new option (--disable-mysqlx) for the ones who really don't want an extra port and socket in their servers, not even for testing.
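For example (the first command reflects the default behavior described above; the second one uses the new option):

# each sandbox gets its own mysqlx port and socket automatically
$ dbdeployer deploy replication 8.0.11

# no X Plugin port or socket at all
$ dbdeployer deploy single 8.0.11 --disable-mysqlx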

These adjustments are usually costly additions. While the added code is not much, they require extra tests, which are often complex and take more time to write and execute. The process of adding an adjustment goes mostly like this:

  • I dedicate my morning walk to thinking about the fix. Sometimes the fix requires several walks, while I decide on the least intrusive solution.
  • If the walk has been fruitful, writing the code requires just a few minutes. If I missed something, I iterate.
  • Then comes the more difficult part: writing meaningful tests that prove that the adjustment is correct and doesn't introduce side effects in any MySQL version. And of course, the option that reintroduces the old behavior must be tested too.
  • A positive side effect of this exercise is that I often realize that I was missing a test for an important behavior, and then I write that down as well. The test suite included 6,000+ tests a month ago, and now it has almost doubled.

Tuesday, April 03, 2018

Test MySQL 8.0 right in your computer

MySQL 8.0 GA is right around the corner. I don't have precise information about its release, as I don't work at Oracle. If I did, I would probably know, but I couldn't tell when the release is scheduled to appear because of company policies. I can, however, speculate and infer, based on my experience with previous releases. My personal assessment is that the release will appear before 9:00am PT on April 24, 2018. The "before" can be anything from a few minutes to one week in advance.
Then again, it may not happen at all if someone finds an atrocious bug that needs to be fixed asap.

Either way, users are keen on testing the new release in its current release-candidate state. Here I show a few methods that allow you to get a taste of the new goodies without waiting for the triumphal (keynote) announcement.


1. Docker containers

If you are a Docker user, using a container to test MySQL is a no-brainer. Unlike virtual machines or standalone servers, a Docker container comes ready to use, with nothing to configure. All you need to do is pull the right image. As with every Docker image, you pull it once and then use it as many times as you need.

There are two reliable images that contain the latest MySQL. One is called mysql:8.0 and is tagged as official, which means that it is released by the Docker maintenance team. The other one, which is released by the MySQL team, is called mysql/mysql-server:8.0.

$ docker pull mysql:8.0
8.0: Pulling from library/mysql
Digest: sha256:7004063f8bd0c7bade8d1c526b9b8f5188c8288f411d76ee4ba83131e00c6f02
Status: Downloaded newer image for mysql:8.0

$ docker pull mysql/mysql-server:8.0
8.0: Pulling from mysql/mysql-server
Digest: sha256:e81d95f788adb04a4d2fa5f6f7e9283ca0f6360fb518efe65af5a7377a4ec282
Status: Downloaded newer image for mysql/mysql-server:8.0

The mysql image is based on Debian, while the original package, as you would expect, is based on Oracle Linux.

Let's see how to run MySQL in a container.

$ docker run --name official  -e MYSQL_ROOT_PASSWORD=secret -d mysql:8.0
60ec307578a139f5083ded07e94d737690d287b1b95093878675983a5cc40174

$ docker run --name original -e MYSQL_ROOT_PASSWORD=secret \
    -d mysql/mysql-server:8.0
0c93bb4a97ffa53232a69732d3ae45413a443e38fa43ad6fdc4057168cba42d2

With the above commands we get two containers, one for the official image and one for the original one.
We can't use them straight away, though: we need to wait for the servers to be ready. An easy way to verify the status of a server is to look at its docker logs:

$ docker logs original --tail 1
2018-04-01T21:23:30.395461Z 0 [System] [MY-010931] /usr/sbin/mysqld: ready for connections. Version: '8.0.4-rc-log'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL).

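If this check is part of a script, a small polling loop (my own sketch, not something shipped with the images) can wait until the message appears:

until docker logs original 2>&1 | grep -q 'ready for connections'
do
    sleep 1
done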

Here, after about 10 seconds, the container is ready to use; the same check works for the other one. We can now access the servers. One easy method is through docker exec:

$ docker exec -ti original mysql -psecret
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 15
Server version: 8.0.4-rc-log MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

A similar command would allow us to access the other container.

If you want to try replication, more work is needed. You can find more details on Docker operations, and examples of advanced deployments, in dedicated articles.


2. Sandboxes

A sandboxed database is deployed in a non-dedicated box, with its configuration altered in such a way that it will run independently from other similar deployments, and even from databases running in the main space.
The granddaddy of sandbox deployers was MySQL-Sandbox, which has recently evolved into the more powerful and easier-to-use dbdeployer.
You can use MySQL-Sandbox to test a MySQL 8.0 tarball on MacOS:

$ make_sandbox --export_binaries  mysql-8.0.4-rc-macos10.13-x86_64.tar.gz

This command unpacks the tarball into $HOME/opt/mysql and deploys the database in $HOME/sandboxes/msb_8_0_4.
Until recently, the same command would work on Linux without modifications. With MySQL 8.0.4, though, the tarball organization for Linux has changed: there are symbolic links for SSL libraries inside the ./bin directory. Those symlinks are not extracted by default, but only if you use the option --keep-directory-symlink when opening the tarball. MySQL-Sandbox doesn't do that, partly because this option is not standard across versions of tar.

Thus, if you want to use the old MySQL-Sandbox, you need to run the extraction manually.

$ cd $HOME/opt/mysql
$ tar --keep-directory-symlink -xzf /tmp/mysql-8.0.4-rc-linux-glibc2.12-x86_64.tar.gz
$ mv mysql-8.0.4-rc-linux-glibc2.12-x86_64 8.0.4
$ make_sandbox 8.0.4

I don't recommend the above procedure, for either Linux or MacOS. The main reason, in addition to the manual operations involved, is that MySQL-Sandbox is not going to be updated for the time being. Instead, you should use dbdeployer, which has all the main features of MySQL-Sandbox and a lot of new ones. Here's the equivalent procedure:

$ dbdeployer unpack /tmp/mysql-8.0.4-rc-linux-glibc2.12-x86_64.tar.gz
$ dbdeployer deploy single 8.0.4
Database installed in $HOME/sandboxes/msb_8_0_4
run 'dbdeployer usage single' for basic instructions'
. sandbox server started

dbdeployer uses a different method to initialize the database server, which at the same time makes the initialization more visible and avoids the problem of the phantom SSL libraries.

Note: tarballs for recent MySQL versions are really big. MySQL 8.0.4 binaries expand to 1.9 GB. If storage is an issue, you should get the tarballs from a collection of minimised tarballs (Linux only) for most MySQL versions. For now, it's maintained by me, but I hope that the MySQL team will release something similar.

Once you have deployed a sandbox with MySQL 8.0, using it is easy:

$ cd $HOME/sandboxes/msb_8_0_4
$ ./use
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.4-rc-log MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql [localhost] {msandbox} ((none)) >

dbdeployer creates several shortcuts for the most common commands to use the database. ./use is the most common, and provides access to the MySQL client with all the options needed to use it correctly. For more information on what is available, run:

$ dbdeployer usage single

This functionality alone would be enough to make a sandbox your preferred method for testing. However, this is only a tiny portion of what you can do with dbdeployer on your own computer. With a single command, you can test master/slave replication, multi-primary group replication, single-primary group replication, fan-in, and all-masters topologies.

You can try the following commands:

$ dbdeployer deploy single 8.0.4
$ dbdeployer deploy replication 8.0.4
$ dbdeployer deploy replication 8.0.4 --topology=group
$ dbdeployer deploy replication 8.0.4 --topology=group --single-primary
$ dbdeployer deploy replication 8.0.4 --topology=all-masters
$ dbdeployer deploy replication 8.0.4 --topology=fan-in

If you have enough RAM, all these deployments will survive in parallel.
On my desktop, I can run:

$ dbdeployer sandboxes --header
name                        type                    version  ports
----------------            -------                 -------  -----
all_masters_msb_8_0_4     : all-masters               8.0.4 [15001 15002 15003]
fan_in_msb_8_0_4          : fan-in                    8.0.4 [14001 14002 14003]
group_msb_8_0_4           : group-multi-primary       8.0.4 [20009 20134 20010 20135 20011 20136]
group_sp_msb_8_0_4        : group-single-primary      8.0.4 [21405 21530 21406 21531 21407 21532]
msb_8_0_4                 : single                    8.0.4 [8004]
rsandbox_8_0_4            : master-slave              8.0.4 [19009 19010 19011]

When MySQL 8.0.11 is released, you can replace "8.0.4" with "8.0.11" and get a similar result.

BTW, you may have seen that deploying replication sandboxes can take a long time. Try adding --concurrent to each command, and enjoy a notable speed increase.
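For example:

$ dbdeployer deploy replication 8.0.4 --topology=group --concurrent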

What else can you do with the sandboxes you have just deployed? Plenty! For a complete list, have a look at the online documentation. But for the moment, you may try this:

$ dbdeployer global status
$ dbdeployer global test
$ dbdeployer global test-replication

3. Other methods

Besides the methods that I recommend, there are others that you could use, but I won't give advice about them, as there are people more qualified than me for that.

  • Standalone server. If you have the luxury of having one or more standalone servers sitting in a lab, by all means go for it. Just follow the instructions for installing MySQL on your lucky server. Be advised, though, that depending on the method you choose and the version of your operating system, you may face compatibility issues (.rpm or .deb dependencies).
  • Virtual machines. VMs share with standalone servers the same ease of installation (and the same dependency issues), only a bit slower. They are convenient, as you can use them to test in conditions that more closely resemble production settings, and if you use a configuration management tool such as Puppet or Ansible, your task of testing the new version could be greatly simplified. The instructions for virtual machines are the same as for standalone servers.

Monday, April 02, 2018

dbdeployer GA and semantic versioning

dbdeployer went into release candidate status a few weeks ago. Since then, I added no new features, but a lot of tests. The test suite now runs 3,000+ tests on MacOS and a bit more on Linux, for a grand total of 6,000+ tests that need to run at least twice: once with concurrency enabled and once without. I know that testing can't prove the absence of bugs, but I am satisfied with the results, since all this grinding has allowed me to find several bugs and fix them.

With this in mind, I felt that dbdeployer could exit candidate status and reach version 1.0. This happened on March 26th. An immediate side effect of this change is that, from this point on, dbdeployer must adhere to semantic versioning principles:

A version number is made of Major, Minor, and Revision. When changes are applied, the following happens:

  • Backward-compatible bug fixes increment the Revision number (e.g. 1.0.0 to 1.0.1)
  • Backward-compatible new features increment the Minor number (1.0.1 to 1.1.0)
  • Backward incompatible changes (either features or bug fixes that break compatibility with the API) increment the Major number (1.15.9 to 2.0.0)

The starting API is defined in API-1.0.md, which was generated manually.
The file API-1.1.md contains the same API definition, but was generated automatically and can be used to better compare the initial API with further versions.

So the app went from 1.0 to 1.1 in less than one week. In obedience to semantic versioning principles, if a new backward-compatible feature is added, the minor number of the version increases. What does backward-compatible mean? It means that commands, procedures, and workflows that were working with the previous version will also work with the current one. It's just that the new release will have more capabilities. In this case, the added feature is the ability to have the environment variables HOME and PWD recognized and properly expanded in the configuration file. It's nothing very exciting, but changing the minor number gives the user a hint of what to expect from the new release.
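For example, using the export/edit/import workflow (shown in detail in the "Customizing dbdeployer" post below), one could put a variable in a value; the edit here is hypothetical:

$ dbdeployer defaults export defaults.json
$ # edit defaults.json so that sandbox-home reads "$HOME/test-sandboxes"
$ dbdeployer defaults import defaults.json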

Let's give a few examples:

  • Version goes from 1.0.0 to 1.0.1: It means that there are only bug fixes, and you should expect to use it without modifications.
  • Version goes from 1.0.1 to 1.1.0: You should be able to use dbdeployer just as before, but you should check the release notes to see what's new, because there are new functionalities that might be useful to you.
  • Version goes from 1.3.15 to 2.0.0: Danger! A major number bumped up means that something has changed in the API, which is now partially or totally incompatible with the previous release. Your workflow may break, and you must check the release notes and the documentation to learn how to use the new version.

This is different from other applications. For example, the MySQL server uses version numbers with hard to predict meaning:

  • MySQL 5.1, 5.5, 5.6, and 5.7 should, in fact, have been major version number changes, not minor ones. Each one of them introduces incompatible changes that require careful review of the novelties.
  • Within the same version (such as MySQL 5.7) there are a lot of compatible and incompatible changes, although the minor number stays the same.

The plan with dbdeployer is to use the version number as a manifest, to give users an immediate feeling of what to expect. Rather than changing minor or major number only when the developers think there is some juicy new thing of which they can be proud, the version number will tell whether users should worry about compatibility or not.

In my general development plan, you are more likely to see versions like "1.25.16" than version "2.0," meaning that I will try to keep the current API valid as much as possible. A major version change will signify that a new feature could not fit in the current infrastructure and a new one would be needed.

You can draw your own conclusions here. A semantic versioning paradigm is unlikely to be adopted by most software vendors, because version numbers are often marketing gimmicks, and they can charge you more convincingly for a version 6.0 than for version 1.34.
Free software, OTOH, can do this. My goal with dbdeployer is to help the MySQL community, and I will achieve that goal more easily if my releases can be adopted without fear of incompatibility.


Monday, March 12, 2018

dbdeployer release candidate


The latest release of dbdeployer is possibly the last one with a leading 0. If no serious bugs are found in the next two weeks, the next release will bear a glorious 1.0.

Latest news

The decision to get out of the stream of pre-releases that were published until now comes because I have implemented all the features that I wanted to add: mainly, all the ones that I wished to add to MySQL-Sandbox but would have been too hard to implement there.

The latest addition is the ability to run multi-source topologies. Now we can run four topologies:

  • master-slave is the default topology. It will install one master and two slaves. More slaves can be added with the option --nodes.
  • group will deploy three peer nodes in group replication. If you want to use a single-primary deployment, add the option --single-primary. Available for MySQL 5.7 and later.
  • fan-in is the opposite of master-slave. Here we have one slave and several masters. This topology requires MySQL 5.7 or higher.
  • all-masters is a special case of fan-in, where all nodes are masters and are also slaves of all nodes.

It is possible to tune the flow of data in multi-source topologies. The default for fan-in is three nodes, where nodes 1 and 2 are masters, and node 3 is a slave. You can change the predefined settings by providing the list of components:

$ dbdeployer deploy replication \
    --topology=fan-in \
    --nodes=5 \
    --master-list="1 2 3" \
    --slave-list="4 5" \
    8.0.4 \
    --concurrent

In the above example, we get 5 nodes instead of 3. The first three are masters (--master-list="1 2 3") and the last two are slaves (--slave-list="4 5"), which will receive data from all the masters. A test to check the replication flow is generated automatically. In our case it shows the following:

$ ~/sandboxes/fan_in_msb_8_0_4/test_replication
# master 1
# master 2
# master 3
# slave 4
ok - '3' == '3' - Slaves received tables from all masters
# slave 5
ok - '3' == '3' - Slaves received tables from all masters
# pass: 2
# fail: 0

The first three lines show that each master has done something. In our case, each master has created a different table. The slaves in nodes 4 and 5 then count how many tables they found, and if they got the tables from all masters, the test succeeds.
Note that for all-masters topology there is no need to specify master-list or slave-list. In fact, those lists will be auto-generated, and they will both include all deployed nodes.
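For example, this deploys a three-node all-masters topology with no node lists:

$ dbdeployer deploy replication 8.0.4 --topology=all-masters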

What now?

Once I make sure that the current features are reasonably safe (I will only write more tests for the next 10~15 days), I will publish the first (non-pre) release of dbdeployer. From that moment, I'd like to follow the recommendations of Semantic Versioning:

  • The initial version will be 1.0.0 (major, minor, revision);
  • The specs for 1.0 will be the API that needs to be maintained.
  • Bug fixes will increment the revision counter.
  • New features that don't break compatibility with the API will increment the minor counter;
  • New features or changes that break compatibility will trigger a major counter increment.

Using this method will give users a better idea of what to expect. If we get a revision number increase, it is only bug fixes. An increase in the minor counter means that there are new features, but all previous features work as before. An increase in the major counter means that something will break, either because of changed interface or because of changed behavior.
In practice, the tests released with 1.0.0 should run with any 1.x subsequent version. When those tests need changes to run correctly, we will need to bump up the major version.

Let's see if this method is sustainable. So far, I haven't needed to make behavioural changes, which are usually provoked by new versions of MySQL that introduce incompatible behavior (definitely, MySQL does not follow Semantic Versioning principles). When the next version becomes available, I will see if this RC of dbdeployer can stand its ground.


Sunday, March 11, 2018

Concurrent sandbox deployment


Version 0.3.0 of dbdeployer has gained the ability to deploy multiple sandboxes concurrently. Whenever we deploy a group of sandboxes (replication, multiple), we can use the --concurrent flag, telling dbdeployer that it should run operations concurrently.

What happens when a single sandbox gets deployed? There are six sets of operations:

  1. Create the sandbox directory and write down its scripts;
  2. Run the initialisation script;
  3. Start the database server;
  4. Run the pre-grants SQL commands (if any);
  5. Load the grants;
  6. Run the post-grants SQL commands (if any).

When several sandboxes are deployed concurrently, dbdeployer runs only the first step, and then creates a list of commands with an associated priority index. These commands are assembled for every sandbox, and then executed concurrently for every step.
The sequence of events for a deployment of three sandboxes in replication would be like this:

  1. Create the sandbox skeleton for every sandbox;
  2. Initialise all database servers;
  3. Start all the servers;
  4. Run the pre-grants, grants, and post-grants scripts;
  5. Run the group initialisation script (start master and slaves, or set up group replication), as sketched below.
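The mechanism can be sketched in shell terms. This is a simplification, not dbdeployer's actual code (which is written in Go and builds a prioritised list of commands), and the variable and script names are only illustrative:

# $SANDBOX is the hypothetical path of the sandbox group
for step in initialize start load_grants
do
    for node in node1 node2 node3
    do
        "$SANDBOX/$node/$step" &     # run this step on every node in parallel
    done
    wait    # barrier: all nodes must finish this step before the next begins
done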

Depending on the computer architecture, the server version, and the number of nodes, the speed of deployment can increase from 2 to 5 times.

Let's see an example:

$ time dbdeployer deploy replication 5.7.21
[...]
real    0m13.789s
user    0m1.143s
sys 0m1.873s

$ time dbdeployer deploy replication 5.7.21 --concurrent
[...]
real    0m7.780s
user    0m1.329s
sys 0m1.811s

There is a significant speed increase. The gain rises sharply if we use a higher number of nodes.

$ time dbdeployer deploy replication 5.7.21 --nodes=5
[...]
real    0m23.425s
user    0m1.923s
sys 0m3.106s

$ time dbdeployer deploy replication 5.7.21 \
    --nodes=5 --concurrent
[...]
real    0m7.686s
user    0m2.248s
sys 0m2.777s

As we can see, the time for deploying 5 nodes is roughly the same as for 3 nodes. While the sequential operations take time proportional to the number of nodes, the concurrent ones stay almost constant.

Things are a bit different for group replication, as the group initialisation (which happens after all the servers are up and running) takes more time than the simple master/slave deployment, and can't be easily reduced using the current code.

A similar optimisation happens when we delete multiple sandboxes. Here the operation is at the sandbox level (1 replication cluster = 1 sandbox), not at the server level, and for that reason the gain is less sharp. Still, operations are noticeably faster.

There is room for improvement, but I have seen the total testing time for the dbdeployer test suite drop from 26 to 15 minutes. I think it was a weekend well spent.

Monday, March 05, 2018

Customizing dbdeployer


As of version 0.2.1, dbdeployer allows users to customize composite sandboxes more than ever. This is done by manipulating the default settings, which are used to deploy the sandbox templates.

In order to appreciate the customization capabilities, let's start with a vanilla deployment, and then have a look at the possible changes.

$ dbdeployer deploy replication 8.0.4
Installing and starting master
Database installed in $HOME/sandboxes/rsandbox_8_0_4/master
. sandbox server started
Installing and starting slave 1
Database installed in $HOME/sandboxes/rsandbox_8_0_4/node1
. sandbox server started
Installing and starting slave 2
Database installed in $HOME/sandboxes/rsandbox_8_0_4/node2
. sandbox server started
$HOME/sandboxes/rsandbox_8_0_4/initialize_slaves
initializing slave 1
initializing slave 2
Replication directory installed in $HOME/sandboxes/rsandbox_8_0_4
run 'dbdeployer usage multiple' for basic instructions'

A regular replication sandbox has one master and two slaves. Each slave is inside a directory called nodeX.

The resulting sandbox has a directory called master, two nodeX directories, a shortcut for the master called m, and two shortcuts for the slaves called s1 and s2. There are also two management scripts called initialize_slaves and check_slaves.

    $ ls -l ~/sandboxes/rsandbox_8_0_4/
    total 152
    -rwxr--r--   1 user  staff  1500 Mar  5 06:21 check_slaves
    -rwxr--r--   1 user  staff  1160 Mar  5 06:21 clear_all
    -rwxr--r--   1 user  staff  1617 Mar  5 06:21 initialize_slaves
    -rwxr--r--   1 user  staff   806 Mar  5 06:21 m
    drwxr-xr-x  22 user  staff   748 Mar  5 06:21 master
    -rwxr--r--   1 user  staff   806 Mar  5 06:21 n1
    -rwxr--r--   1 user  staff   804 Mar  5 06:21 n2
    -rwxr--r--   1 user  staff   804 Mar  5 06:21 n3
    drwxr-xr-x  23 user  staff   782 Mar  5 06:21 node1
    drwxr-xr-x  23 user  staff   782 Mar  5 06:21 node2
    -rwxr--r--   1 user  staff   855 Mar  5 06:21 restart_all
    -rwxr--r--   1 user  staff   804 Mar  5 06:21 s1
    -rwxr--r--   1 user  staff   804 Mar  5 06:21 s2
    -rw-r--r--   1 user  staff   173 Mar  5 06:21 sbdescription.json
    -rwxr--r--   1 user  staff  1127 Mar  5 06:21 send_kill_all
    -rwxr--r--   1 user  staff  1296 Mar  5 06:21 start_all
    -rwxr--r--   1 user  staff  1680 Mar  5 06:21 status_all
    -rwxr--r--   1 user  staff  1087 Mar  5 06:21 stop_all
    -rwxr--r--   1 user  staff  4598 Mar  5 06:21 test_replication
    -rwxr--r--   1 user  staff  1315 Mar  5 06:21 test_sb_all
    -rwxr--r--   1 user  staff  1100 Mar  5 06:21 use_all

Now, let's see how we can change this. We'll start by listing the current defaults:

$ dbdeployer defaults show
# Internal values:
{
    "version": "0.2.1",
    "sandbox-home": "$HOME/sandboxes",
    "sandbox-binary": "$HOME/opt/mysql",
    "master-slave-base-port": 11000,
    "group-replication-base-port": 12000,
    "group-replication-sp-base-port": 13000,
    "fan-in-replication-base-port": 14000,
    "all-masters-replication-base-port": 15000,
    "multiple-base-port": 16000,
    "group-port-delta": 125,
    "master-name": "master",
    "master-abbr": "m",
    "node-prefix": "node",
    "slave-prefix": "slave",
    "slave-abbr": "s",
    "sandbox-prefix": "msb_",
    "master-slave-prefix": "rsandbox_",
    "group-prefix": "group_msb_",
    "group-sp-prefix": "group_sp_msb_",
    "multiple-prefix": "multi_msb_",
    "fan-in-prefix": "fan_in_msb_",
    "all-masters-prefix": "all_masters_msb_"
 }

The values that we want to change are master-name, master-abbr, node-prefix, slave-prefix, and slave-abbr. We can export the defaults to a file, and import them after editing the values we want to change.

$ dbdeployer defaults export defaults.json
# Defaults exported to file defaults.json
$ vim defaults.json
$ dbdeployer defaults import defaults.json
Defaults imported from defaults.json into $HOME/.dbdeployer/config.json

Now dbdeployer is using the new defaults.


$ dbdeployer defaults show
# Configuration file: $HOME/.dbdeployer/config.json
{
    "version": "0.2.1",
    "sandbox-home": "/Users/gmax/sandboxes",
    "sandbox-binary": "/Users/gmax/opt/mysql",
    "master-slave-base-port": 11000,
    "group-replication-base-port": 12000,
    "group-replication-sp-base-port": 13000,
    "fan-in-replication-base-port": 14000,
    "all-masters-replication-base-port": 15000,
    "multiple-base-port": 16000,
    "group-port-delta": 125,
    "master-name": "primary",
    "master-abbr": "p",
    "node-prefix": "branch",
    "slave-prefix": "replica",
    "slave-abbr": "r",
    "sandbox-prefix": "msb_",
    "master-slave-prefix": "rsandbox_",
    "group-prefix": "group_msb_",
    "group-sp-prefix": "group_sp_msb_",
    "multiple-prefix": "multi_msb_",
    "fan-in-prefix": "fan_in_msb_",
    "all-masters-prefix": "all_masters_msb_"
 }

We now have *primary* for *master*, *replica* for *slave*, *branch* for *node*, and the abbreviations for master and slave changed to *p* and *r* respectively.
Let's see how these defaults play together when we run the same command as before for replication. We first remove the previous deployment.

$ dbdeployer delete rsandbox_8_0_4
List of deployed sandboxes:
$HOME/sandboxes/rsandbox_8_0_4
Running $HOME/sandboxes/rsandbox_8_0_4/stop_all
# executing "stop" on $HOME/sandboxes/rsandbox_8_0_4
executing "stop" on slave 1
executing "stop" on slave 2
executing "stop" on master
Running rm -rf $HOME/sandboxes/rsandbox_8_0_4
Sandbox $HOME/sandboxes/rsandbox_8_0_4 deleted

The deployment command is the same as before, but the output changes:

$ dbdeployer deploy replication 8.0.4
Installing and starting primary
Database installed in $HOME/sandboxes/rsandbox_8_0_4/primary
. sandbox server started
Installing and starting replica 1
Database installed in $HOME/sandboxes/rsandbox_8_0_4/branch1
. sandbox server started
Installing and starting replica 2
Database installed in $HOME/sandboxes/rsandbox_8_0_4/branch2
.. sandbox server started
$HOME/sandboxes/rsandbox_8_0_4/initialize_replicas
initializing replica 1
initializing replica 2
Replication directory installed in $HOME/sandboxes/rsandbox_8_0_4
run 'dbdeployer usage multiple' for basic instructions'

This already looks as if our defaults have been adopted. Let's examine the sandbox itself:

$ ls -l ~/sandboxes/rsandbox_8_0_4/
total 152
drwxr-xr-x  23 user  staff   782 Mar  5 06:45 branch1
drwxr-xr-x  23 user  staff   782 Mar  5 06:45 branch2
-rwxr--r--   1 user  staff  1515 Mar  5 06:45 check_replicas
-rwxr--r--   1 user  staff  1170 Mar  5 06:45 clear_all
-rwxr--r--   1 user  staff  1629 Mar  5 06:45 initialize_replicas
-rwxr--r--   1 user  staff   807 Mar  5 06:45 n1
-rwxr--r--   1 user  staff   806 Mar  5 06:45 n2
-rwxr--r--   1 user  staff   806 Mar  5 06:45 n3
-rwxr--r--   1 user  staff   807 Mar  5 06:45 p
drwxr-xr-x  22 user  staff   748 Mar  5 06:45 primary
-rwxr--r--   1 user  staff   806 Mar  5 06:45 r1
-rwxr--r--   1 user  staff   806 Mar  5 06:45 r2
-rwxr--r--   1 user  staff   855 Mar  5 06:45 restart_all
-rw-r--r--   1 user  staff   173 Mar  5 06:45 sbdescription.json
-rwxr--r--   1 user  staff  1137 Mar  5 06:45 send_kill_all
-rwxr--r--   1 user  staff  1308 Mar  5 06:45 start_all
-rwxr--r--   1 user  staff  1700 Mar  5 06:45 status_all
-rwxr--r--   1 user  staff  1097 Mar  5 06:45 stop_all
-rwxr--r--   1 user  staff  4613 Mar  5 06:45 test_replication
-rwxr--r--   1 user  staff  1325 Mar  5 06:45 test_sb_all
-rwxr--r--   1 user  staff  1106 Mar  5 06:45 use_all

We see that the new defaults were used and the script names have changed. But the differences go deeper than this: the internal values in the scripts were also changed accordingly.

$ ~/sandboxes/rsandbox_8_0_4/test_replication
# primary log: mysql-bin.000001 - Position: 14073 - Rows: 20
# Testing replica #1
ok - replica #1 acknowledged reception of transactions from primary
ok - replica #1 IO thread is running
ok - replica #1 SQL thread is running
ok - Table t1 found on replica #1
ok - Table t1 has 20 rows on #1
# Testing replica #2
ok - replica #2 acknowledged reception of transactions from primary
ok - replica #2 IO thread is running
ok - replica #2 SQL thread is running
ok - Table t1 found on replica #2
ok - Table t1 has 20 rows on #2
# Tests :    10
# failed:     0 (  0.0%)
# PASSED:    10 (100.0%)
# exit code: 0

The test script calls the components with the names that we defined in the new defaults. Let's have a look at what the shortcuts for the master and slaves (now primary and replicas) do:

$ ~/sandboxes/rsandbox_8_0_4/p
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 35
Server version: 8.0.4-rc-log MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

primary [localhost] {msandbox} ((none)) >

$ ~/sandboxes/rsandbox_8_0_4/r1
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 15
Server version: 8.0.4-rc-log MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

replica1 [localhost] {msandbox} ((none)) >

The internal prompt has also been adapted to the new naming.

Should we want to revert to the old behavior, we can just reset the defaults:

$ dbdeployer defaults reset
#File $HOME/.dbdeployer/config.json removed

The current replication sandbox is left untouched, but the next one will use the default values.

If we don't want to change the defaults permanently, there is an alternative. The --defaults flag allows us to change defaults on-the-fly just for the command we're running. For example, we could have achieved the same result, without editing the configuration file, using this command:

    dbdeployer deploy replication 8.0.4 \
        --defaults=master-name:primary \
        --defaults=master-abbr:p \
        --defaults=slave-prefix:replica \
        --defaults=slave-abbr:r \
        --defaults=node-prefix:branch

The syntax for --defaults requires the name of the variable and the new value, separated by a colon. The flag can be used as many times as needed.

MySQL security for real users


Security features overview

One of Oracle's tenets is its focus on security. For this reason, when it took over the stewardship of MySQL, it started addressing the most common issues. It was not quick to act, but we have seen real progress:

  1. MySQL 5.7 removed the anonymous accounts, which were the greatest threat to security. Because of those accounts, and the default privileges granted to them, users without any privileges could access the "test" database and do serious damage. Additionally, because of the way the privilege engine evaluates accounts, anonymous users could hijack legitimate users, preventing them from working properly.
  2. The "root" account now comes with a password defined during initialization. This is good news for security, but bad news for how the change was implemented.
  3. There is a new way of storing connection credentials: mysql_config_editor, paired with the --login-path option, lets users keep encrypted credentials for secure use (see the example after this list). Here too, while we should rejoice at the added security, we can't help feeling that the implementation is yet again far from meeting users' needs.
  4. There is a useful warning (introduced in MySQL 5.6) when using a password on the command line, telling users that it is a risk. Also in this case, we have a usability issue: users care about their production deployments and use option files to avoid passwords on the command line, but there are nonetheless many testing scripts, used in safe environments or with non-valuable data, where a password on the command line was never an issue, and the new warning simply wrecks the result of those carefully crafted tests. This change, which can't be overcome without modifying the MySQL client code, requires users to change their existing tests to adapt to the new behavior.
  5. MySQL 8 introduces roles, which simplify account management. There are some minor usability issues, although in general the feature meets expectations.
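
As a quick illustration of point 3, this is how login paths work in practice (the host and user names here are made up):

$ mysql_config_editor set --login-path=prod \
    --host=db1.example.com --user=app --password
# the password is requested interactively and stored
# encrypted in ~/.mylogin.cnf
$ mysql --login-path=prod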

This is the scenario of the main enhancements in MySQL since 5.6. Each one of them has some usability problems: some minor, some really bad.
We will first have a look at the problems mentioned above, and then examine the root cause of why they arose.


Usability issues

I start by noting that some developers in the MySQL team have been working there for many years, since the time when MySQL was a different database and was used quite differently.
In those times, managing the database meant that a human (the DBA) would run operations manually, look at the result, and adjust when needed. And when things went wrong, the same human explored the database system to find out what had happened, took action, and went back to sleep.

Human-centered management leads to human problems: lazy DBAs left their databases without a password, used the root account for everything, and exposed the server to uninspired attacks; they typed passwords on the command line, without caring for option files (or without knowing about them). Careless DBAs did not deal with anonymous users, leaving a dangerous backdoor in their servers.

Some of the new functionality introduced in the latest MySQL versions is aimed at this type of user: when you install MySQL, you get a message saying "your root password is ************", and the lazy DBAs have no option but to take note and use it. When they put the password on the command line, the annoying warning pushes them to start using an options file or mysql_config_editor.

This is all good, but the main problem here is that the DBAs of 10 years ago are on the verge of extinction. They are being replaced by a new breed of DBAs who are not lazy, because they can't afford to be: they need to manage dozens, hundreds, or thousands of databases at once, using configuration management tools that don't require manual intervention, and actually abhor it. In the land of automation, some of the MySQL security enhancements are seen not as solutions, but as new problems.

Let's see an interesting example: docker containers.

Using Docker, MySQL images are deployed with a password on the command line. This is done for compatibility with the first implementation of the image, maintained by the Docker team, where you deploy with this syntax:

docker run -e MYSQL_ROOT_PASSWORD=secret -d mysql

The MYSQL_ROOT_PASSWORD directive becomes an environment variable inside the container, and the server uses it during initialization. As you can imagine, this is not recommended for a secure environment. What is the MySQL team's recommendation, then? The same strategy used for manual installation: set the MYSQL_RANDOM_ROOT_PASSWORD directive, which results in a random password being generated, then collected by the DBA and used. Alternatively, the MYSQL_ONETIME_PASSWORD directive will force the root user to change the password on first connection.
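
For reference, this is roughly what the recommended workflow looks like (the container name is arbitrary, and the exact wording of the log message may vary between image versions):

$ docker run -d --name mysql-test -e MYSQL_RANDOM_ROOT_PASSWORD=yes mysql
$ docker logs mysql-test 2>&1 | grep 'GENERATED ROOT PASSWORD'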

The above suggestions were designed with the ancient DBA still in mind, while container deployment is even more automated than VMs, being based on the principle of immutable objects, i.e. containers that spring up from the cloud ready to run, with no configuration needed, and especially no configuration that requires someone (or some tool) to extract a new password from a log. I proposed a different solution, one that would never show passwords on the command line. It was implemented, but it still feels like a hack to circumvent an inadequate design.

As a result, the implementation inside the MySQL-recommended Docker image uses "--initialize-insecure" to start the server. This is an implicit recognition of the bad design of the initialization feature: what was designed to overcome DBAs' laziness has become an obstacle to automation.

We have a similar problem with mysql_config_editor: the tool creates a safe configuration file with credentials for multiple instances, but the password must be inserted manually. Consequently, this potentially useful feature doesn't get adopted, because it would be too difficult or impossible to automate properly.

We have seen that, of the security features introduced lately, only a few can be used safely in an automated environment, and all of them have at least one small usability quirk. I have talked about a confusing issue related to the removal of anonymous users, where, in their eagerness to remove the vulnerability, the MySQL team also removed the "test" database, which was a consequence, not the cause, of the problem. And I have recently talked about roles usability, where there are still open problems, like the inability to tell roles from users, which is apparently not considered a bug by the MySQL team.

All the above considerations led me to ask: how did we get to this point? There is an active community, and feedback is offered often, with plenty of detail. How come we have such an abundance of usability issues? Don't the developers spend time with users at conferences to learn what they expect? Don't they read articles and blog posts about how a new feature meets expectations? Don't they talk to customers who have adopted new features? They certainly do. Then why do the usability problems persist?

What follows is my observation and speculation on this matter.


Disconnection between MySQL developers and the user community

My experience working with system providers has put me in contact with many users. I have seen that in most cases users are very protective of their current deployments, because it took them a long time to get them right, and they don't upgrade unless they have no other choice. I've seen users test newer versions, realize that they would break some of their procedures, and defer the upgrade to better times that never come. I remember, last year, a user with a not-so-large set of servers that was considering an upgrade to MySQL 5.6, while 5.7 had already been GA for two years. The reason was a set of incompatibilities that made the upgrade too difficult.

For companies that deal with thousands of servers, the problem is similar, but exacerbated by the quantity of servers to upgrade and the need to do it without stopping operations. This latter requirement has led some users to decide against GTID, because enabling it required offline time for a master, and they hadn't had enough time to test the upgrade to MySQL 5.7, which would solve that problem.

For one reason or another, many companies upgrade only two or three years after a given version becomes GA. And this is the main problem: until they use a version in production, or at least test it for a projected upgrade, users can't give valuable feedback, the kind that comes from usage in production. And when they finally do, the version they are commenting on has been GA for a long time and can't be changed, while the next one is already close to GA, and as such is untouchable.

The MySQL team gets feedback on a release from a handful of curious users who don't delay testing until the new version is GA, but this is not the kind of feedback that gets the development team's full attention, such as deployment in production by large customers. In many cases, large customers are the ones that upgrade several years after GA, and by then their input is difficult to act upon.

We have then a curious situation, where the warnings given by the early software explorers are confirmed years later by the users to whom the MySQL team listens more closely, but by then the next version of the server is already locked into a release schedule that nobody wants to alter to fix usability issues.

How can we solve this problem? Simple: listen to early software explorers and try to fix problems before GA.

Thursday, March 01, 2018

Using MySQL 8.0: what to expect


MySQL 8.0 will be GA soon (just my assumption: Oracle doesn't tell me anything about its release plans) and it's time to think about having a look at it.
If this is your first try of MySQL 8, be prepared for several impactful differences from previous versions.

In this article I won't tell you what you can do with MySQL 8: there is plenty of material about this, including in this very blog. I will instead concentrate on differences from previous versions that users need to know if they want to avoid surprises.

Data Directory

Let's start with an observation of the data directory.
After a standard installation, without any additional options, I see the following:

Files that I expected to see

auto.cnf
ib_buffer_pool
ib_logfile0
ib_logfile1
ibdata1
ibtmp1
(dir) mysql
(dir) performance_schema
(dir) sys

These files are also present in 5.7.

Files that are new in 8.0

binlog.000001
binlog.index

log-bin is ON by default. You need to remember this if you are using a MySQL server for a benchmark test that used to run without binary logs.
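
If you want the old behavior back for such tests, an options-file snippet along these lines should do it (in 8.0, skip-log-bin and disable-log-bin are equivalent):

[mysqld]
skip-log-bin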

ca-key.pem
ca.pem
client-cert.pem
client-key.pem
private_key.pem
public_key.pem
server-cert.pem
server-key.pem

Now the server generates all the certificates needed to run connections securely. This will greatly simplify your task when setting up a new instance.

mysql.ibd

This was completely unexpected! The mysql database now has its own tablespace. This is probably due to the new Data Dictionary, which is implemented in InnoDB. You will notice that all the InnoDB tables in the mysql schema use this tablespace, not only the dictionary tables. This will help keep administrative data separate from the operational data in the rest of the server.
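
If you want to verify this for yourself, a query along these lines should show the new tablespace (I'm using information_schema.innodb_tablespaces, which in 8.0 replaces innodb_sys_tablespaces):

SELECT name, space_type
FROM information_schema.innodb_tablespaces
WHERE name = 'mysql';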

undo_001
undo_002

The undo logs now have their own tablespace by default.

Global variables

There are a lot of changes in global variables. Here's the list of what will impact your work when you use MySQL 8.0 for the first time:

character_set_client        utf8mb4
character_set_connection    utf8mb4
character_set_database      utf8mb4
character_set_results       utf8mb4
character_set_server        utf8mb4

All character sets are now utf8mb4. In MySQL 5.7, the default values are a mix of utf8 and latin1.
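
If an application really depends on the old defaults, they can be restored explicitly; a minimal options-file sketch (not that I recommend going back to latin1):

[mysqld]
character-set-server = latin1
collation-server = latin1_swedish_ci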

default_authentication_plugin   caching_sha2_password

This is huge. With this plugin, passwords are stored in a different way, which guarantees more security, but it will probably break several workflows among users. The bad part of this change's implementation is that the new password format contains characters that don't display well on screen, so you may see garbled output when inspecting the "user" table.
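
If some of your clients or connectors can't deal with the new plugin yet, you can either revert the default in the options file or create specific accounts with the legacy plugin (the account name here is made up):

[mysqld]
default_authentication_plugin = mysql_native_password

CREATE USER 'legacy_app'@'%' IDENTIFIED WITH mysql_native_password BY 'msandbox';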

local_infile    OFF

Loading local files is now prevented by default. If you have a workflow that requires such operations, you need to enable it.
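
Re-enabling it takes a global variable on the server, plus the matching option on the client:

SET GLOBAL local_infile = ON;

$ mysql --local-infile=1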

log_bin ON
log_slave_updates   ON

We've seen from the inspection of the data directory that binary logging is enabled by default. Just as important, log_slave_updates is also enabled. This matters for keeping slaves ready to replace a master, but it will severely affect performance in those scenarios where some slaves were supposed to run without that feature.

master_info_repository  TABLE
relay_log_info_repository   TABLE

Also impacting performance is the setting for replication repositories, which are now on TABLE by default. This is something that should have happened already in MySQL 5.6 and was long overdue.

Surprisingly, something that DOES NOT get enabled by default is Global Transaction Identifiers (GTID). This is also a legacy of decisions taken in MySQL 5.6: due to the way GTID is implemented, enabling it by default is not possible when upgrading from a previous version. With a fresh installation and new data, though, it is safe to enable GTID from the start.
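
For a fresh installation, enabling GTID from the start takes just two settings; a minimal options-file sketch:

[mysqld]
gtid_mode = ON
enforce_gtid_consistency = ON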


Users


There are two new users when the server is created:

mysql.infoschema
mysql.session 

Theoretically, mysql.session also exists in 5.7, but it was introduced long after GA, so it still qualifies as a novelty.

Then, when the server starts, you get a grand total of 4 users (root and mysql.sys are inherited from MySQL 5.7).
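
You can check the list with a simple query:

SELECT user, host FROM mysql.user ORDER BY user;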


Mixed oddities


When MySQL initializes, i.e. when the server starts for the first time and creates the database, you will notice some slowness compared to previous versions. This is in part due to the data dictionary, which needs to create and fill 30 tables, but that is not a big deal in terms of performance. On some systems, though, the slowness is so acute that you start worrying about the server being stuck.

I noticed this problem on my Intel NUC, which runs on SSD storage. On this box, the initialization time took a serious hit:

Version    Time
5.0.96      1.231s
5.1.72      1.346s
5.5.52      2.441s
5.6.39      5.540s
5.7.21      6.080s
8.0.3       7.826s
8.0.4      38.547s

That is no mistype: the initialization for 8.0.4 lasts 6 times longer than for 5.7.
This doesn't happen everywhere. On a Mac laptop running on SSD, the same operation takes almost 9 seconds, while 5.7 deploys in less than 5. It is still a substantial difference, one that has totally disrupted my regular operations on the NUC. I investigated the matter, and I found the reason. In 8.0, we have a new (hidden) table in the data dictionary, called st_spatial_reference_systems. Up to MySQL 8.0.3, this table was filled using a single transaction containing roughly 5,000 REPLACE INTO statements. It is a lot of data, but it loads quickly. For comparison, in MySQL 8.0.3 the initialization is only 2 seconds slower than in 5.7.
The reason for the slowness in 8.0.4 is a new command added to the syntax: CREATE SPATIAL REFERENCE SYSTEM, which is now run 5,000 times to fill the table that was previously filled within a single transaction. I don't know why someone in the MySQL team thought that changing an operation hidden from users was a good idea. The data is contained in the server itself, and it goes into a data dictionary table that is also not visible to users. I am sure I can find at least two methods to load the data faster. I was told that this glitch will be fixed in the next release. I'm waiting.

Speaking of initialization, the mysql_install_db script has been removed for good in 8.0. If you are still using it instead of the recommended mysqld --initialize, you should adapt as soon as possible.
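
For reference, the recommended initialization looks like this (the paths are just examples):

$ mysqld --initialize --datadir=/path/to/datadir
# or, to skip the generation of a random root password:
$ mysqld --initialize-insecure --datadir=/path/to/datadir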

This list is far from exhaustive. I recommend reading What's new in MySQL 8 before upgrading.
If you are impatient, dbdeployer can help you test MySQL 8 quickly and safely.



Wednesday, February 28, 2018

The confusing strategy for MySQL shell

Where the hell is it?

The MySQL shell is a potentially useful tool that has been intentionally made difficult to use properly.

It was introduced, with much fanfare, with the MySQL Document Store, as THE tool to bridge the SQL and NoSQL worlds. The release was less than satisfactory, though: MySQL 5.7.12 introduced a new feature (the X protocol plugin) bundled with the server. The maturity of the plugin was unclear, as it popped out of nowhere into a GA release, without any public testing. It was allegedly of GA quality, although the quantity of bug reports filed soon after the release proved otherwise. The shell's maturity was labeled "development preview", so we had a supposedly GA feature that could only be used with an alpha-quality tool.

The situation with the MySQL shell got worse within a few months. A new product was brewing (MySQL Group Replication) and went rapidly from something released in the Labs without docs to being part of the regular server distribution, and it evolved into a more complex and ambitious project (InnoDB Cluster), which uses the MySQL shell as its main tool.

Since the announcement of InnoDB Cluster, using the MySQL shell has been a nightmare. You saw examples in blog posts and presentations, and when you tried them at home, they did not work. There were different releases of MySQL shell with the same version number but different capabilities, depending on whether they were released through the main downloads site or through the labs.

When I asked why the shell wasn't distributed with the server, like the other tools, I was told that a non-GA product could not be released with a GA server. Considering that the Document Store is still walking around with a Pre-Production status legal notice, this was an odd excuse.

Still, I kept waiting, trying to figure out how to pair a given version of MySQL shell with a given version of the server. Unlike the server, there are no release notes for the shell, so every release was a surprising experience.

Eventually, the MySQL shell reached GA status; on what merits, I can't tell. Given the obstacles in the path to its usage, I doubt it has had any serious testing from the community. Despite being GA, it keeps being released separately, leaving puzzled users with the ungrateful task of determining which server version a given shell can safely be used with.

With the upcoming release of MySQL 8.0, a new version of MySQL shell appeared, with a colorful prompt and new features that the GA shell doesn't have. The public perception of the tool keeps getting more confused. In the presentations given by the MySQL team we see the new shell doing wonders, while the GA shell keeps its monochromatic features. Shall I use the 8.0.x shell with a 5.7 server or should I stick with the 1.0 version?

In MySQL 8.0, the situation is still divided. Both products (the server and the shell) are, as of today, not GA yet. It would make sense to finally end the craziness and put the two things together, so that users don't have to hunt around for the right shell version. But the two products are still released separately.


How can I do stuff with MySQL shell?

So far, we have only seen the availability of the shell. What about the functionality?

I have heard that Oracle wants to convert the shell into the only tool to deal with MySQL. I can't prove it, as Oracle doesn't release its development plans to the public, but I can see the emphasis on the shell in talks and articles authored by MySQL team engineers. If this is the plan, I think it needs a lot more work.

If you try to use MySQL shell the same way as the regular "mysql" client, you get in trouble soon.

mysqlsh --user root --password=msandbox --port=5721 --host 127.0.0.1
mysqlx: [Warning] Using a password on the command line interface can be insecure.
Creating a Session to 'root@127.0.0.1:5721'
Your MySQL connection id is 38
Server version: 5.7.21 MySQL Community Server (GPL)
No default schema selected; type \use  to set one.
MySQL Shell 1.0.11

Copyright (c) 2016, 2017, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type '\help' or '\?' for help; '\quit' to exit.

Currently in JavaScript mode. Use \sql to switch to SQL mode and execute queries.

I see three problems here:

  • The warning about the password on the command line is legitimate. The trouble is that there is no alternative: mysqlsh does not support --defaults-file, and there is no way of providing a password other than directly at invocation. There is a "--passwords-from-stdin" option which does not seem to work, and even if it did, I can't see the advantage of taking the password from a pipe.
  • The default mode is JavaScript. I can see that this makes operations simpler when you want to perform setup tasks for InnoDB Cluster, but it certainly doesn't help me use this tool as the primary driver for database management. There is a "--sql" option that does what I expect, but if it is not the default, I can't see this replacement being very successful.
  • Because of the previous items, using the tool in batch mode (with -e "SQL commands") is impossible, as every invocation will start with the freaking password warning (see the sketch after this list).
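
To make the third point concrete, this is the kind of invocation that a batch job would need, and that gets polluted by the unavoidable warning (the credentials and port are the same as in the sandbox example above):

$ mysqlsh --sql --user root --password=msandbox --port=5721 \
    --host 127.0.0.1 -e 'SELECT VERSION()'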

I'm afraid that it's too late to take action for MySQL 8.0. The MySQL team is probably packaging the GA release while I write these notes. But I offer some suggestions nonetheless.


Wish list


  1. Package MySQL shell with the server. Past experience shows that the MySQL team keeps adding features into a GA release, thus exposing users to the risk of getting the wrong tool for the job. Having the shell and the server in the same tarball will help users pick the right version for the task. This is similar to what happens with mysqldump: using the tool from 5.5 with a 5.7+ server will not work properly. There is no reason for mysqlsh to be treated differently.
  2. Make sure that all the features of the mysql client work seamlessly in mysqlsh. Perhaps run the test suite replacing mysql with mysqlsh and pick up from there.
  3. Make the MySQL shell compatible with other tools. Specifically, it should support option files (--defaults-file, --defaults-extra-file, --defaults-group-suffix, --no-defaults).

In short, if the plan is to replace mysql with mysqlsh, put the thing in the open, and please make sure it can do what users can reasonably expect.