Saturday, March 31, 2012

21st century presentation technology at Percona Live

After 15 years of slide show technology, I thought it was time to change the way we do presentations. And since I am advocating radical changes, I will eat my own dog food and be the first to present a MySQL session using 3D technology.

Ever since watching Avatar a few years ago, I have thought that this technology would make my presentations truly amazing. Two years ago, however, a 3D projector was prohibitively expensive. Now it is affordable, and it fits in my briefcase!

What I needed, though, was a compelling reason to use 3D instead of traditional presentations. And I found it. As I have mentioned recently, I am working with the coolest replication technology on earth. Explaining this technology is often challenging. While regular replication is easy to represent in slides, star and fan-in topologies are hard for the average attendee to grasp. But with the help of 3D technology, the concept becomes easy and approachable.

For this reason, I have convinced my company to invest a few thousand dollars in this technology, and I am now ready to replace the regular projector in ballroom "C" with the new machine. Sure, I will need to drill a few holes in the floor (BTW, thanks to the San Francisco MySQL User Group for lending me the tools), but the result will be fantastic!

I don't want to spoil the surprise, so no more details will be available until you see the result on Tutorial Day.

Now, let's talk about the logistics. In order to follow a 3d presentation, you need special glasses. Since this is a talk about open source stuff, it seems just right that I tell you How to Make Your Own 3D Glasses, so you won't have to buy them. If you are in a hurry, you can get the quick model (Make Your Own 3D Glasses in 10 Seconds).

For those of you who want the enterprise edition, you can buy very fancy 3D glasses at a friendly price (just $14) by following the QR link below.


Friday, March 23, 2012

April talks at Percona and SkySQL events

The second week of April will be quite a busy one.

Tuesday, April 10

April 10th is Tutorial day at the Percona Live MySQL Conference and Expo.

On that day, I will present a classic: MySQL Replication 101. This is a topic traditionally presented by a MySQL engineer. However, since Oracle seems not to be eager to send anyone to the conference, I volunteered for the task, and I have let everyone know that, if Oracle changes its mind and sends some engineers to the conference, I will happily have one of my former colleagues from the replication team as a co-speaker.

Wednesday, April 11

The conference will be in full swing when the regular sessions (and the keynotes!) start. From my side, the noteworthy one is the talk about Continuent's crown jewels, which I have mentioned recently.

Next on the same day, two of my colleagues will take the podium before it's my turn again.

Unfortunately, at the same time there will be a talk that I would love to see, but will have to miss.

After my own talk, I will instead go to see It is not over yet. After the regular schedule, there will be Lightning Talks during the Community Networking Reception.

Thursday, April 12

We will start with two interesting keynotes:

The sessions will start with Another tough choice in the afternoon. I will be on stage while my colleagues will present on yet another cool technology that I have tested extensively in the past months. I will then try to learn something new with

Friday, April 13

This day brings us the MySQL Solutions day sponsored and organized by SkySQL. I will be on stage with Robert Hodges to talk and demo some of the solutions offered by Continuent.

Thursday, March 22, 2012

Lightning Talks at Percona Live MySQL Conference and Expo 2012

Several months ago I suggested having lightning talks at Percona Live MySQL Conference and Expo 2012, and I also offered to help.

Then I forgot about that for a while, until I saw the announcement that there was a call for Lightning Talks. Great! I submitted two proposals, and asked my colleagues to do the same, and also encouraged many good speakers I know to submit something.

The deadline for lightning talks submission passed, and I was told that my offer to help had been accepted: I was in charge of lightning talks! OK. I would have preferred to be told before the CfP, but an offer to help is an offer to help, and thus I went through the process of evaluating the talks, sending notices to the winners, consoling the losers, and giving hope to the few brave ones who will replace the winners if they don't show up.

The talks that you will see at the conference are in the Lightning Talks page.

Lightning talks are fun and instructional micro events. Their official purpose is to give the audience a chance to learn something in a very limited amount of time. The real purpose is for the speaker to be as entertaining and memorable as possible within the allocated time.

Here are the official rules:

  1. All slides will be loaded onto a single computer, to minimize delays between talks.
  2. All speakers will meet 15 minutes before the start and be given the presentation order. Missing speakers will be replaced by reserve speakers.
  3. Each speaker will have 5 minutes to deliver the talk.
  4. When one minute is left, a light sound will remind the speaker of the remaining time.
  5. When 10 seconds are left, the audience will most likely start chanting the countdown.
  6. When the time is up, the speaker must leave the stage to the next one.

For this to be real fun, there must be some cooperation from the audience. Rule #5 is often spontaneous behavior from the crowd, and it's very effective at making the speaker hurry up and close.

If rule #6 were to be enforced in style, a tele-transporter would be triggered at the last second, instantly moving the too-slow speaker to the parking lot. My contact at the Star Trek labs tells me that the appliance is not available yet. We'll see if there is an app for that …

Sunday, March 18, 2012

MySQL 5.6 too verbose when creating data directory

When I install a MySQL package using MySQL Sandbox, if everything goes smoothly, I get an informative message on standard output, and I keep working.

This is OK

$HOME/opt/mysql/5.5.15/scripts/mysql_install_db --no-defaults \
  --user=$USER --basedir=$HOME/opt/mysql/5.5.15 \
  --datadir=$HOME/sandboxes/msb_5_5_15/data \
  --lower_case_table_names=2
Installing MySQL system tables...
OK
Filling help tables...
OK

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:

/Users/gmax/opt/mysql/5.5.15/bin/mysqladmin -u root password 'new-password'
/Users/gmax/opt/mysql/5.5.15/bin/mysqladmin -u root -h gmac4.local password 'new-password'

Alternatively you can run:
/Users/gmax/opt/mysql/5.5.15/bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the manual for more instructions.

You can start the MySQL daemon with:
cd /Users/gmax/opt/mysql/5.5.15 ; /Users/gmax/opt/mysql/5.5.15/bin/mysqld_safe &

You can test the MySQL daemon with mysql-test-run.pl
cd /Users/gmax/opt/mysql/5.5.15/mysql-test ; perl mysql-test-run.pl

Please report any problems with the /Users/gmax/opt/mysql/5.5.15/scripts/mysqlbug script!

I can actually suppress this output, confident that, if something goes wrong, the error will come to my screen loud and clear. For example, if I try to install into a write-protected data directory, I get this:
chmod 444 $HOME/sandboxes/msb_5_5_15
$HOME/opt/mysql/5.5.15/scripts/mysql_install_db --no-defaults \
  --user=$USER --basedir=$HOME/opt/mysql/5.5.15 \
  --datadir=$HOME/sandboxes/msb_5_5_15/data \
  --lower_case_table_names=2  > /dev/null

mkdir: /Users/gmax/sandboxes/msb_5_5_15/data: Permission denied
chmod: /Users/gmax/sandboxes/msb_5_5_15/data: Permission denied
chown: /Users/gmax/sandboxes/msb_5_5_15/data: Permission denied

This way, I know that there was an error, and it is very clear and readable: I don't need to hunt for it among the regular messages. The standard error is a separate file descriptor, which can be read independently of the standard output.

After fixing permissions:

chmod 755 ~/sandboxes/msb_5_5_15/
$HOME/opt/mysql/5.5.15/scripts/mysql_install_db --no-defaults \
  --user=$USER --basedir=$HOME/opt/mysql/5.5.15 \
  --datadir=$HOME/sandboxes/msb_5_5_15/data \
  --lower_case_table_names=2  > /dev/null
# no output means all went OK

This is very convenient, and it is the Unix way.
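The convention is easy to demonstrate with a toy script (the function name and messages below are invented for illustration; they are not from mysql_install_db itself):

```shell
#!/bin/sh
# Routine messages go to stdout (fd 1), errors go to stderr (fd 2),
# so each stream can be silenced or captured independently.
bootstrap() {
    echo "Installing MySQL system tables... OK"   # routine -> stdout
    echo "mkdir: data: Permission denied" >&2     # error   -> stderr
}

# Discard the routine chatter; only the error reaches the terminal.
bootstrap > /dev/null
```

An empty screen then means success, and any line that does appear is worth reading.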

This is not OK

Now, let's try the same with MySQL 5.6

$BASEDIR/scripts/mysql_install_db --no-defaults --user=tungsten \
  --basedir=$BASEDIR --datadir=/home/tungsten/sandboxes/msb_5_6_4/data \
  --tmpdir=/home/tungsten/sandboxes/msb_5_6_4/tmp  > /dev/null
120318 10:10:44 InnoDB: The InnoDB memory heap is disabled
120318 10:10:44 InnoDB: Mutexes and rw_locks use GCC atomic builtins
120318 10:10:44 InnoDB: Compressed tables use zlib 1.2.3
120318 10:10:44 InnoDB: Using Linux native AIO
120318 10:10:44 InnoDB: CPU supports crc32 instructions
120318 10:10:44 InnoDB: Initializing buffer pool, size = 128.0M
120318 10:10:44 InnoDB: Completed initialization of buffer pool
InnoDB: The first specified data file ./ibdata1 did not exist:
InnoDB: a new database to be created!
120318 10:10:44 InnoDB: Setting file ./ibdata1 size to 10 MB
InnoDB: Database physically writes the file full: wait...
120318 10:10:44 InnoDB: Log file ./ib_logfile0 did not exist: new to be created
InnoDB: Setting log file ./ib_logfile0 size to 5 MB
InnoDB: Database physically writes the file full: wait...
120318 10:10:44 InnoDB: Log file ./ib_logfile1 did not exist: new to be created
InnoDB: Setting log file ./ib_logfile1 size to 5 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Doublewrite buffer not found: creating new
InnoDB: Doublewrite buffer created
120318 10:10:44 InnoDB: 128 rollback segment(s) are active.
InnoDB: Creating foreign key constraint system tables
InnoDB: Foreign key constraint system tables created
120318 10:10:44 InnoDB: 1.2.4 started; log sequence number 0
120318 10:10:44 [Warning] Info table is not ready to be used. Table 'mysql.slave_master_info' cannot be opened.
120318 10:10:44 [Warning] Error while checking replication metadata. Setting the requested repository in order to give users the chance to fix the problem and restart the server. If this is a live upgrade please consider using mysql_upgrade to fix the problem.
120318 10:10:44 [Warning] Info table is not ready to be used. Table 'mysql.slave_relay_log_info' cannot be opened.
120318 10:10:44 [Warning] Error while checking replication metadata. Setting the requested repository in order to give users the chance to fix the problem and restart the server. If this is a live upgrade please consider using mysql_upgrade to fix the problem.
120318 10:10:44 [Note] Binlog end
120318 10:10:44 [Note] Shutting down plugin 'partition'
120318 10:10:44 [Note] Shutting down plugin 'ARCHIVE'
120318 10:10:44 [Note] Shutting down plugin 'BLACKHOLE'
120318 10:10:44 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_FT_DELETED'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_FT_INSERTED'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_METRICS'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_CMPMEM'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_CMP_RESET'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_CMP'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_LOCKS'
120318 10:10:44 [Note] Shutting down plugin 'INNODB_TRX'
120318 10:10:44 [Note] Shutting down plugin 'InnoDB'
120318 10:10:44  InnoDB: FTS optimize thread exiting.
120318 10:10:44  InnoDB: Starting shutdown...
120318 10:10:46 InnoDB: Shutdown completed; log sequence number 1602841
120318 10:10:46 [Note] Shutting down plugin 'CSV'
120318 10:10:46 [Note] Shutting down plugin 'MEMORY'
120318 10:10:46 [Note] Shutting down plugin 'MyISAM'
120318 10:10:46 [Note] Shutting down plugin 'MRG_MYISAM'
120318 10:10:46 [Note] Shutting down plugin 'mysql_old_password'
120318 10:10:46 [Note] Shutting down plugin 'mysql_native_password'
120318 10:10:46 [Note] Shutting down plugin 'binlog'
120318 10:10:46 InnoDB: The InnoDB memory heap is disabled
120318 10:10:46 InnoDB: Mutexes and rw_locks use GCC atomic builtins
120318 10:10:46 InnoDB: Compressed tables use zlib 1.2.3
120318 10:10:46 InnoDB: Using Linux native AIO
120318 10:10:46 InnoDB: CPU supports crc32 instructions
120318 10:10:46 InnoDB: Initializing buffer pool, size = 128.0M
120318 10:10:46 InnoDB: Completed initialization of buffer pool
120318 10:10:46 InnoDB: highest supported file format is Barracuda.
120318 10:10:46 InnoDB: 128 rollback segment(s) are active.
120318 10:10:46 InnoDB: Waiting for the background threads to start
120318 10:10:47 InnoDB: 1.2.4 started; log sequence number 1602841
120318 10:10:47 [Note] Binlog end
120318 10:10:47 [Note] Shutting down plugin 'partition'
120318 10:10:47 [Note] Shutting down plugin 'ARCHIVE'
120318 10:10:47 [Note] Shutting down plugin 'BLACKHOLE'
120318 10:10:47 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_FT_DELETED'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_FT_INSERTED'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_METRICS'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_CMPMEM'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_CMP_RESET'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_CMP'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_LOCKS'
120318 10:10:47 [Note] Shutting down plugin 'INNODB_TRX'
120318 10:10:47 [Note] Shutting down plugin 'InnoDB'
120318 10:10:47  InnoDB: FTS optimize thread exiting.
120318 10:10:47  InnoDB: Starting shutdown...
120318 10:10:49 InnoDB: Shutdown completed; log sequence number 1602851
120318 10:10:49 [Note] Shutting down plugin 'CSV'
120318 10:10:49 [Note] Shutting down plugin 'MEMORY'
120318 10:10:49 [Note] Shutting down plugin 'MyISAM'
120318 10:10:49 [Note] Shutting down plugin 'MRG_MYISAM'
120318 10:10:49 [Note] Shutting down plugin 'mysql_old_password'
120318 10:10:49 [Note] Shutting down plugin 'mysql_native_password'
120318 10:10:49 [Note] Shutting down plugin 'binlog'

Why is this bad? Because you can't see at a glance what is right and what is wrong. All the above messages are printed to the standard error, the stream that should be reserved for, well, errors! If the standard error is used for regular messages, you may miss the important error messages, which get mixed with the "all is OK" messages.
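Once notes and errors share the same stream, no redirection can separate them, as this toy stand-in for the 5.6 bootstrap shows (function name and messages are invented for illustration):

```shell
#!/bin/sh
# Everything, harmless [Note] lines and a genuine failure alike,
# is written to stderr, mimicking the 5.6 bootstrap behavior.
noisy_bootstrap() {
    echo "120318 10:10:44 [Note] Shutting down plugin 'binlog'" >&2
    echo "mkdir: /data: Permission denied" >&2
}

# The tempting workaround silences the chatter, but also the real error:
noisy_bootstrap 2>/dev/null    # prints nothing at all
```

The worst of both worlds: keep stderr and you wade through chatter; discard it and you lose the only report of a real failure.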

There is Bug#60934 filed about this issue, but it has been considered a feature request, and as such unlikely to be fixed.

There is more in the output above. There are warnings, mixed with the routine text, telling me of problems that the bootstrap operation is not in a position to fix, such as the replication metadata tables (slave_master_info and slave_relay_log_info).

MySQL developers, please fix this issue. Users need error messages only when there is something wrong, and warnings only about things that can actually be fixed. When MySQL 5.6 goes GA, this issue will hit almost everybody.

Tuesday, March 13, 2012

MySQL Sandbox at the OTN MySQL Developers day in Paris, March 21st

On March 21st I will be in Paris, to attend the OTN MySQL Developers Day. Oracle is organizing these events all over the world, and although the majority are in the US, some of them are touching the good old European continent. Previous events were an all-Oracle show. Recently, the MySQL Community team has been asking for cooperation from the community, and in that capacity I am also presenting at the event, on the topic of testing early releases of MySQL in a sandbox. Of course, this is one of my favorite topics, but it is quite appropriate in this period, when Oracle has released a whole lot of preview features in its MySQL Labs. Which is another favorite topic of mine, since I was the one who insisted on having the Labs when I was working in the community team. It's nice to see that the Labs are still in place, and being put to good use.

MySQL Sandbox

Speaking of sandboxes, I was making some quick tests yesterday, and I installed 15 sandboxes at once (all different versions, from 5.0.91 to 5.6.5). Installing a single sandbox takes from 5 to 19 seconds, depending on the version. Do you know how long it takes to install all 15 sandboxes, completely, from tarball to working condition? It takes 19 seconds. How so? I have been working on a large project where we deal with many replicated clusters spread across three continents. Administering these clusters is a problem in itself, so we use tools that do the work in parallel. By the same token, on a host with a fast 16-core CPU I can install many sandboxes at once. It's a real joy to see software behaving efficiently the way it should! It works so fast, in fact, that I found a race condition bug. If you install more than one sandbox at once, the MySQL bootstrap processes of two different servers may try to open the same temporary file. That happened because I did not indicate a dedicated temporary directory for the bootstrap (I was using one only for the installed sandbox). When this happens, you may find that instead of 15 sandboxes you have installed only 9 or 11. So I fixed the bug, by adding --tmpdir to the mysql_install_db call, and now you can install more than one sandbox in parallel.
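The fix boils down to giving every concurrent bootstrap a private temporary directory. Here is a sketch of the idea (the directory layout and the install_one helper are illustrative stand-ins; the real call is the mysql_install_db command shown earlier, with --tmpdir added):

```shell
#!/bin/sh
BASE=$(mktemp -d)    # stand-in for $HOME/sandboxes

# One bootstrap per version, each with its own tmpdir, so that
# concurrent runs never race on the same temporary file.
install_one() {
    v=$(echo "$1" | tr . _)
    mkdir -p "$BASE/msb_$v/data" "$BASE/msb_$v/tmp"
    # the real invocation would be mysql_install_db --no-defaults \
    #   --datadir="$BASE/msb_$v/data" --tmpdir="$BASE/msb_$v/tmp" ...
}

for version in 5.0.91 5.5.15 5.6.5; do
    install_one "$version" &     # all installs proceed in parallel
done
wait                             # total time ~ the slowest single install
```

With the installs backgrounded and a `wait` at the end, the wall-clock time is bounded by the slowest single bootstrap instead of the sum of all of them.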

Thursday, March 08, 2012

Bringing fresh blood to the MySQL Council

The MySQL Council has been operational for more than one year, with mixed fortunes. As an early member, I feel that I have done something useful by participating in this organization. The council is young, and the MySQL community is variegated and sparse. The enthusiasm of the first volunteers must be reinforced by the injection of new members, who can contribute fresh views and keep the council on its toes. I am glad to see more names coming up, and, like some of my fellow council members, I have volunteered to serve for another year. But this doesn't need to be my decision. I will stay unless we get feedback to the contrary, and the same goes for the other people in the list below. (1) Please use the feedback form to voice your ideas and suggestions about the new council. Anything goes: who should or should not be there, what you would like the council to do, and how the next council should be created.


First Name  Last Name  Title                                Company               Country
Alexandre   Almeida    CTO                                  HTI Tecnologia        Brazil
Sheeri      Cabral     DBA/Architect                        Mozilla               United States
Laine       Campbell   Principal and CEO                    PalominoDB            United States
Patrick     Galbraith  Senior Engineer Cloud Data Services  Hewlett Packard       United States
Bradley     Kuszmaul   Chief Architect                      Tokutek               United States
Giuseppe    Maxia      Director of Quality Assurance        Continuent            Italy
Sarah       Novotny    CIO                                  Meteor Entertainment  United States
Marco       Tusa       Cluster Technical Leader             Pythian               Canada
Daniel      van Eeden  Consultant                           Snow B.V.             Netherlands

The deadline for feedback is March 23, 2012. Let your voice be heard!

(1) Just for the record: all the people in this list have volunteered, in answer to a public call for candidates.

Tuesday, March 06, 2012

Cool technology and usability in Tungsten Enterprise

When I joined Continuent, at the end of 2010, I was fascinated by the technology of its core products. Readers of this blog know that I have had my hands full with Tungsten Replicator, but what really turned me on was the flagship management suite, Tungsten Enterprise. After hammering at it for several months, and always marveling at the beauty of its technology, let me give you a tour of the suite, so that you'll understand what's so exciting about it. First off, Tungsten Enterprise is not simply a replication tool. It is based on replication, but it is mostly a data management suite. Its aim is to reduce complexity and to present a database cluster to the user as if it were a single server, always on, no matter what happens. The most amazing things that you will see in Tungsten Enterprise are:
  • Automatic failover
  • Cluster console and one-command operations
  • Transparent connections
  • No VIPs !!!
  • Multi site switch and failover

Automatic failover

This is probably the most amazing feature of all. It is a combination of the same efficient replication technology seen in Tungsten Replicator, which uses a global transaction ID to allow a seamless failover, and a management system, made of components that communicate with each other and can replace a failed master within seconds, even under heavy load. All this, without the application suffering more than a few seconds' delay (see transparent connections below). This feature is customizable. If the manager is in "automatic" mode, it will replace a failed master without manual intervention, and it will try to put back online every element that goes offline. In "manual" mode, it will instead let the user take control of operations as needed.

Cluster Console and One-Command Operations

Tungsten Enterprise comes with a text-based console that gives immediate access to the cluster information, and lets users perform maintenance without requiring the inner knowledge necessary to carry out the tasks by hand. Promoting a slave to master (a planned "switch", as opposed to an unplanned "failover") is just one command, even though behind the scenes the Tungsten Manager runs a dozen commands to complete the task safely. Backup and restore are also one command each. And so are the dozens of administrative tasks that the Tungsten Manager makes available. The console comes with comprehensive help that explains all commands in detail, and it lets the DBA perform operations from any text terminal, without additional components such as a desktop application or a web interface.

Tungsten enterprise overview

Transparent connections

The suite includes a component called Tungsten Connector, which is a sort of high-performance proxy between the application and the database. Instead of connecting your applications to the DBMS, you connect them to a Tungsten Connector, which looks and feels like a MySQL (or PostgreSQL) database. The difference is that, when the master changes, the connector gets notified by the Tungsten Manager and immediately re-routes the underlying connections to the appropriate server. Depending on how smart your application is, you can use the Tungsten Connector in two ways:
  • Static routing mode: you create one (or more) connectors that always bring you to the master, and use that connection whenever your application needs to write. You also create one or more connectors that always give you access to a slave, and use those whenever your application needs to read.
  • Smart mode: you ask the connector to detect what you are doing and direct your queries to the appropriate server. This mode sends all transactions and updates to the master, and every read query that is not inside a transaction to the slaves. This mode can also guarantee data consistency, by directing the read of a just-saved record to a slave that has already received that record.
The connector can also do more interesting and even amazing things, such as showing you its status through a SQL query (what fun is there in being a proxy if you don't take advantage of it?) and allowing on-the-fly policy changes for an existing connector, using the command line or SQL parameters. The connector plays well with well-designed applications, such as the ones that retry a failed transaction rather than failing, and the ones that are replication-aware and can split reads and writes between connections. But it also plays well with applications that have been designed for a single server, without scalability in mind. In most cases, you replace a single server with a Tungsten Enterprise cluster, and you are in business.

No VIPs !!!

The failover and switch features are not new in the replication arena. There are tools that do something similar and keep an application connected to the same IP using virtual IPs. I don't like virtual IPs, as they are dumb stateless components between two stateful elements, and I am not the only one who dislikes them (see Virtual IP Addresses and Their Discontents for Database Availability). Using Tungsten Connector instead of a dumb virtual IP makes life so much easier. When you do a failover with a VIP, quite often the application hangs, as the client doesn't detect that the server on the other side has gone away, and thus your failover technology has to somehow identify the hanging connections and cut them: a very painful experience. The Tungsten Connector, instead, will either kill the connection immediately or reroute your query, depending on the needs, and your application gets no more than a hiccup.

Composite data services

Multi site switch and failover

A recent addition to the suite is the ability to handle whole sites as single servers. The suite can create and maintain a so-called composite data service, which is a cluster that is seen and treated as a single server. In a disaster recovery scenario, you want to have a functioning site in one location and a relay site in another location, ready to take over when sudden disaster strikes. Here's an example of what you can get:
cctrl -multi -expert 
Tungsten Enterprise 1.5.0 build 426
sjc: session established
[LOGICAL:EXPERT] / > ls
+----------------------------------------------------------------------------+
|DATA SERVICES:                                                              |
+----------------------------------------------------------------------------+
great_company
nyc
sjc

[LOGICAL:EXPERT] / > use great_company
[LOGICAL:EXPERT] /great_company > ls

COORDINATOR[qa.tx6.continuent.com:AUTOMATIC:ONLINE]

DATASOURCES:
+----------------------------------------------------------------------------+
|nyc(composite master:ONLINE)                                                |
|STATUS [OK] [2012/03/05 10:59:28 PM CET]                                    |
+----------------------------------------------------------------------------+

+----------------------------------------------------------------------------+
|sjc(composite slave:ONLINE)                                                 |
|STATUS [OK] [2012/03/05 10:59:30 PM CET]                                    |
+----------------------------------------------------------------------------+

In this scenario, there is a data service called "great_company", which contains two sites that look like regular servers. Inside each site there is a cluster, which we can examine at will:
[LOGICAL:EXPERT] /great_company > use nyc
nyc: session established
[LOGICAL:EXPERT] /nyc > ls

COORDINATOR[qa.tx2.continuent.com:AUTOMATIC:ONLINE]

DATASOURCES:
+----------------------------------------------------------------------------+
|qa.tx1.continuent.com(master:ONLINE, progress=1397, THL latency=0.857)      |
|STATUS [OK] [2012/03/05 10:58:42 PM CET]                                    |
+----------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                     |
|  REPLICATOR(role=master, state=ONLINE)                                     |
|  DATASERVER(state=ONLINE)                                                  |
|  CONNECTIONS(created=54, active=0)                                         |
+----------------------------------------------------------------------------+

+----------------------------------------------------------------------------+
|qa.tx2.continuent.com(slave:ONLINE, progress=1397, latency=0.000)           |
|STATUS [OK] [2012/03/05 11:02:19 PM CET]                                    |
+----------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                     |
|  REPLICATOR(role=slave, master=qa.tx1.continuent.com, state=ONLINE)        |
|  DATASERVER(state=ONLINE)                                                  |
|  CONNECTIONS(created=0, active=0)                                          |
+----------------------------------------------------------------------------+

+----------------------------------------------------------------------------+
|qa.tx3.continuent.com(slave:ONLINE, progress=1397, latency=0.000)           |
|STATUS [OK] [2012/03/05 10:58:31 PM CET]                                    |
+----------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                     |
|  REPLICATOR(role=slave, master=qa.tx1.continuent.com, state=ONLINE)        |
|  DATASERVER(state=ONLINE)                                                  |
|  CONNECTIONS(created=0, active=0)                                          |
+----------------------------------------------------------------------------+

There is a master and two slaves. For each server, we can see the vitals at a glance. The relay site offers a similar view, with the distinction that, instead of a master, there is a relay server. All changes coming from the master in the main site go to the relay server, and from there to the slaves in the second site.
[LOGICAL:EXPERT] /nyc > use sjc 
[LOGICAL:EXPERT] /sjc > ls

COORDINATOR[qa.tx6.continuent.com:AUTOMATIC:ONLINE]

DATASOURCES:
+----------------------------------------------------------------------------+
|qa.tx6.continuent.com(relay:ONLINE, progress=1397, THL latency=4.456)       |
|STATUS [OK] [2012/03/05 10:58:32 PM CET]                                    |
+----------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                     |
|  REPLICATOR(role=relay, master=qa.tx1.continuent.com, state=ONLINE)        |
|  DATASERVER(state=ONLINE)                                                  |
|  CONNECTIONS(created=0, active=0)                                          |
+----------------------------------------------------------------------------+

+----------------------------------------------------------------------------+
|qa.tx7.continuent.com(slave:ONLINE, progress=1397, latency=0.000)           |
|STATUS [OK] [2012/03/05 10:59:03 PM CET]                                    |
+----------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                     |
|  REPLICATOR(role=slave, master=qa.tx6.continuent.com, state=ONLINE)        |
|  DATASERVER(state=ONLINE)                                                  |
|  CONNECTIONS(created=0, active=0)                                          |
+----------------------------------------------------------------------------+

+----------------------------------------------------------------------------+
|qa.tx8.continuent.com(slave:ONLINE, progress=1397, latency=0.000)           |
|STATUS [OK] [2012/03/05 10:58:30 PM CET]                                    |
+----------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                     |
|  REPLICATOR(role=slave, master=qa.tx6.continuent.com, state=ONLINE)        |
|  DATASERVER(state=ONLINE)                                                  |
|  CONNECTIONS(created=0, active=0)                                          |
+----------------------------------------------------------------------------+
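As a side note for script writers, the fixed-width blocks printed by "ls" are easy to parse. Here is a minimal, hypothetical Python sketch (not part of Tungsten) that extracts the vitals from one datasource block in the format shown above:

```python
import re

def parse_datasource(block: str) -> dict:
    """Parse one cctrl 'ls' datasource block into a dict.

    Expects a header line like:
    |qa.tx2.continuent.com(slave:ONLINE, progress=1397, latency=0.000)|
    """
    header = re.search(
        r'\|(?P<host>[\w.]+)\((?P<role>\w+):(?P<state>\w+), '
        r'progress=(?P<progress>\d+)',
        block)
    # 'latency' also catches the relay's 'THL latency=...' variant
    latency = re.search(r'latency=(?P<latency>[\d.]+)', block)
    return {
        'host': header.group('host'),
        'role': header.group('role'),
        'state': header.group('state'),
        'progress': int(header.group('progress')),
        'latency': float(latency.group('latency')) if latency else None,
    }

sample = """
+----------------------------------------------------------------------------+
|qa.tx2.continuent.com(slave:ONLINE, progress=1397, latency=0.000)           |
|STATUS [OK] [2012/03/05 11:02:19 PM CET]                                    |
+----------------------------------------------------------------------------+
"""
info = parse_datasource(sample)
print(info['host'], info['role'], info['progress'], info['latency'])
```

A few lines like these are enough to feed the cluster vitals into a monitoring script.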
If you need to promote the relay site to be the main one, all you need to do is run a switch command:
[LOGICAL:EXPERT] /sjc > use great_company 
[LOGICAL:EXPERT] /great_company > switch
SELECTED SLAVE: 'sjc@great_company'
FLUSHING TRANSACTIONS THROUGH 'qa.tx1.continuent.com@nyc'
PUT THE NEW MASTER 'sjc@great_company' ONLINE
PUT THE PRIOR MASTER 'nyc@great_company' ONLINE AS A SLAVE
SWITCH TO 'sjc@great_company' WAS SUCCESSFUL
[LOGICAL:EXPERT] /great_company > ls

COORDINATOR[qa.tx6.continuent.com:AUTOMATIC:ONLINE]

DATASOURCES:
+----------------------------------------------------------------------------+
|nyc(composite slave:ONLINE)                                                 |
|STATUS [OK] [2012/03/06 09:44:48 AM CET]                                    |
+----------------------------------------------------------------------------+

+----------------------------------------------------------------------------+
|sjc(composite master:ONLINE)                                                |
|STATUS [OK] [2012/03/06 09:44:47 AM CET]                                    |
+----------------------------------------------------------------------------+
If disaster strikes, instead of "switch" you say "failover", and then use the relay site transparently. Did I mention that the Tungsten Connector can be configured to use a composite data service transparently? It can, and if you switch your operations from one coast to the other, the applications will follow suit without any manual intervention. It's so cool! I am sure any geek must love it! BTW: this is not a toy application. This suite is handling data centers that are huge by any standard you care to use, with 100+ terabytes moved through this technology.
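At its core, the switch sequence shown above is a role swap between sites. This hypothetical Python sketch models that logic (the names are illustrative, not Tungsten's implementation):

```python
def switch(dataservice: dict, new_master: str) -> dict:
    """Model the role swap performed by the 'switch' command.

    dataservice maps a site name to its role ('master' or 'slave').
    Mirrors the steps shown above: promote the selected slave, then
    put the prior master back online as a slave.
    """
    if dataservice.get(new_master) != 'slave':
        raise ValueError(f"{new_master} is not a slave")
    # Find the current master so we can demote it.
    old_master = next(s for s, r in dataservice.items() if r == 'master')
    updated = dict(dataservice)
    updated[old_master] = 'slave'   # prior master comes back as a slave
    updated[new_master] = 'master'  # selected slave is promoted
    return updated

sites = {'nyc': 'master', 'sjc': 'slave'}
print(switch(sites, 'sjc'))
```

The real command does more than this, of course: it also flushes pending transactions through the old master before the swap, so that no data is left behind.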

Usability

The experience gained with the installer for Tungsten Replicator has been very useful for the whole team. Using the same technology, we have now created a more advanced, yet simpler, installation tool, which is summarized in the Tungsten Enterprise Cookbook. Installing a complex cluster has never been easier!

Tungsten 2.0.5 with more power and ease of use

Tungsten Replicator 2.0.5 was released this weekend. The release notes include quite a long list of bug fixes. Thanks to everyone who submitted bug reports, and fixes! There are a couple of new features as well.

The replicator now includes a slave prefetch service. Unlike parallel replication, this feature works fine with a single database, and provides performance improvements that in many cases solve slave lagging problems. This was a bitch of a feature to get right. Many have tried it, with various degrees of success, and several failures. We started with the bold assertiveness of the brave after an exciting talk at Percona Live in October, and I was sorry to report one bad performance result after another for a few months, until the tide finally turned, and the good results started showing up, and improving! The key to success was realizing not only that prefetch is hard to set up and tune correctly, but also that multiple threads are needed to do the pre-fetching efficiently. Since we already had an efficient engine, the one we use for parallel replication, the final design started bearing fruit at the end of January, and became solid and reliable in February.

The other noteworthy improvements were made in the installer. Thanks to the many users who tried it and reported usability issues, we have made the Tungsten Replicator installation a much better experience, and a powerful tool. The best proof of the installer's maturity is that the prefetch installation required little work to implement, and it worked flawlessly at the first attempt!

Other improvements in the tools include a better understanding by trepctl and thl of their environment. They no longer require a service name if there is only one service installed on a given host, and they provide more instrumentation for parallel replication, pre-fetching, and the processing of huge transactions (quite common when dealing with RBR).
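To give a flavor of why multiple threads matter for prefetch, here is an illustrative Python sketch, not Tungsten's actual implementation: a pool of workers consumes upcoming replication events from a queue and "warms" the rows they touch before the applier needs them (the warming is simulated by recording the row keys; in reality it would be a SELECT that loads the pages into the buffer pool).

```python
import queue
import threading

def prefetch_worker(events: queue.Queue, warmed: list, lock: threading.Lock):
    """Consume upcoming events and warm the pages they touch."""
    while True:
        event = events.get()
        if event is None:          # poison pill: stop this worker
            events.task_done()
            break
        with lock:
            warmed.append(event['key'])   # stand-in for the warming SELECT
        events.task_done()

def run_prefetch(upcoming, n_threads=4):
    """Fan upcoming events out to a pool of prefetch threads."""
    events, warmed, lock = queue.Queue(), [], threading.Lock()
    threads = [threading.Thread(target=prefetch_worker,
                                args=(events, warmed, lock))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for ev in upcoming:
        events.put(ev)
    for _ in threads:
        events.put(None)           # one poison pill per worker
    for t in threads:
        t.join()
    return warmed

warmed = run_prefetch([{'key': i} for i in range(100)], n_threads=4)
print(len(warmed))  # 100 rows warmed ahead of the applier
```

A single thread doing this work would be bound by round-trip latency on every row; with a pool, many warming queries are in flight at once, which is what lets the prefetcher stay ahead of the slave.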
This version is also the first with Oracle-to-MySQL support. This feature is not open source, however: since it requires substantial investments, it is not possible to release it like the rest of the replicator. But the list of goodies is not over yet. The feature that has probably been used more than anything else in the past months is the star topology, something that was probably already possible in 2.0.4, but nobody had tried it before.
Tungsten topologies
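The essence of the star topology can be modeled in a few lines. In this hypothetical Python sketch (a model, not Tungsten code), every spoke replicates to the hub, and the hub fans each event out to all the other spokes; skipping the event's origin is what prevents replication loops:

```python
def route_through_hub(event: dict, spokes: list) -> list:
    """Return the spokes that should receive this event from the hub.

    In a star topology the hub relays events between spokes, but
    never sends an event back to the spoke it came from: that check
    is what keeps changes from circulating forever.
    """
    return [s for s in spokes if s != event['origin']]

spokes = ['rome', 'paris', 'tokyo']
event = {'origin': 'rome', 'data': 'INSERT ...'}
print(route_through_hub(event, spokes))  # ['paris', 'tokyo']
```

In practice the origin tracking is done with metadata attached to each transaction, so the filtering works even when events are relayed across several hops.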
We are not stopping here, however. The investment in the installer has given us the know-how necessary to improve and simplify the installation of our flagship product (Tungsten Enterprise), which is about to ship with similar usability enhancements. We have plans to enhance multi-master replication and management, we are developing powerful parallel processing administration tools, and we are also trying to simplify the powerful filters that Tungsten provides.

There are more open source releases to discuss, but these will require more than one article to be covered properly. We have released more tools in the Tungsten Toolbox project: a better Tungsten Sandbox, capable of installing every technology, and some more ancillary tools for Tungsten. I will come back to those in the near future.

Much as I like coding, I also like talking about the cool things that we have made. And another thing has kept me busy and happy: Continuent and SkySQL are now partners. This has given me quite a lot of work, since we had to deliver training to SkySQL field operatives. It was a beautiful experience (teaching a class of advanced users always is), also because most of the attendees were my former colleagues at MySQL AB.

The future looks good. More to come.