- the local-service-name, a.k.a. "who's the boss", tells the replicator that this service will live alongside the 'tsandbox' service, which is the local master;
- the role tells the replicator that this service will fetch data;
- the master-thl-host is the address where the new slave can find a master capable of feeding it data;
- the master-thl-port and thl-port options make sure that the service uses one port for its own dispatching and another one to get data from the master.
```
cd $HOME/tsb2/db1
./tungsten/tools/configure-service -C \
  --local-service-name=tsandbox \
  --thl-port=12111 \
  --role=slave \
  --service-type=remote \
  --master-thl-host=r1 \
  --master-thl-port=2112 \
  --datasource=127_0_0_1 \
  --svc-start \
  dragon
```

After this connection, every change in the first cluster's master is replicated to all its slaves, one of which happens to be a master itself, which then distributes the same data to all of its own slaves. The result is a cascading hierarchical replication cluster, similar to what you can build with MySQL native replication. But Tungsten can do something more than that. In MySQL replication, you need to explicitly enable a slave to become a relay slave. In Tungsten, you don't. Using a very similar command, I can connect to a slave of the first cluster instead of the master, and the final result will be exactly the same.
```
cd $HOME/tsb2/db1
./tungsten/tools/configure-service -C \
  --local-service-name=tsandbox \
  --thl-port=12111 \
  --role=slave \
  --service-type=remote \
  --master-thl-host=r3 \
  --master-thl-port=2112 \
  --datasource=127_0_0_1 \
  --svc-start \
  dragon
```

In my presentations, I call this feature "slave with an attitude". Thanks to Tungsten's global transaction IDs, a slave can request data from any host. Since the data is not labeled in terms of log file and position (as it is in MySQL), but in terms of sequence numbers, a slave can ask any server for a given sequence number, and that number identifies a transaction unequivocally.
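To make that last point concrete, here is a toy Python sketch of the idea (this is not Tungsten code; all class and variable names are invented for illustration). It models why a coordinate like "binlog file + offset" only makes sense on the server that wrote the log, while a global sequence number identifies the same transaction on every node that stores it:

```python
# Toy model: global sequence numbers vs. file/position coordinates.
# A Tungsten-style seqno labels a transaction identically on every node,
# so a slave can fetch seqno N from whichever node is convenient.

class Node:
    """A replication node storing transactions keyed by global seqno."""
    def __init__(self, name):
        self.name = name
        self.thl = {}  # seqno -> transaction payload

    def append(self, seqno, payload):
        self.thl[seqno] = payload

    def fetch(self, seqno):
        """Any node can serve any seqno it has stored."""
        return self.thl.get(seqno)

# The master writes events; a slave of the first cluster (the "relay")
# receives exact copies, keyed by the same global seqnos.
master = Node("r1")
relay = Node("r3")
for seqno, payload in enumerate(["INSERT a", "INSERT b", "UPDATE a"], start=1):
    master.append(seqno, payload)
    relay.append(seqno, payload)  # replicated verbatim from r1

# A new slave can ask either r1 or r3 for seqno 2 and get the same
# transaction, because the seqno is global, not tied to one server's log.
assert master.fetch(2) == relay.fetch(2) == "INSERT b"
```

This is why the second `configure-service` command above can point `--master-thl-host` at a slave (r3) instead of the master (r1) and still produce an identical cascade.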