
Better database replication with MySQL Utilities


Configuring and maintaining healthy replication in MySQL involves the execution of long, complex commands. With help from MySQL Utilities, however, administering replication can be simpler, more straightforward, and less error-prone.

Oracle's MySQL Utilities works with MySQL (client, server, and libraries) version 5.6 and above. You can learn how to install and configure the latest MySQL version on CentOS from the article Improve replication reliability and performance with MySQL 5.6.

We introduced MySQL Utilities in Use MySQL Utilities for better MySQL administration. Read that article to learn how to set up the software. Be sure you save your server logins for each server (--login-path) with a command such as mysql_config_editor set --login-path=server1 --host= --user=root --password.

How to create a new slave

Before you replicate to a new slave server you must first create a valid replication user, which the slave will use to connect to the master. On the master server, execute a MySQL command such as GRANT REPLICATION SLAVE ON *.* TO 'replication_user'@'%' IDENTIFIED BY 'password'.
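Written out in full, that user-creation step might look like the sketch below. The user name, password, and the '%' host mask are placeholder values, and the echo stands in for piping the statement to the mysql client:

```shell
# Sketch: create the replication user on the master.
# 'replication_user', 'password', and the '%' host mask are
# placeholders -- substitute your own values.
SQL="GRANT REPLICATION SLAVE ON *.* TO 'replication_user'@'%' IDENTIFIED BY 'password';"
# On a real server you would pipe this into the mysql client, e.g.:
#   echo "$SQL" | mysql --login-path=server1
echo "$SQL"
```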

To set up replication for the first time, use the command /usr/bin/mysqlreplicate. It accepts three arguments: --master for the master host, --slave for the slave host, and --rpl-user for a valid replication user. For example, to set up replication with master server1 and slave server2, use the command mysqlreplicate --master=server1 --slave=server2 --rpl-user=replication_user:password.

Note that in the above example, server1 and server2 are not just hostnames. Instead, they are the login paths that you have previously set with the command mysql_config_editor. If you didn't save these login paths you must use the full syntax: user:password@hostname.
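For comparison, here is the same call in both connection syntaxes. The host names, port, and credentials below are placeholder values, and the commands are only echoed as a dry run, since running them requires live MySQL servers:

```shell
# Sketch: mysqlreplicate with saved login paths vs. the full syntax.
# All names and credentials below are placeholders.
WITH_LOGIN_PATHS="mysqlreplicate --master=server1 --slave=server2 --rpl-user=replication_user:password"
FULL_SYNTAX="mysqlreplicate --master=root:secret@master.example.com:3306 --slave=root:secret@slave.example.com:3306 --rpl-user=replication_user:password"
echo "$WITH_LOGIN_PATHS"
echo "$FULL_SYNTAX"
```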

If the replication setup completes successfully you should see output that looks like:

# master on ... connected.
# slave on ... connected.
# Checking for binary logging on master...
# Setting up replication...
# ...done.

If the command exits with an error or warning, however, you should check MySQL's error log file for more details. You can find it in the directory /var/lib/mysql/ under the name of your host, with the .err extension.
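A quick way to find and inspect that log, assuming the default /var/lib/mysql data directory (if your my.cnf sets log-error, use that path instead):

```shell
# Sketch: build the error-log path from the local hostname.
# Assumes the default datadir; adjust if my.cnf sets log-error.
ERR_LOG="/var/lib/mysql/$(hostname).err"
echo "$ERR_LOG"
# To view the most recent entries on a real server:
#   sudo tail -n 50 "$ERR_LOG"
```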

You can use the mysqlreplicate command to set up additional replication pairs, including bidirectional, dual-master replication between server1 and server2. To do that, simply swap the master and the slave in the command: mysqlreplicate --master=server2 --slave=server1 --rpl-user=replication_user:password.

To set up replication using these commands you don't have to execute them on each MySQL server – you can work only from one server where MySQL Utilities is installed. This allows you to centrally manage replication. You can even implement this functionality in scripts thanks to the simple and straightforward arguments that the mysqlreplicate command accepts.
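Because the arguments are so regular, attaching several slaves to one master is easy to script from the admin host. A minimal sketch follows; the login-path names are hypothetical and the echo makes it a dry run:

```shell
# Sketch: attach several slaves to one master in a loop.
# 'server1'..'server4' are hypothetical login-path names.
MASTER=server1
CMDS=$(for SLAVE in server2 server3 server4; do
    echo "mysqlreplicate --master=$MASTER --slave=$SLAVE --rpl-user=replication_user:password"
done)
# Dry run: print the commands instead of executing them.
echo "$CMDS"
```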

How to check and monitor MySQL replication

One of the trickiest parts of configuring replication is making sure that two MySQL servers are properly configured and compatible with each other. You have to go through a checklist for each server, such as checking whether binary logging is enabled, and in some cases you have to compare the results between servers – for example, each server must have a unique server ID for valid replication to be established. And once replication is established you have to monitor it and ensure that the slave replicates without delay or errors.

For all of the above checks you can use the command /usr/bin/mysqlrplcheck. This script accepts --master and --slave arguments for specifying the master and the slave servers for which you want to check the replication status.

Here is example output for checking replication between servers server1 and server2:

/usr/bin/mysqlrplcheck --master=server1 --slave=server2
# master on ... connected.
# slave on ... connected.
Test Description                                                     Status
Checking for binary logging on master                                [pass]
Are there binlog exceptions?                                         [pass]
Replication user exists?                                             [pass]
Checking server_id values                                            [pass]
Checking server_uuid values                                          [pass]
Is slave connected to master?                                        [pass]
Check master information file                                        [pass]
Checking InnoDB compatibility                                        [pass]
Checking storage engines compatibility                               [pass]
Checking lower_case_table_names settings                             [pass]
Checking slave delay (seconds behind master)                         [pass]

Here, not only are the two servers compatible with regard to replication, but the slave server is also up-to-date with the master. If there is a problem with replication, the status changes from [pass] to [FAIL]. For example, if the slave server is not up-to-date you will see a status such as:

Checking slave delay (seconds behind master)                         [FAIL]
Slave is behind master.
Master binary log file = master-binlog.000002
Master binary log position = 120
Slave is reading master binary log file = master-binlog.000001
Slave is reading master binary log position = 1036

This output shows the exact positions of the master and the slave, which are obviously not the same. This is useful information, but it's not sufficient to resolve the problem and allow the slave to catch up with the master. Therefore, in such cases you should go to the slave server and execute the command SHOW SLAVE STATUS to get the exact reason for the lag.
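That check can also be run non-interactively from the admin host with something like the command sketched below. The 'server2' login path is a placeholder, and the echo makes this a dry run:

```shell
# Sketch: query the slave for detailed replication status.
# 'server2' is a placeholder login path saved with mysql_config_editor.
CHECK='mysql --login-path=server2 -e "SHOW SLAVE STATUS\G"'
echo "$CHECK"
```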

How to automate failover

The most advanced tool from the MySQL Utilities is the automatic failover script /usr/bin/mysqlfailover. Its purpose is to promote a new master from a list of candidate slave servers when the primary master fails. The failover script accepts several arguments as options:

  • master – specifies the connection details for the master server. This information is used to check the health of the master.
  • slaves – specifies the connection details for the slave servers. Their health is also monitored.
  • candidates – lists the slave servers that should be promoted if the primary master fails.
  • daemon – forks the script into the background and runs it as a daemon. To start the daemon specify --daemon=start, and to stop it --daemon=stop.
  • failover-mode – specifies the failover mode: auto (the default) performs automatic failover, while fail only reports the failure and performs no failover.
  • log – directs the output to a log file that you can check to learn what the script is doing.
  • exec-before and exec-after – execute a script before and after the failover. You can prepare for the failover with an exec-before script, and inform your applications of the new master with the exec-after script.
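As a sketch of what an exec-after script might do, the hook below records the new master in a state file that applications or a config-rewriting script could read. Note that the calling convention – whether and how mysqlfailover passes the new master's address as an argument – is an assumption here; check the mysqlfailover documentation before relying on it:

```shell
# Hypothetical exec-after hook: remember who the new master is so
# that applications (or a config-rewriting script) can find it.
# NOTE: the assumption that the new master's address arrives as a
# command-line argument is unverified -- consult the docs.
record_new_master() {
    new_master="$1"
    state_file="${STATE_FILE:-/tmp/current-master}"
    echo "$new_master" > "$state_file"
}

# Example invocation with a made-up address:
STATE_FILE=$(mktemp)
record_new_master "server2:3306"
```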

For example, mysqlfailover --master=server1 --slaves=server2,server3 --candidates=server2 --daemon=start --log=failover.log indicates that the master to be monitored is server1, with slaves server2 and server3. Server2 is specified as a candidate to replace the master in case of a failure. The script is started as a daemon and the output goes to the file failover.log.

When you tail the log file you should see output similar to:

2013-09-08 22:06:04 PM INFO host:, port: 3306, role: MASTER, state: UP, gtid_mode: ON, health: OK
2013-09-08 22:06:04 PM INFO host:, port: 3306, role: SLAVE, state: UP, gtid_mode: ON, health: OK
2013-09-08 22:06:04 PM INFO host:, port: 3306, role: SLAVE, state: UP, gtid_mode: ON, health: OK

This indicates that the master is up and running and so are the slaves. If the master MySQL server fails, the following verbose output would appear in the log file:

2013-09-08 22:06:55 PM INFO Master may be down. Waiting for 3 seconds.
2013-09-08 22:06:58 PM INFO Cannot reconnect to master.
2013-09-08 22:07:01 PM CRITICAL Master is confirmed to be down or unreachable.
2013-09-08 22:07:01 PM INFO Failover starting in 'auto' mode...
2013-09-08 22:07:01 PM INFO Candidate slave will become the new master.
2013-09-08 22:07:01 PM INFO Checking slaves status (before failover).
2013-09-08 22:07:01 PM INFO Preparing candidate for failover.
2013-09-08 22:07:01 PM INFO Creating replication user if it does not exist.
2013-09-08 22:07:01 PM INFO Stopping slaves.
2013-09-08 22:07:01 PM INFO Performing STOP on all slaves.
2013-09-08 22:07:01 PM INFO Switching slaves to new master.
2013-09-08 22:07:02 PM INFO Disconnecting new master as slave.
2013-09-08 22:07:02 PM INFO Starting slaves.
2013-09-08 22:07:02 PM INFO Performing START on all slaves.
2013-09-08 22:07:02 PM INFO Checking slaves for errors.
2013-09-08 22:07:02 PM INFO Failover complete.
2013-09-08 22:07:07 PM INFO Unregistering existing instances from slaves.
2013-09-08 22:07:07 PM INFO Registering instance on new master
2013-09-08 22:07:07 PM INFO Master Information
2013-09-08 22:07:07 PM INFO Binary Log File: master-binlog.000001, Position: 151, Binlog_Do_DB: N/A, Binlog_Ignore_DB: N/A
2013-09-08 22:07:07 PM INFO GTID Executed Set: None
2013-09-08 22:07:07 PM INFO Getting health for master:
2013-09-08 22:07:07 PM INFO Health Status:
2013-09-08 22:07:07 PM INFO host:, port: 3306, role: MASTER, state: UP, gtid_mode: ON, health: OK
2013-09-08 22:07:07 PM INFO host:, port: 3306, role: SLAVE, state: UP, gtid_mode: ON, health: OK

As the above output shows, a new master is elected: the previously specified candidate, server2. The failover script automatically takes care of all the actions necessary to configure the new replication topology.

As you can see, MySQL Utilities gives you some useful tools to configure and monitor MySQL replication. Working with MySQL Utilities ensures that you follow best practices and gives you powerful options such as the one for automatic failover.

This work is licensed under a Creative Commons Attribution 3.0 Unported License.


Great article. Thanks for the heads up. I had no idea they added this functionality in 5.6.
Posted @ Monday, October 07, 2013 5:21 PM by Matthew Lenz
Thanks for this informative article. 
Before going to production, I think some other info is missing: 
1. How do you perform the fail-back? What happens when server1 is restored? 
2. How do you configure the JDBC driver to automatically switch to the new master in case of failure? 
Posted @ Tuesday, October 08, 2013 2:36 AM by John Boydon
Hi John, 
1. The failback should be performed manually because it's more complex, especially when there has been a real disaster and the original master might have been corrupted. 
2. For the JDBC driver look for instructions such as: 
However, you could try to write a script that changes the IP or the fqdn of the master. This change might be either in a file or in the DNS.
Posted @ Tuesday, October 08, 2013 2:49 AM by Anatoliy
@john: for the second question, one can use a program such as HAProxy to assign a virtual IP to the cluster; this would allow for automatic failover to the secondary master, and when connecting to the cluster you use the virtual IP. Based on load and usage, this would allow the cluster to continue even with one master.  
For the first question, you can never bring the first master back to its original status, or you would have a split-brain. Only by adding another (third) master after the first failure would you be able to continue with a multi-master cluster. 
Posted @ Thursday, October 10, 2013 4:38 PM by Joseph