Back to MySQL Fabric! Travel, conferences, and other work have kept me away for too long. So let's start by setting up a new High Availability MySQL Fabric farm and querying it for some data. I started with a fresh install to make sure everything was covered, so that one day I can wrap all this up in a comprehensive ‘cookbook’ style article that takes anyone from Zero to Fabric Hero.
The first step was to set up my test system with a fresh install of Ubuntu 14.04. Then came three ‘guest’ Ubuntu 14.04 virtual servers, built with Vagrant, that will comprise the actual HA farm. Next, MySQL 5.6.23 was installed from the MySQL Apt Repository on the Fabric Controller and on each of the virtual servers. Finally, the MySQL Utilities and the Python Connector went onto the Fabric Controller system. This time I went with MySQL Utilities 1.6.1-alpha to get the latest and greatest version of the Fabric software and the Utilities. Both the Fabric and Utilities teams have been very busy working their coding magic, and it shows.
Next came setting up replication between the three guest servers. I set up the replication account and its grants but did not start replication. I installed the good ol’ World database on the first of the virtual servers and then used mysqldbcopy to quickly copy the database over to the other two. With that, the replication farm was complete and it was time to get back to the Fabric controller system.
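For reference, the replication account setup looked roughly like the following. The account name and password here are placeholders, not the actual values from my setup:

```sql
-- Run on each guest server. REPLICATION SLAVE is the only privilege
-- the replication account needs. Account name and password below are
-- hypothetical placeholders.
CREATE USER 'repl'@'10.10.13.%' IDENTIFIED BY 'replpassword';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.10.13.%';
```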
Editing the /etc/mysql/fabric.cfg file, I put in a handy password where indicated. Then came the moment of truth — setting up Fabric and getting it running.
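For those following along, the relevant pieces of my /etc/mysql/fabric.cfg looked something like this. This is a sketch, not the whole file, and the password is a placeholder; the storage values match the setup log below (user fabric, database fabric, localhost:3306):

```ini
# Sketch of the sections of /etc/mysql/fabric.cfg I touched.
# Passwords here are placeholders.
[storage]
address = localhost:3306
user = fabric
password = secret
database = fabric

[servers]
user = root
password = secret
```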
On the system that will be the Fabric Controller, run the following:
mysqlfabric manage setup
[INFO] 1427317836.840587 - MainThread - Initializing persister: user (fabric), server (localhost:3306), database (fabric).
[INFO] 1427317846.353911 - MainThread - Initial password for admin/mysql set
Password set for admin/mysql from configuration file.
[INFO] 1427317846.388624 - MainThread - Password set for admin/mysql from configuration file.
[INFO] 1427317846.389658 - MainThread - Initial password for admin/xmlrpc set
Password set for admin/xmlrpc from configuration file.
[INFO] 1427317846.430082 - MainThread - Password set for admin/xmlrpc from configuration file.
The 1.6.1-alpha code has a new documentation page, 8.2.3.1 Create the Associated MySQL Users, for setting up the various Fabric users. There are now accounts for the Backing Store, Server User, Backup User, and Restore User, along with the recommended grants for each. Here I hit a glitch: later on I had complaints that various users, created as per the docs, did not have all the permissions they needed. So I took a not-recommended-for-production-or-sanity shortcut and did a GRANT ALL on the account.
The mysqlfabric manage setup command creates a database named fabric. Currently there are nineteen tables in that database. I will have to blog about their contents in another post.
Now the Fabric magic can begin. Issue sudo mysqlfabric manage start --daemonize to get Fabric started, and then we can add a group. For this example the farm will be called myfarm: mysqlfabric group create myfarm. Then use mysqlfabric group lookup_servers myfarm to be certain the group is as desired.
Add servers to the group with mysqlfabric group add myfarm 10.10.13.x for each of the virtual servers. I set the server_ids to the last octet of each server's IP address, so 10.10.13.10 was server_id 10 and so forth.
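That last-octet convention is just a personal habit, not anything Fabric requires. Expressed as a line of Python, it is simply:

```python
# My server_id convention (a personal habit, not a Fabric requirement):
# use the last octet of the server's IPv4 address as its server_id.
def server_id_from_ip(ip):
    """Return the last octet of a dotted-quad IPv4 address as an int."""
    return int(ip.split(".")[-1])

print(server_id_from_ip("10.10.13.10"))  # 10
```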
Now it is time to see how the various servers on the farm are getting along.
mysqlfabric group lookup_servers myfarm
Fabric UUID: 5ca1ab1e-a007-feed-f00d-cab3fe13249e
Time-To-Live: 1
server_uuid                          address     status    mode      weight
------------------------------------ ----------- --------- --------- ------
09d13be1-cdd0-11e4-8643-0800273b9c80 10.10.13.10 SECONDARY READ_ONLY 1.0
0f611996-cdd0-11e4-8643-0800273b9c80 10.10.13.20 SECONDARY READ_ONLY 1.0
11aae7e7-cdd0-11e4-8643-0800273b9c80 10.10.13.30 SECONDARY READ_ONLY 1.0
Note that all the servers are SECONDARY and READ_ONLY.
Was the replication okay? A quick peek to make sure all was covered in that regard:
mysqlrpladmin --master=root:hidave@10.10.13.30 --slaves=root:hidave@10.10.13.10,root:hidave@10.10.13.20 health
WARNING: Using a password on the command line interface can be
insecure.
# Checking privileges.
#
# Replication Topology Health:
+--------------+-------+---------+--------+------------+---------+
| host | port | role | state | gtid_mode | health |
+--------------+-------+---------+--------+------------+---------+
| 10.10.13.30 | 3306 | MASTER | UP | ON | OK |
| 10.10.13.10 | 3306 | SLAVE | UP | ON | OK |
| 10.10.13.20 | 3306 | SLAVE | UP | ON | OK |
+--------------+-------+---------+--------+------------+---------+
# ...done.
Replication looks just spiffy!
Time to create some magic by getting our HA group online.
mysqlfabric group promote myfarm
Fabric UUID: 5ca1ab1e-a007-feed-f00d-cab3fe13249e
Time-To-Live: 1
uuid                                 finished success result
------------------------------------ -------- ------- ------
c4a60dd8-4bc5-4b52-9f6c-dd5443a95ce8 1        1       1

state success when          description
----- ------- ------------- ------------------------------------------------
3 2 1427828257.29 Triggered by .
4 2 1427828257.33 Executing action (_define_ha_operation).
5 2 1427828257.47 Executed action (_define_ha_operation).
3 2 1427828257.44 Triggered by .
4 2 1427828257.47 Executing action (_find_candidate_fail).
5 2 1427828257.6 Executed action (_find_candidate_fail).
3 2 1427828257.56 Triggered by .
4 2 1427828257.6 Executing action (_check_candidate_fail).
5 2 1427828257.68 Executed action (_check_candidate_fail).
3 2 1427828257.64 Triggered by .
4 2 1427828257.68 Executing action (_wait_slave_fail).
5 2 1427828257.84 Executed action (_wait_slave_fail).
3 2 1427828257.79 Triggered by .
4 2 1427828257.84 Executing action (_change_to_candidate).
5 2 1427828258.17 Executed action (_change_to_candidate).
Hmmm, a lot of information there, but IS IT WORKING?? Quick, run it again!
mysqlfabric group lookup_servers myfarm
Fabric UUID: 5ca1ab1e-a007-feed-f00d-cab3fe13249e
Time-To-Live: 1
server_uuid                          address     status    mode       weight
------------------------------------ ----------- --------- ---------- ------
09d13be1-cdd0-11e4-8643-0800273b9c80 10.10.13.10 SECONDARY READ_ONLY  1.0
0f611996-cdd0-11e4-8643-0800273b9c80 10.10.13.20 SECONDARY READ_ONLY  1.0
11aae7e7-cdd0-11e4-8643-0800273b9c80 10.10.13.30 PRIMARY   READ_WRITE 1.0
Yes! We now have a PRIMARY and a pair of SECONDARY servers. But are they healthy?
mysqlfabric group health myfarm
Fabric UUID: 5ca1ab1e-a007-feed-f00d-cab3fe13249e
Time-To-Live: 1
uuid                                 is_alive status    is_not_running is_not_configured io_not_running sql_not_running io_error sql_error
------------------------------------ -------- --------- -------------- ----------------- -------------- --------------- -------- ---------
09d13be1-cdd0-11e4-8643-0800273b9c80 1        SECONDARY 0              0                 0              0               False    False
0f611996-cdd0-11e4-8643-0800273b9c80 1        SECONDARY 0              0                 0              0               False    False
11aae7e7-cdd0-11e4-8643-0800273b9c80 1        PRIMARY   0              0                 0              0               False    False
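As an aside, if you would rather script against this CLI output than eyeball it, a quick parser (my own throwaway sketch, not any official Fabric API) can pull the PRIMARY's address out of the lookup_servers table:

```python
# Throwaway sketch: scrape the address of the PRIMARY out of
# `mysqlfabric group lookup_servers` output. The five-column layout
# here is an assumption based on the runs shown above.
sample_output = """\
09d13be1-cdd0-11e4-8643-0800273b9c80 10.10.13.10 SECONDARY READ_ONLY  1.0
0f611996-cdd0-11e4-8643-0800273b9c80 10.10.13.20 SECONDARY READ_ONLY  1.0
11aae7e7-cdd0-11e4-8643-0800273b9c80 10.10.13.30 PRIMARY   READ_WRITE 1.0
"""

def find_primary(text):
    """Return the address field of the first row whose status is PRIMARY."""
    for line in text.splitlines():
        fields = line.split()
        if len(fields) == 5 and fields[2] == "PRIMARY":
            return fields[1]
    return None

print(find_primary(sample_output))  # 10.10.13.30
```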
And then the High Availability monitoring is started with mysqlfabric group activate myfarm. Those are the steps for setting up an HA Fabric farm.
So let's write a test program, in Python, to see which server we talk to when we connect to the HA farm.
import mysql.connector
from mysql.connector import fabric

fabric_host = "localhost"
fabric_user = "admin"
fabric_password = "hidave"
fabric_group = "myfarm"

# Connect through the Fabric node rather than to a MySQL server directly
cnx = mysql.connector.connect(
    fabric={'host': fabric_host,
            'username': fabric_user,
            'password': fabric_password,
            'report_errors': True},
    user='root', password='hidave', autocommit=True,
    database='world')

# Route our queries to the myfarm high-availability group
cnx.set_property(group=fabric_group)

cursor = cnx.cursor()
cursor.execute("SHOW VARIABLES LIKE 'server_id'")
for (server_id, value) in cursor:
    print server_id, value
cursor.close()
cnx.close()
And when executed the program proudly returned:
python test01.py
server_id 30
Woo-hoo! The test talked to the server at 10.10.13.30! So we now have a three-node MySQL Fabric Highly Available server farm.