Recently I wanted to install Percona XtraDB Cluster + HAProxy + KeepAlived on CentOS 7, but could not find an all-in-one guide. So I decided to write down all the steps needed to get a running HA solution built on Open Source projects. Nowadays high availability is one of the main concerns of both big and small companies. Minimizing downtime per year and keeping the infrastructure running sustainably is a mission every company tries to achieve. There are different approaches to this mission. One of them, as large companies do, is to buy expensive software and support from vendors. But small companies cannot afford that, and this is where the true power of Open Source shows up: you can build your own High Availability solution using free yet powerful Open Source projects. In this article we will show you how to achieve database clustering and load balancing. We will use:
- Percona XtraDB Cluster – a High Availability and Scalability solution for MySQL users. See What is PXC?
- HAProxy – a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. See the HAproxy official site for further information.
- KeepAlived – routing software written in C. The main goal of the project is to provide simple and robust facilities for load balancing and high availability on Linux systems and Linux-based infrastructures. See the KeepAlived official site for further information.
General Architecture
Before diving into the step-by-step guide, let's see what we want to accomplish by the end of the tutorial:
As the diagram shows, at least 3 CentOS 6.5/7 instances are needed for the PXC Galera Cluster setup. In this tutorial that means 3 CentOS 7 instances, all with static local IPs and only 1 public IP for KeepAlived. Another advantage of this architecture is that it reduces public IP usage to 1.
{: .note } >
For this tutorial we will not use a public IP since it is a test environment; all CentOS machines will use local IPs.
Prerequisites
We need 3 instances of CentOS 7 with a minimal installation. You should also give static IPs to all servers. Here are sample IPs that you may give to your CentOS instances for testing this tutorial:
- Node1 (1 GB RAM, 1 CPU VM) – 192.168.1.99 – (HAproxy1)
- Node2 (1 GB RAM, 1 CPU VM) – 192.168.1.88 – (HAproxy2)
- Node3 (1 GB RAM, 1 CPU VM) – 192.168.1.77 – (HAproxy3)
Also keep in mind that we need one more IP for KeepAlived, which will act as the Virtual IP for the HAproxy instances.
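If your servers use NetworkManager, one way to assign such a static IP is with nmcli. Below is only a sketch for Node1; the connection name (eth0), gateway, and DNS server are assumptions that you should replace with your own values:
[root@node1 ~]# nmcli con mod eth0 ipv4.method manual ipv4.addresses 192.168.1.99/24 ipv4.gateway 192.168.1.1 ipv4.dns 8.8.8.8
[root@node1 ~]# nmcli con up eth0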
Installing and Configuring PXC
Installing and Running PXC:
The easiest way to install is through the official yum repository. Activate the Percona repo on all 3 nodes:
-- CentOS 7
yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
Install the repository package on all 3 nodes, then run:
[root@node1, 2, 3 ~]# yum install Percona-XtraDB-Cluster-56
All needed packages and dependencies will be resolved automatically, and at this point you should be able to start the newly installed PXC instances:
-- CentOS 7
[root@node1, 2, 3 ~]# systemctl start mysql.service
[root@node1, 2, 3 ~]# mysql_secure_installation
At this point, we have 3 running Percona Galera Cluster instances.
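A quick sanity check on each node does not hurt before going further:
[root@node1, 2, 3 ~]# systemctl status mysql.service
[root@node1, 2, 3 ~]# mysql -u root -p -e "SELECT @@version, @@version_comment;"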
Installing several other needed packages:
We need several other packages from the EPEL repo, such as rsync for PXC SST (see the documentation). That's why we must activate the EPEL repo (yum install epel-release) on all 3 nodes and then install:
-- CentOS 7
[root@node1, 2, 3 ~]# yum install socat rsync
Create a binary log folder outside MySQL's datadir:
[root@node1, 2, 3 ~]# mkdir /var/lib/mysql-binlog
[root@node1, 2, 3 ~]# chown mysql:mysql /var/lib/mysql-binlog
-- SElinux
[root@centos7-node1 ~]# yum install policycoreutils-python
[root@centos7-node1 ~]# semanage fcontext -a -t mysqld_db_t /var/lib/mysql-binlog
[root@centos7-node1 ~]# restorecon -v /var/lib/mysql-binlog
restorecon reset /var/lib/mysql-binlog context unconfined_u:object_r:var_lib_t:s0->unconfined_u:object_r:mysqld_db_t:s0
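You can verify that the new context stuck (the exact output may differ slightly between policy versions):
[root@node1, 2, 3 ~]# ls -dZ /var/lib/mysql-binlog
drwxr-xr-x. mysql mysql unconfined_u:object_r:mysqld_db_t:s0 /var/lib/mysql-binlog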
server.cnf configuration file for PXC nodes:
Now we must edit the server.cnf file (default location '/etc/my.cnf.d/server.cnf') on all 3 nodes to reflect our needs. The following content is the same on all 3 nodes:
[mysqld]
# General #
datadir = /var/lib/mysql
socket = /var/lib/mysql/mysql.sock
default_storage_engine = InnoDB
lower_case_table_names = 1
# MyISAM #
key_buffer_size = 8M
myisam_recover = FORCE,BACKUP
# SAFETY #
max_allowed_packet = 16M
max_connect_errors = 1000000
skip_name_resolve
sysdate_is_now = 1
innodb = FORCE
thread_stack = 262144
back_log = 2048
performance_schema = OFF
skip_show_database
# Binary Logging #
log_bin = /var/lib/mysql-binlog/mysql-bin
log_bin_index = /var/lib/mysql-binlog/mysql-bin
expire_logs_days = 14
sync_binlog = 1
binlog_format = row
# CACHES AND LIMITS #
tmp_table_size = 16M
max_heap_table_size = 16M
query_cache_type = 0
query_cache_size = 0
query_cache_limit = 0
max_connections = 10000
thread_cache_size = 500
open-files-limit = 65535
table_definition_cache = 7000
table_open_cache = 7000
# InnoDB Related #
innodb_log_files_in_group = 2
innodb_autoinc_lock_mode =2
#innodb_locks_unsafe_for_binlog =1
innodb_log_file_size =100M
innodb_file_per_table
innodb_flush_log_at_trx_commit =2
innodb_buffer_pool_size =150M
And now the per-node settings for the configuration file:
**Node1**
#
# * Galera-related settings
#
[galera]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.1.99,192.168.1.88,192.168.1.77"
wsrep_cluster_name='TestingCluster'
wsrep_node_address='192.168.1.99'
wsrep_node_name='node1'
wsrep_sst_method=rsync
wsrep_sst_auth=sstuser:12345
bind-address=0.0.0.0
**Node2**
#
# * Galera-related settings
#
[galera]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.1.99,192.168.1.88,192.168.1.77"
wsrep_cluster_name='TestingCluster'
wsrep_node_address='192.168.1.88'
wsrep_node_name='node2'
wsrep_sst_method=rsync
wsrep_sst_auth=sstuser:12345
bind-address=0.0.0.0
**Node3**
#
# * Galera-related settings
#
[galera]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.1.99,192.168.1.88,192.168.1.77"
wsrep_cluster_name='TestingCluster'
wsrep_node_address='192.168.1.77'
wsrep_node_name='node3'
wsrep_sst_method=rsync
wsrep_sst_auth=sstuser:12345
bind-address=0.0.0.0
Save the file on all 3 nodes and exit the editor.
After editing the configuration files, create the wsrep_sst_auth user on all 3 nodes; as mentioned in server.cnf, its name is 'sstuser':
[root@node1, 2, 3 ~]# mysql -u root -p
mysql> create user 'sstuser'@'%' identified by '12345';
Query OK, 0 rows affected (0.01 sec)
mysql> grant all on *.* to 'sstuser'@'%';
Query OK, 0 rows affected (0.00 sec)
Set SELinux to permissive mode on all 3 nodes, as SELinux prevents Galera from starting:
-- CentOS 7
[root@node1, 2, 3 ~]# setenforce 0
‘/etc/hosts’ file contents for PXC Nodes:
**Node1**
[root@node1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
127.0.0.1 node1.localdomain node1
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.99 localhost localhost.localdomain node1.localdomain node1
192.168.1.88 node2
192.168.1.77 node3
**Node2**
[root@node2 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1 node2.localdomain node2
192.168.1.88 localhost localhost.localdomain node2.localdomain node2
192.168.1.99 node1
192.168.1.77 node3
**Node3**
[root@node3 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1 node3.localdomain node3
192.168.1.77 localhost localhost.localdomain node3
192.168.1.99 node1
192.168.1.88 node2
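A quick sanity check that host name resolution works between the nodes (run from Node1, assuming ICMP is not blocked):
[root@node1 ~]# ping -c 1 node2
[root@node1 ~]# ping -c 1 node3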
Firewall (firewalld) settings related to PXC nodes:
-- CentOS 7
**Node1**
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="4567" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="3306" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="4567" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="4444" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="3306" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="4444" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="4568" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="4568" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="9200" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="9200" protocol="tcp" accept' --permanent
firewall-cmd --reload
**Node2**
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="4567" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="3306" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="4567" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="4444" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="3306" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="4444" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="4568" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="4568" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="9200" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="9200" protocol="tcp" accept' --permanent
firewall-cmd --reload
**Node3**
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="4567" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="3306" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="4567" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="4444" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="3306" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="4444" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="4568" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="4568" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="9200" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="9200" protocol="tcp" accept' --permanent
firewall-cmd --reload
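To double-check that the rules were saved, list them on each node:
[root@node1, 2, 3 ~]# firewall-cmd --list-rich-rules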
Bootstrapping (starting) the Cluster
As we have 3 nodes, one of them must be, so to speak, the "head" node: the first time we must choose one node as the starting point. For us this is Node1. Stop MySQL on Node1 and run the following commands:
-- CentOS 7
[root@node1 ~]# systemctl stop mysql.service
[root@node1 ~]# systemctl start mysql@bootstrap.service
Bootstrapping the cluster. Starting MySQL.. SUCCESS!
(There is no bootstrap command for CentOS 7 for MariaDB cluster - already reported as: [MDEV-7752](https://mariadb.atlassian.net/browse/MDEV-7752))
Then start the other 2 nodes as usual:
-- CentOS 7
[root@node2 ~]# systemctl start mysql.service
[root@node3 ~]# systemctl start mysql.service
After the success messages, log in to PXC on Node1 and run:
PXC [(none)]> show status like 'wsrep%';
The important part of output is:
| wsrep_cluster_conf_id | 5 |
| wsrep_cluster_size | 3 |
| wsrep_connected | ON |
| wsrep_ready | ON |
| wsrep_cluster_status | Primary |
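The same check can also be run non-interactively from the shell on any node, which is handy for quick monitoring (a small sketch; adjust the credentials to your setup):
[root@node1 ~]# mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+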
As you can see the cluster size is 3, which means all 3 nodes are connected and working, and the output also shows that the cluster is ready to accept connections. Let's create a database from Node3 and check that it appears on the other nodes:
[root@node3 ~]# mysql -u root -p
PXC [(none)]> create database pxc_test;
Query OK, 1 row affected (0.10 sec)
[root@node2 ~]# mysql -u root -p
PXC [(none)]> show databases like 'pxc%';
+-----------------+
| Database (pxc%) |
+-----------------+
| pxc_test |
+-----------------+
1 row in set (0.00 sec)
[root@node1 ~]# mysql -u root -p
PXC [(none)]> show databases like 'pxc%';
+-----------------+
| Database (pxc%) |
+-----------------+
| pxc_test |
+-----------------+
1 row in set (0.00 sec)
Enabling SELinux:
Disabling SELinux permanently or running in permissive mode all the time is dangerous from a security point of view. Since we set SELinux to permissive mode, it did not prevent any Galera actions, but it logged all related information into the audit.log file. Using this file we can create the needed rules and re-enable SELinux. On all 3 nodes:
-- CentOS 7
[root@node1, 2, 3 ~]# yum install policycoreutils-python
[root@node1, 2, 3 ~]# grep mysql /var/log/audit/audit.log | audit2allow -M galera
[root@node1, 2, 3 ~]# semodule -i galera.pp
[root@node1, 2, 3 ~]# setenforce 1
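You can confirm that the module is loaded and SELinux is enforcing again:
[root@node1, 2, 3 ~]# semodule -l | grep galera
[root@node1, 2, 3 ~]# getenforce
Enforcing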
Configuring Clustercheck script for cluster health check
This script is installed with PXC (/usr/bin/clustercheck). Clustercheck is a simple script that makes a proxy (i.e. HAProxy) capable of monitoring Percona XtraDB Cluster nodes properly.
Before setup, install xinetd on all 3 nodes:
-- CentOS 7
[root@node1, 2, 3 ~]# yum install xinetd
[root@node1, 2, 3 ~]# systemctl start xinetd.service
[root@node1, 2, 3 ~]# systemctl enable xinetd.service
After installing xinetd and ensuring that it will start automatically on system reboot, we need to configure the mysqlchk file:
[root@node1, 2, 3 ~]# nano /etc/xinetd.d/mysqlchk
-- Add following lines to this file
service mysqlchk
{
disable = no
flags = REUSE
socket_type = stream
port = 9200
wait = no
user = nobody
server = /usr/bin/clustercheck
log_on_failure += USERID
only_from = 0.0.0.0/0
per_source = UNLIMITED
}
Then edit the /etc/services file. Locate the wap-wsp entries, comment them out as follows, and add a mysqlchk entry:
[root@node1, 2, 3 ~]# nano /etc/services
mysqlchk 9200/tcp # SERVICE for Checking PXC Cluster
#wap-wsp 9200/tcp # WAP connectionless session service
#wap-wsp 9200/udp # WAP connectionless session service
Now create the clustercheck database user; since the cluster is already replicating, it is enough to run this on one node:
create user 'clustercheckuser'@'localhost' identified by 'clustercheckpassword!';
grant process on *.* to 'clustercheckuser'@'localhost';
Restart Xinetd on all Nodes:
-- CentOS 7
[root@node1, 2, 3 ~]# systemctl restart xinetd
And the final step for this section is to check the script:
[root@node1, 2, 3 ~]# clustercheck
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 40
Percona XtraDB Cluster Node is synced.
As you see everything is OK and it works.
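Since clustercheck is now exposed through xinetd on port 9200, you can also query it over HTTP, which is exactly the check HAProxy will perform later. For example, from Node1 towards Node2 (assuming port 9200 is reachable, as allowed by our firewall rules):
[root@node1 ~]# curl http://192.168.1.88:9200/
Percona XtraDB Cluster Node is synced.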
Installing and Configuring HAproxy
As we have mentioned, we do not have dedicated VMs for HAproxy. That's why the HAproxy instances will run on the same machines as PXC.
Install HAproxy:
[root@node1, 2, 3 ~]# yum install haproxy
[root@node1, 2, 3 ~]# systemctl enable haproxy.service
Fix HAproxy logging: by default HAproxy does not log anything, so we fix it by adding a haproxy.conf file to the /etc/rsyslog.d directory:
[root@node1, 2, 3 ~]# cd /etc/rsyslog.d/
[root@node1, 2, 3 rsyslog.d]# nano haproxy.conf
-- Add the following lines into the file:
$ModLoad imudp
$UDPServerRun 514
$template Haproxy,"%msg%\n"
local0.=info -/var/log/haproxy.log;Haproxy
local0.notice -/var/log/haproxy-status.log;Haproxy
local0.* ~
-- SElinux related command:
[root@node1, 2, 3 ~]# /sbin/restorecon -v -F /etc/rsyslog.d/haproxy.conf
[root@node1, 2, 3 ~]# systemctl restart rsyslog.service
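To make sure rsyslog picks up the new rule even before HAproxy is running, you can send a test message to the local0 facility and check the log file (the message text is arbitrary):
[root@node1, 2, 3 ~]# logger -p local0.info "haproxy logging test"
[root@node1, 2, 3 ~]# tail -n 1 /var/log/haproxy.log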
Edit haproxy.cfg config file:
**HAproxy1, 2, 3**
[root@node1, 2, 3 ~]# cd /etc/haproxy/
[root@node1, 2, 3 ~]# mv haproxy.cfg haproxy.cfg.orig
[root@node1, 2, 3 ~]# nano haproxy.cfg
global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
maxconn 4096
user haproxy
group haproxy
nbproc 1
pidfile /var/run/haproxy.pid
defaults
log global
option tcplog
option dontlognull
retries 3
maxconn 4096
option redispatch
timeout connect 50000ms
timeout client 50000ms
timeout server 50000ms
listen PXC-writes
bind 0.0.0.0:3311
mode tcp
# option mysql-check user haproxy
option httpchk
server node1 192.168.1.99:3306 check port 9200
server node2 192.168.1.88:3306 check port 9200 backup
server node3 192.168.1.77:3306 check port 9200 backup
listen PXC-reads
bind 0.0.0.0:3312
mode tcp
balance leastconn
# option mysql-check user haproxy
option httpchk
server node1 192.168.1.99:3306 check port 9200
server node2 192.168.1.88:3306 check port 9200
server node3 192.168.1.77:3306 check port 9200
# HAProxy web ui
listen stats 0.0.0.0:9000
mode http
stats enable
stats uri /haproxy
stats realm HAProxy\ Statistics
stats auth haproxy:haproxy
stats admin if TRUE
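Before applying the SELinux context and starting the service, it is worth validating the configuration syntax with HAProxy's built-in check mode:
[root@node1, 2, 3 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg
Configuration file is valid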
-- SElinux related command:
[root@node1 ~]# /sbin/restorecon -v -F /etc/haproxy/haproxy.cfg
[root@node1 ~]# setsebool haproxy_connect_any on
[root@node1 ~]# systemctl start haproxy.service
Let's explore the config file a bit more. Port 9000 is used for the HAproxy monitoring Web UI. Another thing to remember is that when using PXC Galera Cluster with the SST options mysqldump or rsync (the default), nodes can block each other from applying updates while DDL and DML statements are executing. To avoid such situations and not get stuck with deadlocks, we decided to separate write operations. In other words, write operations (insert, update, delete, etc.) will go only to Node1. So on the application side you should send write operations to port 3311, as configured in haproxy.cfg, and read operations to port 3312. There is a non-locking SST option, XtraBackup (the famous hot online backup tool for MySQL), but it is the subject of another topic.
And of course we should check our work.
Check for listening ports:
[root@node1 ~]# netstat -ntpl | grep haproxy
tcp 0 0 0.0.0.0:9000 0.0.0.0:* LISTEN 11902/haproxy
tcp 0 0 0.0.0.0:3311 0.0.0.0:* LISTEN 11902/haproxy
tcp 0 0 0.0.0.0:3312 0.0.0.0:* LISTEN 11902/haproxy
[root@node2 haproxy]# netstat -ntpl | grep haproxy
tcp 0 0 0.0.0.0:9000 0.0.0.0:* LISTEN 12846/haproxy
tcp 0 0 0.0.0.0:3311 0.0.0.0:* LISTEN 12846/haproxy
tcp 0 0 0.0.0.0:3312 0.0.0.0:* LISTEN 12846/haproxy
As we have mentioned, port 9000 serves the HAproxy statistics Web UI, so let's connect to the IP of one of the HAproxy servers. For me the URL is http://192.168.1.88:9000/haproxy (the stats uri we set in haproxy.cfg). On the very first connection there will be a pop-up asking for a login and password. This information is stored in /etc/haproxy/haproxy.cfg under stats auth haproxy:haproxy, so the login is haproxy and the password is haproxy as well.
You will see the statistics page with correctly working Cluster Nodes:
Now let's check MySQL connectivity over HAproxy. For this purpose create a sample database user on the PXC cluster. Our cluster is already replicating, so it is sufficient to run these commands on one node and the others will pick them up automatically:
create user 'all'@'%' identified by '12345';
grant all on *.* to 'all'@'%';
Before testing, open these ports on all 3 nodes:
[root@node1, 2, 3 ~]# firewall-cmd --add-port=3311/tcp --permanent
success
[root@node1, 2, 3 ~]# firewall-cmd --add-port=3312/tcp --permanent
success
[root@node1, 2, 3 ~]# firewall-cmd --reload
success
Our testing result should be something like:
-- make connection through HAproxy2(node2) server to port 3311 (our write node)
sh@sh--work:~$ mysql -u all -p --host=192.168.1.88 --port=3311 -e "select @@version, @@hostname";
Enter password:
+--------------------+---------------+
| @@version | @@hostname |
+--------------------+---------------+
| 5.6.24-72.2-56-log | node1 |
+--------------------+---------------+
-- Second to port 3312 (one of our read nodes)
sh@sh--work:~$ mysql -u all -p --host=192.168.1.88 --port=3312 -e "select @@version, @@hostname";
Enter password:
+--------------------+---------------+
| @@version | @@hostname |
+--------------------+---------------+
| 5.6.24-72.2-56-log | node1 |
+--------------------+---------------+
sh@sh--work:~$ mysql -u all -p --host=192.168.1.88 --port=3312 -e "select @@version, @@hostname";
Enter password:
+--------------------+---------------+
| @@version | @@hostname |
+--------------------+---------------+
| 5.6.24-72.2-56-log | node2 |
+--------------------+---------------+
sh@sh--work:~$ mysql -u all -p --host=192.168.1.88 --port=3312 -e "select @@version, @@hostname";
Enter password:
+--------------------+---------------+
| @@version | @@hostname |
+--------------------+---------------+
| 5.6.24-72.2-56-log | node3 |
+--------------------+---------------+
Installing and Configuring KeepAlived
The final step in our environment is installing and configuring KeepAlived, which provides the Virtual IP and switch-over between the HAproxy instances. The idea is that when one of the HAproxy servers fails, another must take over the same work without interrupting the whole architecture. The new IP address for us is 192.168.1.199, which is the Virtual IP for our HAproxy servers. We will connect to MySQL (PXC) through this IP, and KeepAlived will decide which HAproxy instance receives the connection. Let's test:
-- Pay Attention to host IP address
sh@sh--work:~$ mysql -u all -p --host=192.168.1.199 --port=3311 -e "select @@version, @@hostname";
Enter password:
+--------------------+---------------+
| @@version | @@hostname |
+--------------------+---------------+
| 5.6.24-72.2-56-log | node1 |
+--------------------+---------------+
sh@sh--work:~$ mysql -u all -p --host=192.168.1.199 --port=3312 -e "select @@version, @@hostname";
Enter password:
+--------------------+---------------+
| @@version | @@hostname |
+--------------------+---------------+
| 5.6.24-72.2-56-log | node2 |
+--------------------+---------------+
We send the connection to the KeepAlived virtual IP and it passes it on to one of the HAproxy servers. Enough theory, let's achieve our goal.
Installing/Configuring
**HAproxy1, 2, 3**
[root@node1, 2, 3 ~]# yum install keepalived
[root@node1, 2, 3 ~]# systemctl enable keepalived.service
[root@node1, 2, 3 ~]# cd /etc/keepalived/
[root@node1, 2, 3 ~]# mv keepalived.conf keepalived.conf.orig
[root@node1, 2, 3 ~]# nano keepalived.conf
-- Add following lines to file
vrrp_script chk_haproxy {
script "killall -0 haproxy" # verify the pid is exist or not
interval 2 # check every 2 seconds
weight 2 # add 2 points of prio if OK
}
vrrp_instance VI_1 {
interface eth0 # interface to monitor
state MASTER
virtual_router_id 51 # assign a unique ID to this VRRP instance
priority 101 # 101 on master, 100 on backup1, 99 on backup2
authentication {
auth_type PASS
auth_pass 12345
}
virtual_ipaddress {
192.168.1.199/24 # the virtual IP
}
track_script {
chk_haproxy
}
}
-- Save and exit file editing.
[root@node1, 2, 3 ~]# /sbin/restorecon -v -F /etc/keepalived/keepalived.conf
Starting KeepAlived
[root@node1, 2, 3 ~]# systemctl start keepalived.service
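On the MASTER node (HAproxy1 in our case) the virtual IP should now be attached to the monitored interface. You can verify it with ip addr; the interface name eth0 comes from our keepalived.conf, so adjust it if yours differs:
[root@node1 ~]# ip addr show eth0 | grep 192.168.1.199
    inet 192.168.1.199/24 scope global secondary eth0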
Testing MySQL connection with KeepAlived
sh@sh--work:~$ mysql -u all -p --host=192.168.1.199 --port=3311 -e "select @@version, @@hostname";
Enter password:
+--------------------+---------------+
| @@version | @@hostname |
+--------------------+---------------+
| 5.6.24-72.2-56-log | node1 |
+--------------------+---------------+
sh@sh--work:~$ mysql -u all -p --host=192.168.1.199 --port=3312 -e "select @@version, @@hostname";
Enter password:
+--------------------+---------------+
| @@version | @@hostname |
+--------------------+---------------+
| 5.6.24-72.2-56-log | node2 |
+--------------------+---------------+
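As a final experiment you can simulate a failure of the MASTER: stop HAproxy on node1, the chk_haproxy script will lower its effective priority, the virtual IP should move to one of the backup nodes, and connections through 192.168.1.199 should keep working. A rough sketch (do not forget to start HAproxy again afterwards):
[root@node1 ~]# systemctl stop haproxy.service
[root@node2 ~]# ip addr show eth0 | grep 192.168.1.199
    inet 192.168.1.199/24 scope global secondary eth0
sh@sh--work:~$ mysql -u all -p --host=192.168.1.199 --port=3312 -e "select @@hostname";
[root@node1 ~]# systemctl start haproxy.service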
Conclusion
In this tutorial you learned how to install and configure PXC on CentOS 7 and how to load balance the cluster using the well-known HAproxy. As a bonus you learned how to install and configure KeepAlived and access the database over a Virtual IP, which adds a further level of redundancy to the High Availability approach. If you spot any errors or have suggestions, please do not hesitate to get in touch. Thank you for reading.