In this post, we’ll look at scaling Percona XtraDB Cluster with ProxySQL in Docker Swarm.
In my previous post, I showed how to deploy Percona XtraDB Cluster on multiple nodes in a Docker network.
The intention is to be able to start and stop nodes and increase or decrease the cluster size dynamically. This means we need to track the running nodes and also have an easy way to connect to the cluster.
So we need two components: service discovery to register nodes, and ProxySQL to handle incoming traffic. Service discovery support is already bundled with the Percona XtraDB Cluster Docker images, and I have experimental images for ProxySQL at https://hub.docker.com/r/perconalab/proxysql/.
For multi-node management, we also need an orchestration tool, and Docker Swarm is a good starting point. Docker Swarm is simple and provides only basic functionality, but it is enough to get going. (For more complicated setups, consider Kubernetes.)
I assume you have Docker Swarm running; if not, there is good material available on how to get it rolling. You also need service discovery running (see http://chunqi.li/2015/11/09/docker-multi-host-networking/ and my previous post).
To start a cluster with ProxySQL, we need a docker-compose definition file, docker-compose.yml:

```yaml
version: '2'
services:
  proxy:
    image: perconalab/proxysql
    networks:
      - front
      - Theistareykjarbunga
    ports:
      - "3306:3306"
      - "6032:6032"
    env_file: .env
  percona-xtradb-cluster:
    image: percona/percona-xtradb-cluster:5.6
    networks:
      - Theistareykjarbunga
    ports:
      - "3306"
    env_file: .env
networks:
  Theistareykjarbunga:
    driver: overlay
  front:
    driver: overlay
```
For convenience, both proxy and percona-xtradb-cluster share the same environment file (.env):

```
MYSQL_ROOT_PASSWORD=secret
DISCOVERY_SERVICE=10.20.2.4:2379
CLUSTER_NAME=cluster15
MYSQL_PROXY_USER=proxyuser
MYSQL_PROXY_PASSWORD=s3cret
```
You can also get both files from https://github.com/percona/percona-docker/tree/master/pxc-56/swarm.
To start both the cluster node and the proxy:

```
docker-compose up -d
```
We can start as many Percona XtraDB Cluster nodes as we want:
```
docker-compose scale percona-xtradb-cluster=5
```
The command above will make sure that five nodes are running.
We can check it with docker ps:
```
docker ps
CONTAINER ID   IMAGE                                COMMAND            CREATED          STATUS          PORTS                                                  NAMES
725f5f2699cc   percona/percona-xtradb-cluster:5.6   "/entrypoint.sh "  34 minutes ago   Up 38 minutes   4567-4568/tcp, 10.20.2.66:10284->3306/tcp              smblade04/swarm_percona-xtradb-cluster_5
1c85ea1367e8   percona/percona-xtradb-cluster:5.6   "/entrypoint.sh "  34 minutes ago   Up 38 minutes   4567-4568/tcp, 10.20.2.66:10285->3306/tcp              smblade04/swarm_percona-xtradb-cluster_2
df87e9c1342e   percona/percona-xtradb-cluster:5.6   "/entrypoint.sh "  34 minutes ago   Up 38 minutes   4567-4568/tcp, 10.20.2.66:10283->3306/tcp              smblade04/swarm_percona-xtradb-cluster_4
cbb82f7a9789   perconalab/proxysql                  "/entrypoint.sh "  36 minutes ago   Up 40 minutes   10.20.2.66:3306->3306/tcp, 10.20.2.66:6032->6032/tcp   smblade04/swarm_proxy_1
59e049fe22a9   percona/percona-xtradb-cluster:5.6   "/entrypoint.sh "  36 minutes ago   Up 40 minutes   4567-4568/tcp, 10.20.2.66:10282->3306/tcp              smblade04/swarm_percona-xtradb-cluster_1
0921a2611c3c   percona/percona-xtradb-cluster:5.6   "/entrypoint.sh "  37 minutes ago   Up 42 minutes   4567-4568/tcp, 10.20.2.5:32774->3306/tcp               centos/swarm_percona-xtradb-cluster_3
```
We can see that Docker scheduled the containers on two different nodes. The ProxySQL container is smblade04/swarm_proxy_1, and its admin connection point is 10.20.2.66:6032.
To register Percona XtraDB Cluster in ProxySQL we can just execute the following:
```
docker exec -it smblade04/swarm_proxy_1 add_cluster_nodes.sh
```
The script connects to the service discovery endpoint DISCOVERY_SERVICE (defined in the .env file) and registers the nodes in ProxySQL.
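If you'd rather register a node by hand (or want a rough idea of what such a script does), ProxySQL is configured through its admin interface on port 6032. A minimal sketch, assuming the default admin/admin credentials and the hostgroup 0 used in this setup; the IP is just an example node address:

```sql
-- Hypothetical sketch: registering one cluster node manually via the
-- ProxySQL admin interface (connect with: mysql -h10.20.2.66 -P6032 -uadmin -padmin).
INSERT INTO mysql_servers (hostgroup_id, hostname, port)
VALUES (0, '10.0.14.2', 3306);

-- Changes to the admin tables only take effect after being loaded to
-- runtime, and survive a restart only after being saved to disk:
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```

A registration script would typically loop over the node IPs returned by the discovery service and issue one such INSERT per node before the LOAD/SAVE pair.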
To check that they are all running:
```
mysql -h10.20.2.66 -P6032 -uadmin -padmin

MySQL [(none)]> select * from stats.stats_mysql_connection_pool;
+-----------+-----------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host  | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+-----------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 0         | 10.0.14.2 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 212        |
| 0         | 10.0.14.4 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 155        |
| 0         | 10.0.14.5 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 136        |
| 0         | 10.0.14.6 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 123        |
| 0         | 10.0.14.7 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 287        |
+-----------+-----------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
```
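Note that stats_mysql_connection_pool shows runtime statistics. If the list ever looks wrong, it can help to compare the configured server list against what the proxy threads are actually using, via the admin tables:

```sql
-- Servers as configured (in-memory admin configuration):
SELECT hostgroup_id, hostname, port, status FROM mysql_servers;

-- Servers actually loaded into the runtime used by the proxy threads;
-- a node missing here usually means LOAD MYSQL SERVERS TO RUNTIME
-- was not issued after the configuration change:
SELECT hostgroup_id, hostname, port, status FROM runtime_mysql_servers;
```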
We can connect to a cluster using a ProxySQL endpoint:
```
mysql -h10.20.2.66 -P3306 -uproxyuser -ps3cret -e "SELECT @@hostname"
+--------------+
| @@hostname   |
+--------------+
| 59e049fe22a9 |
+--------------+

mysql -h10.20.2.66 -P3306 -uproxyuser -ps3cret -e "SELECT @@hostname"
+--------------+
| @@hostname   |
+--------------+
| 725f5f2699cc |
+--------------+
```
We can see that we connect to a different node every time.
Now if we want to get crazy and make sure we have ten Percona XtraDB Cluster nodes running, we can execute the following:
```
docker-compose scale percona-xtradb-cluster=10
Creating and starting swarm_percona-xtradb-cluster_6 ...
Creating and starting swarm_percona-xtradb-cluster_7 ...
Creating and starting swarm_percona-xtradb-cluster_8 ...
Creating and starting swarm_percona-xtradb-cluster_9 ...
Creating and starting swarm_percona-xtradb-cluster_10 ...
```
And Docker Swarm will make sure ten nodes are running.
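Besides counting containers, you can confirm from the database side that all ten nodes actually joined the Galera cluster (rather than merely starting as containers) by checking the wsrep_cluster_size status variable through the proxy endpoint; any node will answer:

```sql
-- Run through the ProxySQL endpoint (10.20.2.66:3306).
-- A healthy ten-node cluster reports Value = 10.
SHOW STATUS LIKE 'wsrep_cluster_size';
```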
I hope this demonstrates how easy it is to start experimenting with multi-node setups using Percona XtraDB Cluster. In the next post, I will show how to use Percona XtraDB Cluster with Kubernetes.