Installing and Configuring the Sun Cluster Data Service for MySQL

A MySQL data service running on a clustered-server model provides orderly startup, shutdown, fault monitoring, and a high-availability (failover) mechanism that the single-server model lacks. It can be downloaded free of charge, and Sun has also released Open HA Cluster, derived from the Sun Cluster 3.2 agents and including the HA MySQL data service. Open HA Cluster also provides the build tools necessary to develop new features and to build and use the code. In addition, the Sun Cluster 3.2 data service for MySQL offers the following advantages:

  • It is easy to install and configure through straightforward GUI and command-line interfaces.

  • Expanded support for Solaris Containers (Solaris Zones)

  • Expanded support for the Service Management Facility (SMF)

  • ZFS is fully supported as a local, highly available file system

The following is the OS and hardware setup for the two-node cluster:

  • Solaris 10 11/06 installed on both nodes

  • Each node has two network interfaces used as point-to-point private interconnects, and one network interface connected to the public network.

  • Two storage devices (SE6120) connected to both nodes


Figure 1: MySQL Replication Clustered Model


In the above MySQL clustered-server configuration, the logical hostname is set as the failover IP address within the same subnet. When the MySQL resource is online, the failover IP is plumbed on the node where the MySQL resource is running. If a failover happens, the IP address moves along with the MySQL resource to the failover node. In this example, the configuration is:



Name                          Interface    IP address
Logical Hostname                           10.6.241.210
Node1 (Sunfire-x64-241-02)    bge0         10.6.241.208
Node2 (Sunfire-x64-241-03)    bge0         10.6.241.209

For the private network interfaces used as the cluster interconnect, the Sun Cluster installation configures the network and assigns the private network addresses. Note: do NOT configure the private network interfaces before installing Sun Cluster; otherwise the installation fails with an error message such as:

Adapter “ce0” is already in use as a public network adapter
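
If a private interconnect adapter was configured by mistake, it can be unconfigured before running the installer (a sketch; the adapter name ce0 comes from the error message above):

Sunfire-x64-241-02# rm /etc/hostname.ce0
Sunfire-x64-241-02# ifconfig ce0 unplumb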

After the hardware and network are set up as above, the general task flow of the MySQL data service installation and configuration is:



Figure 2: MySQL Data Service Installation & Configuration Flowchart

  • Step 1: Plan the Installation

  1. Sun Cluster requires a special file system named /globaldevices, set aside on one of the local disks of each node, for managing global devices. This file system is later mounted as a cluster file system at:

    /global/.devices/node@nodeid
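
    To confirm that the file system is in place on each node before installing (a quick check):

    Sunfire-x64-241-02# df -h /globaldevices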

  2. (Optional) Set up the cluster environment on both nodes, for example in root's profile (the man directory belongs in MANPATH rather than PATH):

    PATH=/usr/bin:/usr/cluster/bin:/usr/local/mysql/bin:/usr/sbin:/usr/ccs/bin:$PATH
    MANPATH=/usr/cluster/man:$MANPATH

  3. On both nodes, update the /etc/inet/ipnodes file with all public hostnames for the cluster:

Sunfire-x64-241-02# vi /etc/inet/ipnodes
        127.0.0.1       localhost
        10.6.241.208    Sunfire-x64-241-02     loghost
        10.6.241.209    Sunfire-x64-241-03
Sunfire-x64-241-03# vi /etc/inet/ipnodes
        127.0.0.1       localhost
        10.6.241.209    Sunfire-x64-241-03     loghost
        10.6.241.208    Sunfire-x64-241-02
  4. On both nodes, add the following entries to the /etc/system file:

set ce:ce_taskq_disable=1
exclude:lofs

The first entry supports ce adapters for the private interconnect, and the second entry disables the loopback file system (LOFS).

  5. The local-mac-address? variable must be set to true for Ethernet adapters. On Solaris x86, use the command: # eeprom local-mac-address?=true; on Solaris SPARC, set the local-mac-address? variable to true from the OBP ok prompt.
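
To verify the setting afterwards (eeprom prints the variable as name=value):

Sunfire-x64-241-02# eeprom local-mac-address?
local-mac-address?=true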
  • Step 2: Install Sun Cluster and the MySQL Data Service

Solaris Cluster 3.2 (Sun Cluster, Sun Cluster Geographic Edition, and the Sun Cluster agents) can be downloaded at:

http://www.sun.com/download/products.xml?id=4581ab9e

On both nodes, run the installer command to bring up the installer GUI. Installing Sun Cluster and Sun Cluster HA for MySQL is straightforward: follow the instructions on the screen, choose the Sun Cluster Core 3.2 software and Sun Cluster HA for MySQL for installation, and select Configure later before the installation begins.
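
For example, if the downloaded distribution was unpacked under /tmp/suncluster (a hypothetical path; the installer sits in the platform directory of the unpacked distribution):

Sunfire-x64-241-02# cd /tmp/suncluster/Solaris_x86
Sunfire-x64-241-02# ./installer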

  • Step 3: Configure Sun Cluster Software

Perform this procedure from one node of the cluster to configure Sun Cluster software on all nodes of the cluster:

1. # /usr/cluster/bin/scinstall

2. From the Main Menu, choose Option 1, “Create a new cluster or add a cluster node”

3. From the New Cluster and Cluster Node Menu, choose Option 1, “Create a new cluster”

4. From the Typical or Custom Mode menu, choose Option 1 for Typical mode

5. At the Cluster Nodes prompt, type the node names: “Sunfire-x64-241-02” and “Sunfire-x64-241-03”

6. Provide the first and second private adapter names: ce0 and bge1

7. Type “no” for “disable automatic quorum device selection”

8. Type “yes” for “create the new cluster”

9. Type “no” for “interrupt cluster creation for sccheck errors”

At this point, the scinstall utility configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into it. Sun Cluster installation output is logged in the /var/cluster/logs/install/scinstall.log.N file.

10. Verify cluster setup:

Sunfire-x64-241-03# clquorum list
Sunfire-x64-241-03
Sunfire-x64-241-02
Sunfire-x64-241-03# clnode status

=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
Sunfire-x64-241-03                                     Online
Sunfire-x64-241-02                                     Online

When the scinstall utility finishes, the cluster is ready for configuring the components that support highly available MySQL, including device groups and file systems.

  • Step 4: Configure Volume Manager and File System

In the shared disk set configuration of this example, two hosts are physically connected to the same set of disks. Each node can control the shared disk set, but only one host controls it at a time; when that node fails, the other node takes over exclusive access to the disks.

  1. From one node, Sunfire-x64-241-02, create a disk set for the MySQL data service, which makes Sunfire-x64-241-02 the primary node:

    Sunfire-x64-241-02# metaset -s mysqlset -a -h Sunfire-x64-241-02 Sunfire-x64-241-03

  2. Verify that the configuration of the disk set is correct and visible from both nodes:

  Sunfire-x64-241-02# metaset

  Set name = mysqlset, Set number = 1
   Host                        Owner
   Sunfire-x64-241-02          Yes
   Sunfire-x64-241-03

  3. From the primary node, Sunfire-x64-241-02, list the DID mappings:

Sunfire-x64-241-02# cldevice show |grep Device
  === DID Device Instances ===                   
...

DID Device Name:                                /dev/did/rdsk/d5
  Full Device Path:                                Sunfire-x64-241-02:/dev/rdsk/c3t60003BACCC90200046264D58000A22E3d0
  Full Device Path:                                Sunfire-x64-241-03:/dev/rdsk/c5t60003BACCC90200046264D58000A22E3d0

...

  4. Add /dev/did/rdsk/d5 to the mysqlset disk set:

Sunfire-x64-241-02# metaset -s mysqlset -a /dev/did/rdsk/d5

  5. Verify that the configuration of the disk set is correct:

Sunfire-x64-241-02# metaset -s mysqlset

  6. On both nodes, create the /etc/lvm/md.tab file with the following entries (d0 is a one-way mirror over the DID device; d1, d2, and d3 are 50-Gbyte soft partitions carved from d0):

        mysqlset/d0 -m mysqlset/d10
        mysqlset/d10 1 1 /dev/did/rdsk/d5s0
        mysqlset/d1 -p mysqlset/d0 50G
        mysqlset/d2 -p mysqlset/d0 50G
        mysqlset/d3 -p mysqlset/d0 50G

7.  From Sunfire-x64-241-02, take ownership of the mysqlset disk set and activate the volumes:
    Sunfire-x64-241-02# cldevicegroup switch -n Sunfire-x64-241-02 mysqlset
    Sunfire-x64-241-02# metainit -s mysqlset -a
8.  Verify the status of the volumes in the disk set:
 Sunfire-x64-241-02# metastat
  ...
  Status: Okay
  ...
9. Create the cluster file systems for use with Sun Cluster HA for MySQL. From Sunfire-x64-241-02, create the file systems:
    Sunfire-x64-241-02# newfs /dev/md/mysqlset/rdsk/d1
    Sunfire-x64-241-02# newfs /dev/md/mysqlset/rdsk/d2
    Sunfire-x64-241-02# newfs /dev/md/mysqlset/rdsk/d3

10. On both nodes, create the mount-point directories for the file systems:

# mkdir -p /global/mysql         (MySQL master and slave servers)
# mkdir -p /global/mysql-data1   (data directory for the MySQL master server)
# mkdir -p /global/mysql-data2   (data directory for the MySQL slave server)

11. On both nodes, add entries to the /etc/vfstab file for the above mount points.
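
For example, the /global/mysql entry might look like the following (a sketch; the mount options match the mount output shown in the next step, and the two data file systems are entered the same way):

#device to mount         device to fsck            mount point    FS type  fsck  mount at boot  options
/dev/md/mysqlset/dsk/d1  /dev/md/mysqlset/rdsk/d1  /global/mysql  ufs      2     yes            global,logging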

12. On both nodes, mount the file systems, and verify that the file systems are mounted
# mount
 /global/mysql on /dev/md/mysqlset/dsk/d1  read/write/setuid/devices/intr/largefiles/logging/noquota/global/xattr/nodfratime/onerror=panic/dev=1544001 on Tue Aug 14 17:12:45 2007

/global/mysql-data1 on /dev/md/mysqlset/dsk/d2 read/write/setuid/devices/intr/forcedirectio/largefiles/logging/noquota/global/xattr/nodfratime/onerror=panic/dev=1544002 on Tue Aug 14 17:12:50 2007

/global/mysql-data2 on /dev/md/mysqlset/dsk/d3 read/write/setuid/devices/intr/forcedirectio/largefiles/logging/noquota/global/xattr/nodfratime/onerror=panic/dev=1544003 on Tue Aug 14 17:12:56 2007
13. Verify the Sun Cluster installation and configuration before registering and configuring Sun Cluster HA for MySQL:
 Sunfire-x64-241-02# cluster check
  • Step 5: Configure Sun Cluster Resources for MySQL

  1. Register the SUNW.gds and SUNW.HAStoragePlus resource types:

    Sunfire-x64-241-02# scrgadm -a -t SUNW.gds

    Sunfire-x64-241-02# scrgadm -a -t SUNW.HAStoragePlus

  2. Create a MySQL resource group named MySQL-failover-resource-group:

    Sunfire-x64-241-02# scrgadm -a -g MySQL-failover-resource-group

  3. Create the HAStoragePlus resource named MySQL-has-resource in the MySQL-failover-resource-group resource group for the MySQL disk storage:

    Sunfire-x64-241-02# scrgadm -a -j MySQL-has-resource -g MySQL-failover-resource-group -t SUNW.HAStoragePlus -x FilesystemMountPoints=/global/mysql-data1,/global/mysql-data2

  4. Create a logical hostname resource named MySQL-lh-resource:

    Sunfire-x64-241-02# scrgadm -a -L -j MySQL-lh-resource -g MySQL-failover-resource-group -l 10.6.241.210

    To verify that the logical hostname resource is online, run “ifconfig -a” and check whether the virtual IP address is configured on the network interface.

  5. Enable the failover resource group, which includes the MySQL disk storage and logical hostname resources:

    Sunfire-x64-241-02# scswitch -Z -g MySQL-failover-resource-group
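
To confirm that the resource group and both resources are online, and that the failover IP is plumbed (using the commands referenced above):

    Sunfire-x64-241-02# scstat -g
    Sunfire-x64-241-02# ifconfig -a | grep 10.6.241.210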

  • Step 6: Install and Configure MySQL

  1. On both nodes, install MySQL under /usr/local/mysql, which is a symbolic link to /global/mysql (see the sketch after these commands). Then initialize the data directory and set its ownership:

    Sunfire-x64-241-02# mysql_install_db --datadir=/global/mysql-data1

    Sunfire-x64-241-02# chown -R root .

    Sunfire-x64-241-02# chown -R mysql /global/mysql-data1

    Sunfire-x64-241-02# chgrp -R mysql .
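
    A minimal sketch of creating the symbolic link mentioned above, assuming the MySQL distribution has already been extracted to /global/mysql (run on both nodes):

    # ln -s /global/mysql /usr/local/mysql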

  2. Copy the sample files “my.cnf_sample_master” and “my.cnf_sample_slave” from “/opt/SUNWscmys/etc” to the MySQL data directories (/global/mysql-data1 and /global/mysql-data2) of the MySQL master and slave servers.

  3. Modify the sample my.cnf files to point to the correct directories for the data and log files. “bind-address” must be set to the logical hostname's IP address, “10.6.241.210” in this example. Note that bind-address must be set to the IP number, not the hostname, because of an unfixed MySQL bug on Solaris amd64.
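
For illustration, the master's modified my.cnf might contain entries like these (a sketch using this example's paths; the authoritative option list comes from the shipped sample file, and log-bin/server-id are assumed replication settings):

[mysqld]
bind-address = 10.6.241.210
datadir      = /global/mysql-data1
socket       = /tmp/10.6.241.210.sock
log-bin      = /global/mysql-data1/logs/bin-log   # assumed location, matching LOGDIR in Step 7
server-id    = 1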

  • Step 7: Modify MySQL Configuration Files

  1. Go to the /opt/SUNWscmys/util directory and add the cluster's information to the mysql_config file:

 Sunfire-x64-241-02# vi mysql_config
 ...
MYSQL_USER=root
MYSQL_PASSWD=password
MYSQL_HOST=10.6.241.210                                     # logical hostname (IP)
FMUSER=fmuser
FMPASS=fmuser
MYSQL_SOCK=/tmp/10.6.241.210.sock
MYSQL_NIC_HOSTNAME="Sunfire-x64-241-02 Sunfire-x64-241-03"  # physical hostnames
  2. Add the cluster's information to the ha_mysql_config file:

    Sunfire-x64-241-02# vi ha_mysql_config
    ...
    RS=MySQL-failover-resource-group
    RG=MySQL-failover-resource-group
    PORT=3306
    LH=10.6.241.210
    HAS_RS=MySQL-has-resource
...
BASEDIR=/usr/local/mysql
DATADIR=/global/mysql-data1
MYSQLUSER=mysql
MYSQLHOST=Sunfire-x64-241-02
FMUSER=fmuser
FMPASS=fmuser
LOGDIR=/global/mysql-data1/logs
CHECK=YES
  • Step 8: Enable Sun Cluster HA for MySQL

  1. Start the MySQL server with the “--skip-grant-tables” option:

    Sunfire-x64-241-02# mysqld_safe --defaults-file=/global/mysql-data1/my.cnf --datadir=/global/mysql-data1 --skip-grant-tables --user=mysql &

  2. Change the password for root to “password”:

        Sunfire-x64-241-02# mysql -S /tmp/10.6.241.210.sock -u root
    mysql> UPDATE mysql.user SET Password=PASSWORD('password')
        -> WHERE User='root';
    mysql> FLUSH PRIVILEGES;
  3. Shut down and restart the MySQL server without the “--skip-grant-tables” option on the node where the resource group is online (check with the “scstat -g” command), for example as shown below.
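
    For example, using the socket and option file configured earlier:

    Sunfire-x64-241-02# mysqladmin -S /tmp/10.6.241.210.sock -u root -p shutdown
    Sunfire-x64-241-02# mysqld_safe --defaults-file=/global/mysql-data1/my.cnf --datadir=/global/mysql-data1 --user=mysql &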

  4. Prepare the Sun Cluster specific test database:

    Sunfire-x64-241-02# cd /opt/SUNWscmys
    Sunfire-x64-241-02# ./mysql_register -f /opt/SUNWscmys/util/mysql_config
    sourcing /opt/SUNWscmys/util/mysql_config and create a working copy under /opt/SUNWscmys/util/mysql_config.work
MySQL version 5 detected on 5.10/SC3.2
Add faulmonitor user (fmuser) with password (fmuser) with Process-,Select-, Reload- and Shutdown-privileges to user table for mysql database for host Sunfire-x64-241-02
Add SUPER privilege for fmuser@Sunfire-x64-241-02
Add faulmonitor user (fmuser) with password (fmuser) with Process-,Select-, Reload- and Shutdown-privileges to user table for mysql database for host Sunfire-x64-241-03
Add SUPER privilege for fmuser@Sunfire-x64-241-03
Create test-database sc3_test_database
Grant all privileges to sc3_test_database for faultmonitor-user fmuser for host Sunfire-x64-241-02
Grant all privileges to sc3_test_database for faultmonitor-user fmuser for host Sunfire-x64-241-03
Flush all privileges
Mysql configuration for HA is done
  5. Shut down the MySQL server:

    Sunfire-x64-241-02# mysqladmin -S /tmp/10.6.241.210.sock -u root -p shutdown

  6. Register the MySQL resource:

     Sunfire-x64-241-02# cd /opt/SUNWscmys
     Sunfire-x64-241-02# ./ha_mysql_register -f /opt/SUNWscmys/util/ha_mysql_config
  7. Enable each MySQL resource:

     scswitch -e -j MySQL-has-resource
     scswitch -e -j MySQL-lh-resource
  • Step 9: Verify the Sun Cluster HA for MySQL Configuration

Once all the MySQL resources are created, configured, and online (check with the “scstat -g” command), verify that the MySQL database can successfully fail over to each node configured in the resource group (MySQL-failover-resource-group). Do this by running “scswitch” to switch the MySQL resource group to another node:

# scswitch -z -g MySQL-failover-resource-group -h Sunfire-x64-241-03

If you can successfully migrate the database to each node in the cluster, you now have a highly available MySQL database.
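
As a final check, connect to the database through the logical hostname from the new node and then switch the resource group back (a sketch; mysql's -e flag runs a single statement):

Sunfire-x64-241-03# mysql -h 10.6.241.210 -u root -p -e "SHOW DATABASES;"
Sunfire-x64-241-03# scswitch -z -g MySQL-failover-resource-group -h Sunfire-x64-241-02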

At this point, by following the above nine steps, I have covered the basics of deploying a highly available MySQL database. To deploy the Sun Cluster MySQL data service for maximum availability in a production environment, I highly recommend reading through the Sun Cluster documentation and the MySQL data service guides, and verifying everything in a test environment in advance.