RDS Migration from 5.5 to 5.6 with mysqldump

Amazon recently announced RDS support for MySQL 5.6; unfortunately, a direct upgrade from lower versions is not yet supported. On a recent migration, running mysqldump flat out would have meant 6+ hours of downtime. How did we cut it down to 1h45m? Simple: dump per table and pipe each dump directly into the new 5.6 instance in parallel, using Percona Server's mysqldump utility to take advantage of --innodb-optimize-keys.
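The core of the approach is just a per-table dump piped straight into the destination, something like the sketch below (the database and table names are placeholders; the hosts and flags match the scripts that follow):

# Minimal sketch of the per-table pipe: dump one table with Percona Server's
# mysqldump so --innodb-optimize-keys defers secondary index creation,
# and stream it straight into the 5.6 instance. "db1 mytable" is a placeholder.
./ps/bin/mysqldump --order-by-primary --innodb-optimize-keys \
    -hsource-55.us-east-1.rds.amazonaws.com db1 mytable \
  | mysql -hdest-56.us-east-1.rds.amazonaws.com db1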

Here’s the base script we used – of course, YMMV, and make sure to optimize the destination instance as well (a sketch of that follows the worker script below)!

#!/bin/bash
# export-run.sh
# This is the wrapper script which builds up the list of tables to split into $parallel parts and calls export-tables.sh

parallel=6
dblist="db1 db2 db3"
smysql="mysql -hsource-55.us-east-1.rds.amazonaws.com"
dmysql="mysql -hdest-56.us-east-1.rds.amazonaws.com"

dbin=$(echo $dblist|sed "s/ /','/g")

_echo() {
   echo "$(date +%Y-%m-%d_%H_%M_%S) INFO $1"
}

# List tables, split into $parallel parts that we will dump and import later
rm -rf table*
$smysql information_schema -BNe "SELECT CONCAT(TABLE_SCHEMA,'.',TABLE_NAME) FROM TABLES WHERE TABLE_SCHEMA IN ('${dbin}')" > tables.out
tcount=$(cat tables.out|wc -l)
maxlne=$((($tcount/$parallel)+1))
split -l$maxlne -a1 -d tables.out tablen
_echo "Will be dumping $tcount tables, $maxlne per thread"

_echo "Cleaning up databases on destination"
# By default this only echoes the DROP/CREATE statements as a dry run -
# remove the echo if you really want to drop and recreate the databases on the destination first
for d in $dblist; do echo "$dmysql -e 'DROP DATABASE IF EXISTS $d; CREATE DATABASE $d'"; done

_echo "Starting parallel dump and imports"
> export.log
# Launch one detached worker per table-list file; they all run concurrently
# and log their progress to export.log
for t in $(ls|grep tablen); do ( ( bash export-tables.sh $t | tee -a export.log ) & ); done

After the table list is split into n files, the wrapper forks one worker per file; the script below does the actual dump and import work.

#!/bin/bash
# export-tables.sh

smysql="-hsource-55.us-east-1.rds.amazonaws.com"
dmysql="mysql -hdest-56.us-east-1.rds.amazonaws.com"
tables=$1

_echo() {
   echo "$(date +%Y-%m-%d_%H_%M_%S) INFO $1"
}

for l in $(cat $tables); do
  d=$(echo $l|cut -d'.' -f1)
  t=$(echo $l|cut -d'.' -f2)
  _echo "Processing $d.$t"
  cmd="./ps/bin/mysqldump --order-by-primary --innodb-optimize-keys $smysql $d $t"
  $cmd | $dmysql $d
  _echo "... completed $d.$t"
done
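As mentioned earlier, tune the destination instance for the bulk import before kicking this off. On RDS that means the DB parameter group rather than SET GLOBAL; here is a hedged sketch using the AWS CLI, where the parameter group name is hypothetical and the durability setting should be reverted once the import is done:

# Assumes a custom parameter group (name here is hypothetical) is already
# attached to the destination instance. Relaxing redo log flushing speeds up
# the bulk load; revert it after the migration.
aws rds modify-db-parameter-group \
  --db-parameter-group-name dest-56-params \
  --parameters "ParameterName=innodb_flush_log_at_trx_commit,ParameterValue=2,ApplyMethod=immediate"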

And as a bonus, here is how you can monitor progress:

while true; do echo "$(date +%Y-%m-%d_%H_%M_%S) $(cat export.log |grep completed|wc -l)/$(cat tables.out|wc -l)"; sleep 5; done
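Since the workers are detached from the wrapper, a plain wait will not block on them; one simple way to block until everything has finished (assuming nothing else on the host matches the pattern) is:

# Block until no export-tables.sh worker is left running
while pgrep -f export-tables.sh > /dev/null; do sleep 10; done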

This is exactly how we want mysqldump to be, Morgan!
