Displaying posts with tag: mysqldump
When restoring a dump table into the MySQL master – you'd better get some sleep

Restoring a dump table into the MySQL master server can lead to serious replication delay. The massive insert commands cause the master and the slaves to use most of their resources for replication. As a result, replication lag may increase dramatically (roughly linearly with the table size). To overcome the replication delay caused by restoring the dump […]

Checked that MySQL backup log lately?

Running a MySQL backup and ensuring that it completed successfully and that the backup files exist is not enough. In my B&R Quiz from Checked your MySQL recovery process recently?, one important step is “Do you review your backup logs EVERY SINGLE day or have tested backup monitoring in place?”

This is what I found when reviewing a backup log for a client today.

mysqldump: Got error: 1142: SELECT,LOCK TABL command denied to user 'root'@'localhost' for table 'cond_instances' when using LOCK TABLES
mysqldump: Got error: 1142: SELECT,LOCK TABL command denied to user 'root'@'localhost' for table 'cond_instances' when using LOCK TABLES

The backup script was completing and the backup files were in place (and listed in the log file); however, these errors were occurring.

Further investigation took less than one minute of …

[Read more]
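A daily check along those lines (grep the backup log for mysqldump errors and shout if any are found) could look like the sketch below; the log path and alert address are made up:

#!/bin/sh
# Hypothetical sketch: fail loudly if the backup log contains mysqldump errors.
LOG=/var/backups/mysql/backup.log        # assumed location of the backup log
if grep -q 'mysqldump: Got error' "$LOG"; then
    # any alerting mechanism will do; mail(1) is just an example
    grep 'mysqldump: Got error' "$LOG" | mail -s "MySQL backup errors on $(hostname)" dba@example.com
    exit 1
fi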
MySQL data backup: going beyond mysqldump

A user on a Linux user group mailing list asked about this, and I was one of the people replying. Re-posting here as I reckon it’s of wider interest.

> [...] tens of gigs of data in MySQL databases.
> Some in memory tables, some MyISAM, a fair bit InnoDB. According to my
> understanding, when one doesn’t have several hours to take a DB
> offline and do dbbackup, there was/is ibbackup from InnoBase.. but now
> that MySQL and InnoBase have both been ‘Oracle Enterprised’, said
> product is now restricted to MySQL Enterprise customers..
>
> Some quick searching has suggested Percona XtraBackup as a potential
> FOSS alternative.
> What backup techniques do people employ around these parts for backups
> of large mixed MySQL data sets where downtime *must* be minimised?
>
> Has your backup plan ever been put to the test?

You should put it to the test regularly, not just when it’s needed. …

[Read more]
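For reference, with a recent Percona XtraBackup (the FOSS alternative mentioned in the question) a hot backup boils down to roughly two steps. The target directory and credentials below are placeholders; treat this as a sketch, not a full procedure:

# take the backup while MySQL keeps serving traffic (hypothetical path and credentials)
xtrabackup --backup --user=backup --password=secret --target-dir=/backups/full
# apply the InnoDB redo log so the copied files are consistent before any restore
xtrabackup --prepare --target-dir=/backups/full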
Setting up Master-Slave Replication with MySQL

Replication enables data from one MySQL server to be replicated on one or more other MySQL servers. Replication is mostly used as a scale-out solution. In such a solution, all writes and updates take place on the master server, while reads take place on one or more slaves. This model is known as master-slave replication, and it is the kind of replication I will be setting up in this post.
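The nuts and bolts of that setup look roughly like the sketch below. Server IDs, host names, credentials and the binlog coordinates are placeholders (the real coordinates come from SHOW MASTER STATUS on the master), so treat this as an outline rather than the full walkthrough from the post:

# on the master: my.cnf needs a unique server-id and the binary log enabled, e.g.
#   [mysqld]
#   server-id = 1
#   log-bin   = mysql-bin
# then create a user for the slave to replicate as (placeholder credentials)
mysql -uroot -p -e "CREATE USER 'repl'@'%' IDENTIFIED BY 'slavepass';
                    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';"

# on the slave: my.cnf needs its own server-id (e.g. 2); then point it at the master
mysql -uroot -p -e "CHANGE MASTER TO MASTER_HOST='master.example.com',
                    MASTER_USER='repl', MASTER_PASSWORD='slavepass',
                    MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107;
                    START SLAVE;"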

Excluding databases from mysqldump

A question that came up during the MySQL track at the UKOUG conference in Birmingham was "Can I exclude only a few databases from mysqldump? Let's say that I have 50 databases, and I want to dump all of them, except a few."
As many know, mysqldump has an option to ignore specific tables. So if you have 1,000 tables in a database, you can tell mysqldump to dump all the tables except a few.
There is no corresponding option to exclude one or more databases.
However, if you know your command line tools, the solution is easy:
First, we get the list of all databases:

mysql -B -N -e 'show databases'
information_schema
employees
five
four
mysql
one
performance_schema
six
test
three
two

-B forces batch mode (no box of dashes around the data), while -N returns the result without the header row.
Now, let's say that we want to exclude databases four, …

[Read more]
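Continuing that idea as a hedged sketch (the exclusion list here, four and five, is just an example, and the output file name is made up): filter the list with grep and hand the surviving names to mysqldump --databases.

# dump everything except a hypothetical exclusion list (four, five) and the system schemas
mysqldump --databases $(mysql -B -N -e 'show databases' \
    | grep -Ev '^(information_schema|performance_schema|four|five)$') > all_but_excluded.sql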
Why is mysqldump converting my tables from InnoDB to MyISAM?

First of all: mysqldump is not converting tables. It is something else. Here is the story:

One of my clients had a case when they were migrating to a new MySQL server: they used mysqldump to export the data from the old server (all InnoDB) and imported it into the new server. When they finished, all the tables had become MyISAM on the new server. So they asked me this question:
“Why mysqldump is converting my tables from InnoDB to MyISAM?”

First of all, we made sure that the tables really were InnoDB on the old server. They were.
Second, we ran “show engines” on the new server:

+------------+---------+----------------------------------------------------------------+--------------+------+------------+
| Engine | Support | Comment | Transactions | XA | Savepoints |
+------------+---------+----------------------------------------------------------------+--------------+------+------------+
| …

[Read more]
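A quick way to run that same check from the shell (a sketch, assuming the usual client login options) is to pull out the Support column that SHOW ENGINES reports for InnoDB:

# prints the Support value for InnoDB: YES, DEFAULT, NO or DISABLED
mysql -B -N -e 'SHOW ENGINES' | awk '$1 == "InnoDB" { print $2 }'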
An argument for not using mysqldump

I have a 5G mysqldump which takes 30 minutes to restore from backup.  That means that when the database reaches 50G, it should take 30x10=5 hours to restore.  Right?  Wrong.

Mysqldump recovery time is not linear.  Bigger tables, or tables with more indexes, will always take more time to restore.

If I restore from a raw backup (LVM snapshot, xtrabackup, innodb hot backup), it is very easy to model how much longer recovery time will take:

Backup is 80G
Copy is at 70MB/s.
10G is already complete.
= ((80-10) * 1024)/70/60 = ~17 minutes

I can tell progress with mysqldump by monitoring the rate at which show global status like 'Handler_write'; increases and comparing it to my knowledge of roughly how many rows are in each table.  But progress != a magic number like "17 minutes".  Not unless I do a lot of complex modeling.

I am …

[Read more]
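One way to watch that Handler_write rate during a restore (a sketch; the 10-second interval is arbitrary and login options are omitted) is mysqladmin's relative extended status:

# -r prints the difference between consecutive samples, -i 10 samples every 10 seconds,
# so the Handler_write line shows roughly how many rows were written in the last interval
mysqladmin -r -i 10 extended-status | grep -w Handler_write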
Setting up slave, stripping indexes and changing engines, on the fly

Warning: the following is quite ugly, but it does the job :)

A while back I needed to create an ARCHIVE slave database from a half-terabyte MyISAM master, and I had space restrictions. I could not dump the db, load it, then drop the keys (ARCHIVE doesn’t support keys apart from a primary key on one column as of 5.1), alter the engine, etc. (that would have taken even longer either way). So an ugly one-liner came to mind, and it worked nicely too.

mysqldump -uuser -ppassword -h127.0.0.1 -P3306 dbname --master-data=1 | sed 's/ENGINE=MyISAM/ENGINE=archive/g' | grep -v '^ UNIQUE KEY' | grep -v '^ KEY' | perl -p0777i -e 's/,\n^\)/\n\)/mg' | mysql -uuser -ppassword -h127.0.0.1 -P3307 dbname

So what is it doing?
Broken down:
mysqldump -uuser -ppassword -h127.0.0.1 -P3306 dbname --master-data=1 –> extract the database with the master position
sed …

[Read more]
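For readability, here is the same one-liner from the post re-wrapped one stage per line; the commands are copied verbatim, and the comments are simply my reading of what each stage does:

mysqldump -uuser -ppassword -h127.0.0.1 -P3306 dbname --master-data=1 |  # dump schema and data plus the master binlog position
sed 's/ENGINE=MyISAM/ENGINE=archive/g' |                                 # rewrite CREATE TABLE to use the ARCHIVE engine
grep -v '^ UNIQUE KEY' |                                                 # strip UNIQUE KEY definitions (not supported by ARCHIVE)
grep -v '^ KEY' |                                                        # strip the remaining secondary KEY definitions
perl -p0777i -e 's/,\n^\)/\n\)/mg' |                                     # remove the comma left dangling before the closing ")"
mysql -uuser -ppassword -h127.0.0.1 -P3307 dbname                        # replay the stream straight into the server on port 3307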
Easy MySQL: how to backup databases to a remote machine

Here’s a simple answer to a simple question. “How do I run a backup of MySQL to another machine without writing to the local server’s filesystem?” – this is especially useful if you are running out of space on the local server and cannot write a temporary file to the filesystem during backups.

Method one – this writes a remote file.
mysqldump [options] [db_name|--all-databases] | gzip -c | ssh user@host.com "cat > /path/to/new/file.sql.gz"

Method two – this writes directly into a remote mysql server
mysqldump [options] [db_name|--all-databases] | mysql --host=[remote host] --user=root --password=[pass] [db_name]

mysqldump each object separately

As a continuation of a previous blog post last week, and inspired by Kedar, I have created a small script to export tables, stored procedures, functions and views into their respective files. It works for multiple databases (you can specify a list of databases too), and although things like events and triggers are still missing, they are easily added.

It is especially useful to dump stored procedures separately, since that functionality is lacking in mysqldump.

I placed the script on MySQL Forge for anybody to use, give feedback on, and possibly enhance.
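The general shape of such a script (a hedged sketch with hypothetical names, not the script published on Forge) is a loop over SHOW TABLES plus a separate routines-only dump:

# hypothetical sketch: one file per table, plus procedures/functions dumped on their own
DB=employees                                   # database name is just an example
for T in $(mysql -B -N -e 'SHOW TABLES' "$DB"); do
    mysqldump "$DB" "$T" > "${DB}.${T}.sql"
done
# stored procedures and functions only: no table data, no table definitions
mysqldump --routines --no-create-info --no-data --no-create-db --skip-triggers "$DB" > "${DB}.routines.sql"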
