
Displaying posts with tag: mysqldump

Introducing backup locks in Percona Server
+1 Vote Up -0Vote Down

TL;DR version: The backup locks feature introduced in Percona Server 5.6.16-64.0 is a lightweight alternative to FLUSH TABLES WITH READ LOCK and can be used to take both physical and logical backups with less downtime on busy servers. To employ the feature with mysqldump, use mysqldump --lock-for-backup --single-transaction. The next release of Percona XtraBackup will also be using backup locks automatically if the target server supports the feature.
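In practice that looks roughly like this (a sketch against Percona Server 5.6.16+, using the statements and options named above; file names are illustrative):

# Logical backup using backup locks instead of FLUSH TABLES WITH READ LOCK
$ mysqldump --lock-for-backup --single-transaction --all-databases > backup.sql

# Or take the lock yourself around a manual physical copy
mysql> LOCK TABLES FOR BACKUP;   -- blocks DDL and writes to non-InnoDB tables, InnoDB DML keeps running
# ... copy the data files ...
mysql> UNLOCK TABLES;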

Now on to the gory details, but let’s start with some history.

In the beginning…

In the beginning there was FLUSH TABLES, and users messed with their MyISAM tables under a live server and were not ashamed. Users could do nice things like:

mysql> FLUSH TABLES;
# execute myisamchk, myisampack, backup / restore some


  [Read more...]
Tricks with mysqldump
+0 Vote Up -0Vote Down

More than once I have wished that mysqldump had never been created. In many backup scenarios it is better to use other tools such as xtrabackup. However, there are still times when I end up using mysqldump. For one thing, it is included with the MySQL packages. It's "always there". Secondly, it sometimes really is a good fit for the task. And occasionally, with a bit of work, you can make it do neat tricks. 

I'm going to cover a few of the tricks I have used over the years. Hopefully you will find them helpful. In some cases one of these might even save your bacon.

Trick One:

One of the annoying issues with mysqldump is its lack of flexibility. It's very easy to dump all databases (--all-databases option), or one or more databases (--databases). Even dumping a single table isn't too difficult. But what about all the tables that start with "a"? You

  [Read more...]
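For the "all tables that start with a" case teased above, one possible trick (a sketch, not necessarily the one the post goes on to describe; the schema name mydb is made up) is to let information_schema build the table list for you:

$ mysqldump mydb $(mysql -N -e "SELECT table_name FROM information_schema.tables \
      WHERE table_schema='mydb' AND table_name LIKE 'a%'") > a_tables.sql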
Migrating between MySQL schemas with Percona Xtrabackup
+0 Vote Up -0Vote Down

Recently, I was working with a client that asked about using Percona Xtrabackup to take a snapshot of a particular MySQL schema and then reload it with a different schema name on the same server.  It caught me off guard because I’d never really thought about it – typically, I’ve used Xtrabackup simply to clone a server (for replication or migration) or migrate an existing schema to a new server.  However, given the import/export functionality of Xtrabackup combined with Percona

  [Read more...]
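The rough shape of such a per-table export/import (a sketch; it assumes Percona Server with innodb_file_per_table=1 and the XtraDB import option enabled, or MySQL 5.6 transportable tablespaces, and all names and paths are illustrative):

# On the source, after taking the backup: prepare it and generate per-table export files
$ innobackupex --apply-log --export /path/to/backup

# On the destination, recreate each table under the new schema name, then swap in the tablespace
mysql> CREATE TABLE newschema.t (id INT PRIMARY KEY) ENGINE=InnoDB;   -- must match the source table definition
mysql> ALTER TABLE newschema.t DISCARD TABLESPACE;
$ cp /path/to/backup/olddb/t.ibd /path/to/backup/olddb/t.exp /var/lib/mysql/newschema/
mysql> ALTER TABLE newschema.t IMPORT TABLESPACE;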
mysqldump now safe for InnoDB!
Employee +6 Vote Up -0Vote Down

It's now safe to back up InnoDB with mysqldump

Before MySQL 5.6, running mysqldump to make a backup of InnoDB tables could cause your backup to 'lose' some data. The problem is described in our manual here.

In the latest MySQL 5.6 this is no longer a problem: you no longer risk 'losing' data when using mysqldump together with DDL statements on your InnoDB tables. If you are interested in metadata locking (MDL) you can read more about MDL here.
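A typical invocation that benefits from this (a minimal sketch, assuming credentials come from an option file):

# Consistent InnoDB dump; in 5.6 concurrent DDL is coordinated via metadata locks instead of silently breaking the snapshot
$ mysqldump --single-transaction --all-databases > backup.sql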

To test this we need to create a few tables and also look at which order mysqldump processes tables.

mysql> CREATE DATABASE ted;
mysql> USE ted;
mysql> CREATE TABLE `a` (`i`

  [Read more...]
MySQL alternative Percona Server 5.1.68-14.6 now available
+1 Vote Up -0Vote Down

Percona Server for MySQL version 5.1.68-14.6

Percona is glad to announce the release of Percona Server 5.1.68-14.6 on April 19, 2013 (downloads are available here and from the Percona Software Repositories). Based on MySQL 5.1.68, including all the bug fixes in it, Percona

  [Read more...]
MyISAM in a MySQL 5.6 InnoDB Only instance
+1 Vote Up -0Vote Down
With MySQL 5.5 the default storage engine was changed to InnoDB. This was a good step, as most users expected MySQL to support transactions, row-level locking and all the other InnoDB features, but with 5.1 they sometimes forgot to choose the right storage engine. As most databases don't have multiple storage engines, and many MySQL users changed the default storage engine to InnoDB anyway, this made the switch to MySQL easier and the default behaviour more in line with what people expect from a relational database.

Changing the storage engine can of course be done on a per table or per instance basis (default-storage-engine in my.cnf). The temporary tables created with 'CREATE TEMPORARY TABLE ...' should not be forgotten. The performance of InnoDB or MyISAM for temporary tables can have quite some impact, especially with slow storage, a buffer pool which is too

  [Read more...]
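For reference, the knobs mentioned in this excerpt look roughly like this in a 5.6 my.cnf (a sketch; default_tmp_storage_engine exists as of MySQL 5.6.3):

[mysqld]
default-storage-engine     = InnoDB
default_tmp_storage_engine = InnoDB   # engine used for CREATE TEMPORARY TABLE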
Unexpected problem with triggers and mysqldump
+2 Vote Up -1Vote Down

Some time ago, I had to convert all tables of a database from MyISAM to InnoDB on a new server. The plan was to take a logical dump on the master, exporting separately the schema and the data, then edit the CREATE TABLE statements to ensure all tables are created with InnoDB, and reload everything on the new server.

Quite easy, isn’t it? Of course I wanted to run a test first before performing the actions on a live system.

So let’s play with the sakila database.

mysqldump has options to export schema and data separately, let’s use them:

# Export schema
$ mysqldump --no-data sakila > schema.sql

# Export data
$ mysqldump --no-create-info sakila > data.sql

Just to check that everything is fine, let’s reimport the data in a new database:

mysql> CREATE
  [Read more...]
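The "unexpected problem" the title hints at is most likely (my guess, not a quote from the post) that mysqldump includes trigger definitions by default even in a data-only dump, so with the two commands above the triggers end up in both files and the second import complains. Keeping them only in the schema dump would look like:

# Data-only dump without trigger definitions (they are already in schema.sql)
$ mysqldump --no-create-info --skip-triggers sakila > data.sql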
Automating Backups
Employee +0 Vote Up -0Vote Down

If you need to automate backups, you might wonder about the different techniques available to you.

With regards to scheduling backups using built-in features of MySQL, you have two main options:

  • Either run mysqldump (or mysqlbackup if you have an Enterprise licence) from an operating system scheduler, for example in Linux using "cron" or in Windows using the "Task Scheduler". This is the most commonly used option.
  • Alternatively, use the Event Scheduler to perform a series of  SELECT ... INTO OUTFILE ... commands, one for each table you need to back up. This is a less commonly used option, but you might still find it useful.

Scheduling mysqlbackup with cron

mysqldump is a client program, so when you run it, you run it from a shell


  [Read more...]
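For the cron route described above, a minimal sketch (schedule, paths and options are illustrative; the password belongs in a ~/.my.cnf or login path, not in the crontab):

# m h dom mon dow  command   (full dump every night at 02:30)
30 2 * * *  /usr/bin/mysqldump --single-transaction --all-databases | gzip > /backups/all-$(date +\%F).sql.gz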
5 Things You Overlooked with MySQL Dumps
+1 Vote Up -1Vote Down

Read the original article at 5 Things You Overlooked with MySQL Dumps

1. Point In Time Recovery If you’ve never done point-in-time recovery, it’s time to take a second look. With a standard mysqldump you restore your database to the time when the backup happened. If you only do backups once a day, you can lose as much as 24 hours of data. Enter point-in-time recovery, [...]

For more articles like these go to Sean Hull's Scalable Startups
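The bare-bones flow of point-in-time recovery (a sketch; it assumes binary logging was already enabled, and the positions, date and file names are made up):

# 1. Restore the most recent full dump
$ mysql < backup.sql
# 2. Replay the binary logs from the backup position up to just before the bad event
$ mysqlbinlog --start-position=107 --stop-datetime="2014-04-01 09:59:00" mysql-bin.000042 | mysql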

Related posts:
  • Ten things to remember about MySQL backups
  • Easy MySQL replication with hotbackups
  •   [Read more...]
    Encrypting your MySQL backups and more
    +4 Vote Up -0Vote Down

    Assuming you have a backup and recovery strategy in place, how secure is your data? Does a hacker need to obtain access to your production system bypassing all the appropriate security protection you have in place, or just the unencrypted data on the backup server?
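    Whatever tool you end up using, the principle can be illustrated with standard utilities (a generic sketch, not the zNcrypt setup described below; key handling is deliberately simplified):

    $ mysqldump --single-transaction --all-databases | gzip \
          | openssl enc -aes-256-cbc -salt -pass file:/root/.backup_pass > backup.sql.gz.enc
    # and to restore:
    $ openssl enc -d -aes-256-cbc -pass file:/root/.backup_pass < backup.sql.gz.enc | gunzip | mysql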

    Encryption with zNcrypt

    The following steps demonstrate how I set up a mysqldump encrypted backup with zNcrypt, a product from Gazzang. You can request a free trial evaluation of the software from http://gazzang.com/request-a-trial. I asked for an AWS EC2 instance, and was able to provide my bootstrap instructions for OS and MySQL installation. Following installation and configuration, the first step is to verify the zNcrypt process is running:

    $ sudo ezncrypt-service status
      ezncrypt |
      [Read more...]
    What compression do you use?
    +4 Vote Up -0Vote Down

    The following is an evaluation of various compression utilities that I tested when reviewing the various options for MySQL backup strategies. The overall winner in performance was pigz, a parallel implementation of gzip. If you use gzip today as most organizations do, this one change will improve your backup compression times.
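    Swapping it in is essentially a drop-in change, since pigz produces ordinary gzip output (a sketch; -p sets the number of threads and mydb is illustrative):

    $ mysqldump --single-transaction mydb | pigz -p 4 > mydb.sql.gz
    $ pigz -d mydb.sql.gz        # plain gunzip works too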

    Details of the test:

    • The database is 5.4GB of data
    • mysqldump produces a backup file of 2.9GB
    • The server is an AWS t1.xlarge with a dedicated EBS volume for backups

    The following testing was performed to compare the time and % compression savings of various available open source products. This was not an exhaustive test with multiple iterations and different types of data files.

    Compression Utility | Compression Time (sec) | Decompression Time (sec) | New Size (% Saving)
    lzo



      [Read more...]
    How to exclude a database from your dump with ZRM (MySQL Community help needed)
    +1 Vote Up -0Vote Down

    Last month, Ronald Bradford, Giuseppe Maxia and Mark Leith spoke about how to simulate a mysqldump --ignore-database option.
    This mysqldump option doesn’t exist, and these three guys have given us various helpful solutions.

    But for those of us who use ZRM community to make backups with mysqldump, the --exclude-pattern option seems to do the job:

    --exclude-pattern "pattern" 
    All databases or tables that match the pattern are not backed up. 
    If --all-databases or --databases are

      [Read more...]
    How to rename a database in MySQL?
    +0 Vote Up -0Vote Down

    Today we will talk about how you can easily rename a database in MySQL.
    In MySQL 5.1.7 a command named RENAME DATABASE was added to do this job:

    RENAME {DATABASE | SCHEMA} db_name TO new_db_name;

    But it was removed in MySQL 5.1.23 because it could result in data loss, as you can see in this link to the MySQL documentation.

    So, let's see some ways in which we can do this:

    1. Using the RENAME TABLE command

    As long as two databases are on the same file system, you can use RENAME TABLE to move a table from one database to another:

    RENAME TABLE current_db.tbl_name TO other_db.tbl_name;
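    To apply that to a whole schema, a small shell loop can issue one RENAME TABLE per table (a sketch; it assumes the target database already exists, and views and triggers need separate handling):

    $ mysql -N -e "SHOW TABLES FROM current_db" | while read t; do
          mysql -e "RENAME TABLE current_db.\`$t\` TO other_db.\`$t\`"
      done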


      [Read more...]
    Implementing mysqldump --ignore-database
    Employee +4 Vote Up -0Vote Down

    Ronald Bradford and Giuseppe Maxia (hey guys!) wrote about different ways to ignore a database when using mysqldump --all-databases over the past couple of days.

    Whilst the solutions are interesting, I wondered why not attack it from the proper approach, and add the option to mysqldump itself? Honestly, the patch is trivial, and doing anything against INFORMATION_SCHEMA with lots of databases and tables … well let’s just say … group_concat_max_len is the least of your worries..

    15 minutes later I had a working solution:

    To my surprise, I also

      [Read more...]
    A few hacks to simulate mysqldump --ignore-database
    +4 Vote Up -0Vote Down

    A few days ago, Ronald Bradford asked for a mysqldump --ignore-database option.

    As a workaround, he proposes:
    mysqldump --databases `mysql --skip-column-names \
    -e "SELECT GROUP_CONCAT(schema_name SEPARATOR ' ') \
    FROM information_schema.schemata WHERE schema_name \
    NOT IN ('mysql','performance_schema','information_schema');" \
    ` > /mysql/backup/rds2.sql

    It's a clever solution, but unfortunately it only works if you have a handful of schemas. If your server happens to have several dozen (or hundreds, or thousands) of schemas (which is where you need this option most), then the output will be truncated to the length of group_concat_max_len (1024 by default).
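    One way around that limit, sketched here purely as an illustration (not necessarily one of the two approaches the post goes on to describe), is to build the list without GROUP_CONCAT at all:

    mysqldump --databases $(mysql --skip-column-names -e "SHOW DATABASES" \
        | grep -Ev '^(mysql|performance_schema|information_schema)$') > /mysql/backup/all.sql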

    There are two

      [Read more...]
    Vote for MySQL[plus] awards 2011 !
    +5 Vote Up -0Vote Down

    First of all, I wish you a happy new year.
    Many things happened last year; it was really exciting to be involved in the MySQL ecosystem.
    I hope this enthusiasm will keep growing this year, and that is up to you!

    To start the year, I propose the MySQL[plus] Awards 2011.
    It will only take 5 minutes to fill out these polls.
    Answer with your heart first and then with your experience with some of these tools or services.

    Polls will close on January 31, so vote now!
    For “other” answers, please leave me a comment with details.

    Don’t hesitate to submit proposals for tools or services in the comments.

      [Read more...]
    My slides of MySQL Meetup Viadeo / LeMUG Paris
    +1 Vote Up -0Vote Down

    I was glad to present how to schedule and monitor mysqldump with ZRM community last week in Paris as part of the MySQL Meetup Viadeo / LeMUG

    You can find my slides below, enjoy!

    MYSQLDUMP & ZRM COMMUNITY (EN), slides by Cédric PEINTRE. Thanks to Olivier and all the Viadeo team in Paris for this event.
    And, of course, thanks to all attendees. We need more events like that in France!
    Related Posts:
      [Read more...]
    Useful sed / awk liners for MySQL
    +1 Vote Up -0Vote Down

    Listing some useful sed / awk liners to use with MySQL. I use these on occasion.

    sed, awk & grep have many overlapping features. Some simple tasks can be performed by any of them. For example, stripping empty lines can be done with any of the following:

    grep '.'
    awk '/./'
    sed '/./!d'
    grep -v '^$'
    awk '!/^$/'
    sed '/^$/d'

    It's a matter of taste & convention which tool and variation to use. So for any script I suggest, there may be many variations, possibly cleaner, shorter; feel free to comment.

    mysqldump

    The output of mysqldump is particularly useful when one wishes to make transformations on data or metadata.
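    For instance, a couple of throwaway transformations on a dump file (illustrative one-liners in the same spirit, not taken from the post):

    # Change the storage engine of every table in the dump
    sed 's/ENGINE=MyISAM/ENGINE=InnoDB/g' dump.sql > dump_innodb.sql
    # List the tables contained in a dump
    awk -F'`' '/^CREATE TABLE/ {print $2}' dump.sql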

    • Convert
      [Read more...]
    Silent MyISAM Table Definition Changes and mysqldump
    +2 Vote Up -0Vote Down

    The other day while trying to move a schema from one MySQL server to another, I encountered a very odd issue. The schema to be moved contained both MyISAM and InnoDB tables, so the only option I had was to dump the schema using mysqldump on the source server and import it on the destination server. The dump on the source server went fine with absolutely no issues but it failed to import into the second server, and the error message was:

    Can't create/write to file '/disk1/activity.MYI' (Errcode: 2)


    This was an extremely odd message as the data directory on the destination server was properly setup in terms of ownership and permission. The source and destination MySQL servers have been running without issues for months. Prior to the error, four tables in the dump file were imported into the destination server without any issues whatsoever.

      [Read more...]
    As restoring a dump table into the MySQL master – you better get some sleep
    +6 Vote Up -0Vote Down
    Restoring a dump table into the MySQL master server can lead to serious replication delay. The massive insert commands cause the master and slaves to use most of their resources for replication. As a result, replication lag may increase dramatically (linearly with the table size). To overcome the replication delay, caused by restoring the dump [...]
    Checked that MySQL backup log lately?
    +4 Vote Up -1Vote Down

    Running a MySQL backup and ensuring it completed successfully and backup files exist is not enough. In my B&R Quiz from Checked your MySQL recovery process recently? one important step is “Do you review your backup logs EVERY SINGLE day or have tested backup monitoring in place?”

    This is what I found when reviewing a backup log for a client today.

    mysqldump: Got error: 1142: SELECT,LOCK TABL command denied to user 'root'@'localhost' for table 'cond_instances' when using LOCK TABLES
    mysqldump: Got error: 1142: SELECT,LOCK TABL command denied to user 'root'@'localhost' for table 'cond_instances' when using LOCK TABLES
    

    The backup script was completing, backup files were in place (and are listed in the log file) however these

      [Read more...]
    MySQL data backup: going beyond mysqldump
    +1 Vote Up -0Vote Down
    A user on a linux user group mailing list asked about this, and I was one of the people replying. Re-posting here as I reckon it’s of wider interest.

    > [...] tens of gigs of data in MySQL databases. Some in memory tables, some MyISAM, a fair bit InnoDB. According to my understanding, when one doesn’t have several hours to take a DB offline and do dbbackup, there was/is ibbackup from InnoBase.. but now that MySQL and InnoBase have both been ‘Oracle Enterprised’, said product is now restricted to MySQL Enterprise customers..
    >
    > Some quick searching has suggested Percona XtraBackup as a potential FOSS alternative. What backup techniques do people employ around these parts for backups of large mixed MySQL data sets where downtime *must* be minimised?
    >
    > Has your backup plan ever been put to the test?

    You  [Read more...]
    Setting up Master-Slave Replication with MySQL
    +1 Vote Up -1Vote Down
    Replication enables data from one MySQL server to be replicated on one or more other MySQL servers. Replication is mostly used as a scale-out solution. In such a solution, all writes and updates take place on the master server, while reads take place on one or more slaves. This model is known as master-slave replication, and it is the kind of replication that I will be setting up in this post.
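    The moving parts, reduced to a sketch (server IDs, host names and credentials are illustrative; the post itself walks through the full procedure):

    # master my.cnf
    [mysqld]
    server-id = 1
    log-bin   = mysql-bin

    # slave my.cnf
    [mysqld]
    server-id = 2

    # on the master, create a replication account:
    mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret';

    # on the slave, after loading a dump taken with mysqldump --master-data:
    mysql> CHANGE MASTER TO MASTER_HOST='master.example.com', MASTER_USER='repl', MASTER_PASSWORD='secret';
    mysql> START SLAVE;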
    Upgrading to Barracuda & getting rid of huge ibdata1 file
    +2 Vote Up -0Vote Down

    Some of this is old stuff, but more people are now converting to the InnoDB plugin, so as to enjoy table compression and performance boosts. The same holds for people converting to Percona’s XtraDB. The InnoDB plugin requires innodb_file_per_table. No more shared tablespace file.

    So your ibdata1 file is some 150GB, and it won’t reduce. Really, it won’t reduce. You set innodb_file_per_table=1, do ALTER TABLE t ENGINE=InnoDB (optionally ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8), and you get all your tables in file-per-table .ibd files.
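    Spelled out, with the plugin-era setting that compression additionally requires (a sketch; KEY_BLOCK_SIZE=8 is just the value mentioned above):

    # my.cnf
    [mysqld]
    innodb_file_per_table = 1
    innodb_file_format    = Barracuda   # needed for ROW_FORMAT=COMPRESSED

    mysql> ALTER TABLE t ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;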

    But the original ibdata1 file is still there. It has to be there, don’t delete it! It contains more than your old data.

    InnoDB tablespace files never reduce in size,

      [Read more...]
    Excluding databases from mysqldump
    +7 Vote Up -0Vote Down
    A question that came up during the MySQL track at the UKOUG conference in Birmingham was "Can I exclude only a few databases from mysqldump? Let's say that I have 50 databases, and I want to dump all of them, except a few."
    As many know, mysqldump has an option to ignore specific tables. So if you have 1,000 tables in a database, you can tell mysqldump to dump all the tables except a few.
    There is no corresponding option to exclude one or more databases.
    However, if you know your command line tools, the solution is easy:
    First, we get the list of all databases:
    mysql -B -N -e 'show databases'
    information_schema
    employees
    five
    four
    mysql
    one
    performance_schema
    six
    test
    three
    two
    -B forces batch mode (no box of dashes around the data), while -N gets the result without the

      [Read more...]
    Why mysqldump is converting my tables from InnoDB to MyISAM?
    +5 Vote Up -0Vote Down

    First of all: mysqldump is not converting tables. It is something else. Here is the story:

    One of my clients had a case where they were migrating to a new MySQL server: they used mysqldump to export data from the old server (all InnoDB) and imported it into the new server. When they had finished, all the tables had become MyISAM on the new server. So they asked me this question:
    “Why mysqldump is converting my tables from InnoDB to MyISAM?”

    First of all, we made sure that the tables were InnoDB on the old server. They were.
    Second, we ran SHOW ENGINES on the new server:

    +------------+---------+----------------------------------------------------------------+--------------+------+------------+
    | Engine | Support | Comment

      [Read more...]
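    A quick way to check for the usual culprit in this scenario (a sketch of my own guess at where the excerpt is heading, not a quote from it):

    mysql> SHOW ENGINES;        -- is InnoDB missing or listed as DISABLED on the new server?
    mysql> SELECT @@sql_mode;   -- without NO_ENGINE_SUBSTITUTION, CREATE TABLE ... ENGINE=InnoDB
                                -- silently falls back to the default engine instead of failing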
    An argument for using mysqldump
    +3 Vote Up -0Vote Down

    I fully agree with Morgan’s An argument for not using mysqldump. This post does not come to contradict it, but rather to shed a positive light on mysqldump.

    I usually prefer an LVM snapshot based backup, or using XtraBackup. And, with databases as large as dozens of GB and above, I consider mysqldump to be a poor alternative. Poor in runtime, poor in overhead while taking the backup.

    However once in a while I get to be reminded that mysqldump just works.

    As a recent example, I had a server which was killed after an ALTER TABLE statement hung forever (the table had already been ALTERed, but the old copy was never dropped). The old table data still hung around in the file system, but was not recognized by InnoDB. Trying out DISCARD TABLESPACE

      [Read more...]
    An argument for not using mysqldump
    +2 Vote Up -0Vote Down

    I have a 5G mysqldump which takes 30 minutes to restore from backup.  That means that when the database reaches 50G, it should take 30x10=5 hours to restore.  Right?  Wrong.

    Mysqldump recovery time is not linear.  Bigger tables, or tables with more indexes will always take more time to restore.

    If I restore from a raw backup (LVM snapshot, xtrabackup, innodb hot backup), it is very easy to model how much longer recovery time will take:

    Backup is 80G
    Copy is at 70MB/s.
    10G is already complete.
    = ((80-10) * 1024)/70/60 = ~17 minutes

    I can tell progress with mysqldump by monitoring the rate at which show global status like 'Handler_write'; increases and compare it to my knowledge of about how many rows are in each table.  But progress != a magic number like "17

      [Read more...]
    Setting up slave, stripping indexes and changing engines, on the fly
    +4 Vote Up -0Vote Down

    Warning, the following is quite ugly, but does the job :)

    A while back I needed to create an archive slave database from a half a terabyte myisam master and had space restrictions. I could not dump the db, load it, then drop keys (archive doesn’t support keys apart from a primary key on one column as of 5.1), alter engine etc (would take even longer than it took either way). So an ugly single liner came to mind and worked nicely too.

    mysqldump -uuser -ppassword -h127.0.0.1 -P3306 dbname --master-data=1 | sed 's/ENGINE=MyISAM/ENGINE=archive/g' | grep -v '^ UNIQUE KEY' | grep -v '^ KEY' | perl -p0777i -e 's/,\n^\)/\n\)/mg' | mysql -uuser -ppassword -h127.0.0.1 -P3307 dbname

    So what is it doing?
    Broken down:
    mysqldump -uuser -ppassword -h127.0.0.1 -P3306 dbname

      [Read more...]
    Easy MySQL: how to backup databases to a remote machine
    +3 Vote Up -1Vote Down

    Here’s a simple answer to a simple question. “How do I run a backup of MySQL to another machine without writing to the local server’s filesystem?” – this is especially useful if you are running out of space on the local server and cannot write a temporary file to the filesystem during backups.

    Method one – this writes a remote file.
    mysqldump [options] [db_name|--all-databases]| gzip -c | ssh user@host.com "cat > /path/to/new/file.sql.gz"

    Method two – this writes directly into a remote mysql server
    mysqldump [options] [db_name|--all-databases]| mysql --host=[remote host] --user=root --password=[pass] [db_name]

