Displaying posts with tag: Backups
Percona XtraBackup 1.5-Beta

Percona XtraBackup 1.5-Beta is now available for download.

This release adds new functionality to Percona XtraBackup 1.4, the current general availability version of XtraBackup.

This is a beta release.

Functionality Added or Changed

  • Support for MySQL 5.5 databases has been implemented. (Yasufumi Kinoshita)
  • XtraBackup can now be built from the MySQL 5.1.52, MySQL 5.5.7, or Percona Server 5.1.53-12 code bases (fixes bug #683507). (Alexey Kopytov)
  • The program is now distributed as three separate binaries: 
    • xtrabackup - for use with Percona Server with the built-in InnoDB plugin
    • xtrabackup_51 - for use with MySQL 5.0 & 5.1 with built-in InnoDB
    • xtrabackup_55 - for use with MySQL 5.5 (this binary is not provided for the …
[Read more]
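To make the choice of binary concrete, here is a rough sketch of selecting one of them through the innobackupex wrapper's --ibbackup option; the backup directory and credentials below are placeholders, not part of the release notes.

# Point innobackupex at the binary that matches the server, e.g. xtrabackup_55 for MySQL 5.5.
innobackupex --ibbackup=xtrabackup_55 --user=root --password=[pass] /data/backups/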
Percona XtraBackup 1.4

Percona XtraBackup 1.4 is now available for download.

Version 1.4 fixes problems related to incremental backups. If you do incremental backups, it's strongly recommended that you upgrade to this release.

Functionality Added or Changed

  • Incremental backups have changed and now allow the restoration of full backups containing certain rollback transactions that previously caused problems. Please see Preparing the Backups and the --apply-log-only option. (From innobackupex, the --redo-only option should be used). (Yasufumi …
[Read more]
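As a rough illustration of the prepare workflow referred to above, using innobackupex with a single incremental backup; the directory names are placeholders, not from the release notes.

# Prepare the base backup without rolling back uncommitted transactions,
# apply the incremental on top of it, then run the final prepare without --redo-only.
innobackupex --apply-log --redo-only /backups/full
innobackupex --apply-log --redo-only /backups/full --incremental-dir=/backups/inc1
innobackupex --apply-log /backups/full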
Why I recommend switching over from MyISAM to InnoDB!

Although MyISAM has been the default storage engine for MySQL, that is about to change with the release of MySQL Server 5.5. Beyond that, more and more people are moving to the InnoDB storage engine, and the reason is its tremendous benefits: not only performance, concurrency, ACID transactions, and foreign key constraints, but also the way it helps the DBA with hot backup support, automatic crash recovery, and protection from the kinds of data inconsistencies that can prove to be a pain with MyISAM. In this article I try to hammer out the reasons why you should move to InnoDB instead of MyISAM.

Lost innodb tables, xfs and binary grep

Before I start the story about the data recovery case I worked on yesterday, here's a quick tip: having a database backup does not mean you can restore from it. Always verify that your backup can actually be used to restore the database! If not automatically, do this manually, at least once a month. No, seriously - in most of the recovery cases I have worked on, the customers did have some sort of backup, but it just wasn't working, wasn't complete, or had some other problem. Someone set it up and never bothered to check whether it still worked after a while.
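In that spirit, here is a minimal verification sketch; the host, credentials, scratch schema, and table names below are assumptions for illustration only, not from the post.

# Restore the latest dump into a scratch instance, then run a basic sanity check on a few tables.
gunzip < /backups/latest.sql.gz | mysql --host=[scratch host] --user=[user] --password=[pass] verify_db
mysql --host=[scratch host] --user=[user] --password=[pass] -e "CHECK TABLE verify_db.orders, verify_db.customers"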

Anyway, this post is not really about backups, but rather about a few interesting things I learned during this last recovery case.

First, some facts about the system and how data was lost:

  • MySQL had a dedicated partition on XFS file system
  • Server was running innodb_file_per_table
  • There was a …
[Read more]
An argument for not using mysqldump

I have a 5G mysqldump which takes 30 minutes to restore from backup.  That means that when the database reaches 50G, it should take 30x10=5 hours to restore.  Right?  Wrong.

Mysqldump recovery time is not linear. Bigger tables, or tables with more indexes, will always take more time to restore.

If I restore from a raw backup (LVM snapshot, xtrabackup, innodb hot backup), it is very easy to model how much longer recovery time will take:

Backup is 80 GB.
Copy runs at 70 MB/s.
10 GB is already complete.
Remaining time = ((80 - 10) * 1024) / 70 / 60 = ~17 minutes
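The same estimate as a quick shell calculation, using the example numbers above:

# ((80 - 10) GB remaining * 1024 MB/GB) / 70 MB/s / 60 s/min
TOTAL_GB=80; DONE_GB=10; RATE_MBS=70
echo "$(( (TOTAL_GB - DONE_GB) * 1024 / RATE_MBS / 60 )) minutes remaining"   # prints "17 minutes remaining"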

I can tell progress with mysqldump by monitoring the rate at which SHOW GLOBAL STATUS LIKE 'Handler_write'; increases and comparing it to my knowledge of roughly how many rows are in each table.  But progress != a magic number like "17 minutes".  Not unless I do a lot of complex modeling.
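For example, a rough way to sample that rate from the shell, assuming the mysql client can connect without extra options; the 10-second interval is arbitrary.

# Take two readings of Handler_write ten seconds apart and report the insert rate.
R1=$(mysql -N -e "SHOW GLOBAL STATUS LIKE 'Handler_write'" | awk '{print $2}')
sleep 10
R2=$(mysql -N -e "SHOW GLOBAL STATUS LIKE 'Handler_write'" | awk '{print $2}')
echo "insert rate: $(( (R2 - R1) / 10 )) rows/sec"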

I am …

[Read more]
Easy MySQL: how to backup databases to a remote machine

Here’s a simple answer to a simple question. “How do I run a backup of MySQL to another machine without writing to the local server’s filesystem?” – this is especially useful if you are running out of space on the local server and cannot write a temporary file to the filesystem during backups.

Method one – this writes a remote file.
mysqldump [options] [db_name|--all-databases] | gzip -c | ssh user@host.com "cat > /path/to/new/file.sql.gz"

Method two – this writes directly into a remote mysql server
mysqldump [options] [db_name|--all-databases] | mysql --host=[remote host] --user=root --password=[pass] [db_name]

Why you can’t rely on a replica for disaster recovery

A couple of weeks ago one of my colleagues and I worked on a data corruption case that reminded me that sometimes people make unsafe assumptions without knowing it. This one involved SAN snapshotting that was unsafe.

In a nutshell, the client used SAN block-level replication to maintain a standby/failover MySQL system, and there was a failover that didn't work; both the primary and the fallback machine had identically corrupted data files. After fsck was run on the replica, the InnoDB data files were entirely deleted.

When we arrived on the scene, there was a data directory with an 800+ GB data file, which we determined had been restored from a SAN snapshot. Accessing this file caused a number of errors, including warnings about accessing data outside of the partition boundaries. We were eventually able to coax the filesystem into truncating the data file back to a size that didn't contain invalid pointers and could be read without …

[Read more]
Recover BLOB fields

For a long time, long types such as BLOB and TEXT were not supported by the Percona InnoDB Recovery Tool. The reason lies in the special way InnoDB stores BLOBs.

An InnoDB table is stored in a clustered index called PRIMARY. It must exist even if the user hasn't defined a primary key. The PRIMARY index pages are identified by an 8-byte number, index_id. The highest 4 bytes are always 0, so index_id is often written as 0:<4-byte number>, e.g. 0:258. The pages are organized in a B-tree, with the primary key used as the key. Inside a page, records are stored in a linked list.
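As an aside not taken from the original post: on servers that expose the InnoDB data dictionary through INFORMATION_SCHEMA (MySQL 5.6 and later, for example), a table's PRIMARY index_id can be looked up with a query along these lines; the table name is a placeholder.

SELECT i.INDEX_ID, i.NAME, t.NAME AS TABLE_NAME
FROM INFORMATION_SCHEMA.INNODB_SYS_INDEXES i
JOIN INFORMATION_SCHEMA.INNODB_SYS_TABLES t USING (TABLE_ID)
WHERE t.NAME = 'test/t1' AND i.NAME = 'PRIMARY';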

An InnoDB page is 16k by default. Obviously, if a record is too long, a single page can't store it. If the total record size is less than UNIV_PAGE_SIZE/2 - 200 (roughly 8k), then the full record is stored in a page of the PRIMARY index. Let's call it internal. In the InnoDB sources such pages have type FIL_PAGE_INDEX*. If the record is longer than that, only the first 768 bytes of …

[Read more]
A backup today saves you tomorrow

Whether you’re working with MySQL, MySQL Cluster, or any other RDBMS, every database with a requirement for persistent data should always have a backup. As a Production DBA, you’re the insurance policy that safeguards the data. Bad things do happen. Backups are your safety net to ensure you always have a way to recover should the worst happen and the database become irreparable.

There are many ways to produce a consistent backup of MySQL; I have listed a few of the options available below. Remember, backups are your safety net. Failing to retrieve a consistent backup when you need it most can be a very career-limiting move, so no matter what backup method you choose, always test your backups!

Logical Backups
The ever popular mysqldump is a backup and export utility provided with the MySQL binaries …

[Read more]
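For reference, a minimal sketch of one such logical backup; the options shown are common choices for InnoDB and not necessarily the ones the post goes on to recommend, and the output path is a placeholder.

# Consistent dump of all databases, streamed through gzip, without locking InnoDB tables for the duration.
mysqldump --single-transaction --routines --triggers --all-databases | gzip > /backups/all-databases.sql.gz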
Kontrollkit – new version available for download

Just a quick notice to let everyone know that there is a new version of Kontrollkit available. It includes some required bug fixes to the recently added Python backup script and some Solaris-compatible changes to the various my.cnf files. You can download the new version here: http://kontrollsoft.com/software-downloads, or here: http://code.google.com/p/kontrollkit/
