
Displaying posts with tag: mysqldump

Easy MySQL: how to backup databases to a remote machine

Here’s a simple answer to a simple question: “How do I run a backup of MySQL to another machine without writing to the local server’s filesystem?” This is especially useful if you are running out of space on the local server and cannot write a temporary dump file to disk during the backup.

Method one – this writes a remote file.
mysqldump [options] [db_name|--all-databases] | gzip -c | ssh user@host.com "cat > /path/to/new/file.sql.gz"

Method two – this writes directly into a remote MySQL server
mysqldump [options] [db_name|--all-databases] | mysql --host=[remote host] --user=root --password=[pass] [db_name]
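
A third variation on the same idea, sketched here under the assumption that the backup machine can also reach the database host over ssh: run the dump from the backup side and pull it, so again nothing is written on the database server (placeholders follow the same convention as above).

ssh user@host.com "mysqldump [options] [db_name|--all-databases] | gzip -c" > /path/to/new/file.sql.gz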

mysqldump each object separately

As a continuation of a previous blog post from last week, and inspired by Kedar, I have created a small script to export tables, stored procedures, functions and views into their respective files. It works for multiple databases, where you can also specify a list of databases, and although things like events and triggers are still missing, they are easily added.

It is especially useful for dumping stored procedures separately, since that functionality is lacking in mysqldump.

I placed the script on MySQL Forge for anybody to use, give feedback on, and possibly enhance.
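
For readers who do not want to fetch the Forge script, here is a minimal sketch of the same idea; it is not the script itself, it assumes credentials are picked up from ~/.my.cnf, and the database name and output directory are placeholders.

DB=mydb
OUT=/backups/$DB
mkdir -p "$OUT"

# Stored procedures and functions only (no table DDL, no data, no triggers).
mysqldump --routines --no-create-info --no-data --no-create-db --skip-triggers "$DB" > "$OUT/routines.sql"

# Each view definition in its own file.
for VIEW in $(mysql -N -B -e "SELECT table_name FROM information_schema.views WHERE table_schema='$DB'"); do
    mysqldump --no-data "$DB" "$VIEW" > "$OUT/view_$VIEW.sql"
done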

Dumping DDL – mysqldump tables, stored procedures, events, triggers (separately)

If you like to keep your DDL backed up in a source management tool such as svn or cvs, and want to do it individually for stored procedures, events, triggers, tables and such rather than in a single file, you can easily do so using the commands below. You could even include --skip-dump-date or --skip-comments and compare the DDL daily, checking for alterations and making sure you are aware of any DDL changes done on the database.

user=backup_user
password=`cat ~/.backup_password`
hostname=127.0.0.1
port=3306
dbname=test_db
path=/home/mysql/ddl
date=`date +%Y%m%d`

mysqldump -u$user -p$password -h$hostname -P$port --no-create-info --no-data --no-create-db --skip-opt $dbname > "$path"/"$dbname"_triggers_"$date".sql
mysqldump -u$user -p$password

  [Read more...]
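
To illustrate the version-control angle mentioned above, here is a hedged sketch that reuses the variables from the snippet; it assumes the output file already sits in an svn working copy at $path, which is not part of the original post.

# Schema-only DDL with stable output, so a daily diff exposes any change.
mysqldump -u$user -p$password -h$hostname -P$port --no-data --routines --events \
    --skip-dump-date --skip-comments $dbname > "$path"/"$dbname"_schema.sql
cd "$path" && svn diff "$dbname"_schema.sql && svn commit -m "daily DDL snapshot" "$dbname"_schema.sql
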
Simple Backup Server
I have not written an article in a while; I partly blame the World Cup and my day job. The time has come to share some of my recent experiences with a neat project to provide several teams internally with current MySQL backups.

When faced with these types of challenges, my first step is to look into OSS packages and how they can be combined into an actual solution. It helps me understand the underlying technologies and challenges.

ZRM Backup

I reviewed Zmanda's Recovery Manager for MySQL Community Edition in the Fall 2008 issue of MySQL Magazine. It remains one of my favorite backup tools for MySQL since it greatly simplifies the configuration of MySQL backups, taking care of most of the details. Its flexible reporting capabilities

  [Read more...]
Backup large databases with mysqldump by splitting up the backup files
The primary responsibility of MySQL professionals is to establish and run proper backup and recovery plans. The most commonly used method to back up a MySQL database is the mysqldump utility, which creates a backup file for one or more MySQL databases consisting of the DDL/DML statements needed to recreate the databases with their data. To [...]
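
As a rough illustration of the splitting idea (a generic sketch, not necessarily the method from the post; credentials are assumed to come from ~/.my.cnf and /backups is a placeholder), one compressed file per database:

for DB in $(mysql -N -B -e "SHOW DATABASES" | grep -Ev '^(information_schema|performance_schema)$'); do
    mysqldump --single-transaction "$DB" | gzip > "/backups/${DB}_$(date +%F).sql.gz"
done
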
MySQLDump from A to B with pipes
Moving your data and tables around comes in many different flavours. The use of mysqldump is common practice to dump your data and schema out to a file. It is also possible to pipe your mysqldump into a second server. Try the code below (adapting the users and passwords!) in a test environment:
$ mysqldump -u UserA -pp455w0rd --single-transaction --all-databases --host=Server1 | mysql -u UserA -pp455w0rd --host=Server2
As you can see from the command, we are piping all the databases from Server1 into Server2 within a single transaction. If you're not using transactional tables, replace --single-transaction with --lock-all-tables to ensure you get a consistent copy.

Remember: you must be able to see the 'other' server over the network, and permissions must be set for remote access from the feeding server. For large databases this technique may not be suitable.

  [Read more...]
Simple Backup Restore Trick
I don't usually post these simple tricks, but this one came to my attention today; it's very simple, and I have seen issues when people try to work around it. It tries to solve the question: how do I restore my production backup to a different schema? It looks obvious, but I haven't seen many people thinking about it.

Most of the time backups using mysqldump will include the following line:

USE `schema`;

This is OK when you're trying to either (re)build a slave or restore a production database. But what about restoring it to a test server in a different schema?

The actual trick


Using vi (or similar) editors to edit the line will most likely result in the editor trying to load the whole backup file into memory, which might cause paging or even crash the server if the backup is big.
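
One common streaming approach, shown here as a hedged sketch rather than the author's exact trick (schema_test and backup.sql are placeholders, and credentials are assumed to come from ~/.my.cnf): filter the dump on the fly with sed, which streams the file and so avoids the memory problem above.

mysql -e "CREATE DATABASE IF NOT EXISTS schema_test"
# Drop the original CREATE DATABASE/USE statements and load everything
# into the new schema instead.
sed -e '/^CREATE DATABASE.*`schema`/d' -e '/^USE `schema`;/d' backup.sql | mysql schema_test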

  [Read more...]
A backup today saves you tomorrow

Whether you’re working with MySQL, MySQL Cluster, or any other RDBMS, every database with a requirement for persistent data should always have a backup. As a Production DBA you’re the insurance policy to safeguard the data. Bad things do happen. Backups are your safety net to ensure you always have a way to recover should the worst happen and the database becomes irreparable.

There are many ways to produce a consistent backup of MySQL; I have listed a few of the options available below. Remember, backups are your safety net: failing to retrieve a consistent backup when you need it most can be a very career-limiting move, so no matter what backup method you choose, always test your backups!

Logical Backups
The ever popular mysqldump is a

  [Read more...]
Restoring XML-formatted MySQL dumps
To whom it may concern -

The mysqldump program can be used to make logical database backups. Although the vast majority of people use it to create SQL dumps, it is possible to dump both schema structure and data in XML format. There are a few bugs (#52792, #52793) in this feature, but these are not the topic of this post.

XML output from mysqldump

Dumping in XML format is done with the --xml or -X option. In addition, you should use the

  [Read more...]
MySQL: Partition-wise backups with mysqldump
To whom it may concern,

in response to a query from André Simões (also known as ITXpander), I slapped together a MySQL script that outputs mysqldump commands for backing up individual partitions of the tables in the current schema. The script is maintained as a snippet at MySQL Forge.

How it works

The script works by querying the

  [Read more...]
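
Roughly, such a generator could look like the query below. This is a hedged sketch rather than the Forge snippet itself: it assumes numeric RANGE partitions in a hypothetical schema mydb and skips MAXVALUE partitions for brevity.

-- Build one mysqldump command per RANGE partition. Backticks are stripped
-- from the partitioning expression so the generated --where survives the
-- shell's double quotes.
SELECT CONCAT(
         'mysqldump mydb ', p.table_name, ' --no-create-info --where="',
         REPLACE(p.partition_expression, '`', ''), ' < ', p.partition_description,
         IFNULL(CONCAT(' AND ', REPLACE(p.partition_expression, '`', ''),
                       ' >= ', prev.partition_description), ''),
         '" > ', p.table_name, '_', p.partition_name, '.sql') AS dump_command
FROM information_schema.partitions AS p
LEFT JOIN information_schema.partitions AS prev
       ON  prev.table_schema = p.table_schema
       AND prev.table_name = p.table_name
       AND prev.partition_ordinal_position = p.partition_ordinal_position - 1
WHERE p.table_schema = 'mydb'
  AND p.partition_method = 'RANGE'
  AND p.partition_description <> 'MAXVALUE';
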
Log Buffer #178, a Carnival of the Vanities for DBAs
Dave Edwards offered me the chance to write this week's Log Buffer, and I couldn't help but jump at the opportunity. I'll dive straight into it.

Oracle

I'll start with Oracle. The dust of the Sun acquisition has settled, so maybe it's time to return our attention to the regular issues.

Let's start with Hemant Chitale's Common Error series and his Some Common Errors - 2 - NOLOGGING as a Hint, explaining what to expect from NOLOGGING. Kamran Agayev offers us an insight into Hemant's personality with his Exclusive Interview with Hemant K Chitale. My favorite quote is:

Do you

  [Read more...]
MyDumpSplitter-Extract tables from mysqldump | MySQL dump splitter – shell script
A lot of articles have been written on splitting mysqldump output (mysql dump splitter) to grab required tables. A while back, when Shlomi suggested a “sed” way, I actually shell scripted […]
Filtering mysqldump output

Several people have suggested a more flexible approach to mysqldump output in the matter of user privileges.
When you dump the data structure for views, triggers, and stored routines, you also dump the permissions related to such objects, with the DEFINER clause.
It would be nice to have such DEFINER clauses removed, or even replaced with the appropriate users in the new database.

The mysqldump filter was created with this need in mind. It allows you to remove all DEFINER clauses and optionally replace them with a better one.
For example:

mysqldump --no-data sakila | dump_filter --delete >
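
For comparison, a generic sed-based sketch (not the dump_filter tool itself) that strips the common DEFINER=`user`@`host` form while restoring into another server; target_host is a placeholder.

mysqldump --no-data sakila \
  | sed -e 's/DEFINER=`[^`]*`@`[^`]*`//g' \
  | mysql -h target_host sakila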

  [Read more...]
On restoring a single table from mysqldump

Following Restore one table from an ALL database dump and Restore a Single Table From mysqldump, I would like to add my own thoughts and comments on the subject.

I also wish to note performance issues with the two suggested solutions, and offer improvements.

Problem relevance

While the problem is interesting, I just want to note that it is only relevant for very specific database sizes. Too small – and it doesn’t matter how you solve it (e.g. just open vi/emacs and copy+paste). Too big – and it would not be worthwhile to restore from mysqldump anyway. I would suggest that the problem is interesting somewhere in the region of a few dozen GB worth of

  [Read more...]
Video: Building a MySQL Slave and Keeping it in Sync

Last night at the Boston MySQL User Group I presented on how to get a consistent snapshot to build a slave, how to use mk-table-checksum to check for differences between masters and slaves on an ongoing basis, and how to use tools such as mk-table-sync and mysqldump to sync the data if there are any discrepancies.

The slides are online at http://technocation.org/files/doc/slave_sync.pdf.

The video can be watched on YouTube at http://www.youtube.com/watch?v=Un0wqYKmbWY or directly in your browser with the embedded player below:

Backup MySQL in a Second with ZFS

MySQL backup soon becomes an important matter when the database is used in production. The pain point comes from the fact that while it is being backed up, the database is not available to respond to client requests. With mysqldump - the standard tool for performing MySQL backups - and a large database, the operation can take many tens of minutes if not hours. If I am running my business online this is simply not acceptable.

The classical approach to work around this problem is to take advantage of MySQL replication: I set up a master/slave configuration where the slave acts as a copy of the master. Then, when needed, I run mysqldump on the slave without any service interruption on the master.

But ZFS snapshots bring a new, straightforward approach that avoids the pain and the complexity of a master/slave replication setup.
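
A minimal sketch of the snapshot idea, assuming the data directory lives on a hypothetical ZFS dataset named tank/mysql. For InnoDB-only data such a snapshot is crash-consistent (InnoDB replays its log on restore); MyISAM tables would want a FLUSH TABLES WITH READ LOCK around it.

SNAP="tank/mysql@backup-$(date +%Y%m%d%H%M)"
zfs snapshot "$SNAP"           # near-instant, regardless of database size

# To restore, clone the snapshot and point a mysqld instance at the clone.
zfs clone "$SNAP" tank/mysql_restore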

  [Read more...]
Reasons to use innodb_file_per_table

When working with InnoDB, you have two ways for managing the tablespace storage:

  • Throw everything in one big file (optionally split).
  • Have one file per table.

I will discuss the advantages and disadvantages of the two options, and will strive to convince you that innodb_file_per_table is preferable.

A single tablespace

Having everything in one big file means all tables and indexes, from all schemas, are ‘mixed’ together in that file.

This allows for the following nice property: free space can be shared between different tables and different schemas. Thus, if I purge many rows from my log table, the now unused space can be occupied by new rows of any other table.

This same nice property also translates to a not so nice one: data can be greatly fragmented across the

  [Read more...]
Extracting a Database From a mysqldump File

Restoring a single database from a full dump is pretty easy, using the mysql command line client’s --one-database option:

shell> mysql -u root -p --one-database db_to_restore < fulldump.sql

But what if you don’t want to restore the database, you just want to extract it out of the dump file? Well, that happens to be easy as well, thanks to the magic of sed:

shell> sed -n '/^-- Current Database: `test`/,/^-- Current Database: `/p' fulldump.sql > test.sql

You just need to change “test” to be the name of the database you want extracted.

Backing up permissions for individual databases

Sometimes, you want to back up individual databases in MySQL to move to a different server. This part is easy using mysqldump:

shell> mysqldump -u root -p --databases db1 db2 ... > backup.sql

The problem is, what happens when you want to back up the permissions associated with these databases? Well, here are a few queries to help you out.

-- Grab the users with global permissions,
-- with permissions to the databases you want,
-- and tables/stored procedures in it.
mysql> SELECT u.* INTO OUTFILE '/tmp/user.txt'
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\'
    LINES TERMINATED BY '\n'
    FROM
        mysql.user u
    WHERE
        u.Select_priv='Y'
    UNION
    SELECT u.*
    FROM
        mysql.user u,
        mysql.db d
    WHERE
        d.Db IN('db1', 'db2', ...) AND
        d.User = u.user
    UNION
    SELECT u.*
    FROM
        mysql.user u,
        mysql.tables_priv

  [Read more...]
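
A simpler, hedged alternative worth mentioning: capture privileges as runnable GRANT statements for every user, then grep the output afterwards for the databases you care about (credentials are assumed to come from ~/.my.cnf; grants.sql is a placeholder).

mysql -N -B -e "SELECT CONCAT('SHOW GRANTS FOR ''', user, '''@''', host, ''';') FROM mysql.user" \
  | mysql -N -B \
  | sed 's/$/;/' > grants.sql
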
A review of SecoBackup Amazon S3 backups for MySQL

After I wrote about things you need to know about MySQL backups, a customer contacted me and asked me what I know about SecoBackup for MySQL. I see it has a very low cost, and Percona has Amazon accounts for testing purposes, so I quickly downloaded s3sql_2.2.0.1-2.01_i386.deb, installed it, configured it, and gave it a whirl.

Since I just wanted to see what it does to take a backup, I started up a sandbox running from /tmp/12345 and configured it to back up msandbox:msandbox@127.0.0.1 (I shut down my main mysqld on my laptop to make sure it couldn’t connect to the default instance).

Then I configured a backup set and tried to take a backup. Right away I saw it isn’t full

  [Read more...]
Restoring from a mysqldump into tables with triggers
This is actually old news, but I never thought to file a bug report (until now) or say anything to anyone about it. If you use mysqldump to dump and restore a MySQL table that has INSERT triggers, you can get different data in your restored database than you had when you dumped. [...]
Parameters to use on mysqldump
mysqldump is commonly used for making a MySQL database backup or for setting up replication. As with all MySQL binaries, there are quite a few parameters to mysqldump. Some are just niceties but some flags are a must. Of course, choosing the parameters to use greatly depends on your requirements, database setup, network capacity, etc. Here [...]
Parallel mysqldump backup script available. Testers wanted.

Large databases, long mysqldump times, long waits for globally locked tables. These problems basically never go away when you rely on mysqldump with --all-databases or a list of databases, as it dumps schemas serially. I’m not going to explain serial vs parallel processing here since that’s a larger topic. Suffice it to say that in these days of multi-core / multi-CPU servers we only make use of one processor core when we serially export databases using mysqldump. So, I have a new script that attempts to alleviate those issues, and now I need testers to provide feedback/improvements.

In order to keep some sanity when dealing with hundreds of database servers, the script takes care of the following:

  • low global locking time requirements: solved by parallel tasks / forked processes
  • backup file checking: with

  [Read more...]
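
As a rough illustration of the serial-versus-parallel point above (a generic sketch, not the author's script; credentials are assumed to come from ~/.my.cnf and /backups is a placeholder), databases can be dumped a few at a time with xargs:

mysql -N -B -e "SHOW DATABASES" \
  | grep -Ev '^(information_schema|performance_schema)$' \
  | xargs -P 4 -I {} sh -c 'mysqldump --single-transaction "{}" | gzip > "/backups/{}.sql.gz"'
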
MySQL Trigger Woes

After a period of inactivity I was back hacking on a Drupal project. I had taken a MySQL dump from a production platform and imported it into my local dev setup, just to have some realistic data.

All of a sudden some forms started failing with the following error:

user warning: There is no 'user'@'nonlocalhost' registered query: insert into blah (stuff,morestuff) values (x,y) in /var/vhost/drupal-tree/includes/database.mysql.inc on line 172.

My Drupal database connection was correct and working for selects etc.; only a limited set of inserts failed.

After some debugging I realised that the error was not Drupal related: running the same query on my MySQL console gave the same error.

ERROR 1449 (HY000): There is no 'user'@'nonlocalhost' registered

The error came from a trigger on the table I was inserting data into that

  [Read more...]
Introducing MySQL Parallel Dump

A while ago Peter Zaitsev wrote about his wishes for mysqldump. These included multi-threaded dumps and "safe" dumps that would wait for a server to restart if it crashed, then keep dumping other tables. I've had sketches of this done for a while, but this week I fleshed it out while writing about backup and recovery for our upcoming book.

