Travis-CI is a crucial component in Continuous Integration/Continuous Deployment. We use it a lot for running unit tests and for building and uploading Python modules. Recently I had to solve the problem of building RPMs on Travis-CI with Docker containers. In this post I will describe, step by step, how to do that. We distribute our backup tool as RPM packages […]
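The general idea can be sketched as a single `docker run` invocation from the Travis build. This is only a sketch of the approach: the image name, spec file name, and output directory below are hypothetical placeholders, not the tool's actual layout.

```shell
# Sketch of building an RPM inside a Docker container from a CI job.
# IMAGE and SPEC are hypothetical; substitute your own values.
IMAGE="centos:7"
SPEC="myapp.spec"

# Mount the checked-out repo into the container and run rpmbuild there;
# the resulting RPMs land back in the host's working directory.
BUILD_CMD="docker run --rm -v $PWD:/src -w /src $IMAGE \
  /bin/bash -c 'yum -y install rpm-build && rpmbuild -bb --define \"_rpmdir /src\" $SPEC'"

echo "$BUILD_CMD"
```

In a `.travis.yml` this command would go under the `script:` step, with the Docker service enabled for the build.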
Percona Live is Christmas in the MySQL world. It's the time when friends and family gather over a glass of beer, talk about the achievements of the past year, and make New Year's resolutions for the next one.
There will be two talks from TwinDB this year: one about data recovery and one about backups.
The data recovery talk is a traditional one by now. I will briefly cover the InnoDB file format so you know where to look for data. I will show how to recover data from the two most common accidents: InnoDB tablespace corruption and DROP DATABASE. Data recovery is impossible without table structure recovery, so I will also show how to get the structure from an .frm file or from the InnoDB dictionary (e.g. after DROP TABLE, when the .frm file is gone). We made a drastic improvement in user …
A week or two ago one of my former colleagues at Percona, Jervin Real, gave a talk titled Evolving Backups Strategy, Deploying pyxbackup at Percona Live 2015 in Amsterdam. I think Jervin raised some very good points about where MySQL backup solutions in general fall short. There are definitely a lot of tools and scripts out there that claim to do MySQL backups correctly but don't actually do so. What I am more interested in, though, is measuring TwinDB against the points that Jervin highlighted, to see if TwinDB falls short too.
We distribute TwinDB agent as a package that can be installed using the standard OS package management system. For example, using YUM on CentOS, RHEL and Amazon Linux, or using APT …[Read more]
Thank you for attending my July 15 webinar, “Creating Best in Class Backup solutions for your MySQL environment.” Due to the amount of content we discussed and some minor technical difficulties near the end of the webinar, we decided to cover the final two slides of the presentation, along with the questions attendees asked during the webinar, in this blog post.
The final two slides were about our tips for having a …[Read more]
Protecting the information in databases and being able to restore them when needed is a top-priority task in many companies. But not all DBMSs have built-in tools for data protection (tools to back up and restore databases), and MySQL is one of those DBMSs.
Making database backups is one of the most important parts of administering MySQL databases, because the loss of critical data can be irreparable.
The task of making daily MySQL backups can be solved with the Backup Database feature of dbForge Studio for MySQL. To use it, set up the backup manually in the wizard and schedule the backups.
To open the Database Backup wizard, choose Database → Backup Database from the main menu. …[Read more]
MySQL is frequently referred to as a database for Web applications, and that is partly true: MySQL became popular owing to its simplicity, its speed, and its close association with PHP. Developers of small Web projects often choose MySQL as the back end for their sites. Does this mean that MySQL can be used only for small databases? Not at all. There are lots of MySQL databases whose size is measured in gigabytes, and MySQL servers are frequently clustered to increase performance. When DBAs work with large amounts of data, they frequently have to make backup copies correctly and efficiently, i.e. to export MySQL databases to SQL (a MySQL backup). Importing a MySQL database from SQL correctly is just as important, both when restoring a corrupted database and when migrating a database from one server to another.
What should be taken into account when exporting a large MySQL …[Read more]
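For a large database, the usual command-line route is mysqldump piped through a compressor. Here is a minimal sketch, assuming the database is called `mydb` (a hypothetical name) and its tables are InnoDB:

```shell
DB="mydb"   # hypothetical database name

# --single-transaction: consistent snapshot without locking (InnoDB only)
# --quick: stream rows instead of buffering whole tables in memory
# --routines: include stored procedures and functions in the dump
DUMP_CMD="mysqldump --single-transaction --quick --routines $DB"

# Export, compressing on the fly to keep the file manageable:
#   $DUMP_CMD | gzip > $DB.sql.gz
# Import on the target server:
#   gunzip < $DB.sql.gz | mysql $DB
echo "$DUMP_CMD"
```

Compressing in the pipe avoids ever writing the uncompressed SQL file to disk, which matters once the dump is measured in gigabytes.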
There are several ways to take backups (some good, some bad, and some that will depend on your situation). Here's the thought process I use for choosing a backup strategy.
If your data set is small (I realize "small" is a relative term; to qualify it, let's say …), mysqldump is usually enough. It lets you:
- back up everything, or just certain databases or tables
- back up only the DDL
- optimize the dump for a faster restore
- make the resultant SQL file more compatible with other …
and many more things.
However, the most important options are related to the consistency of your backup. My favorite options are:
- --single-transaction: this option gives a consistent backup if (and only if) the tables use the InnoDB storage engine. If you have any non-read-only MyISAM tables, then don't use this …
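As an illustration, the bullet points above map onto mysqldump flags roughly like this (the database and table names are hypothetical):

```shell
# Everything, one database, or specific tables:
ALL_DBS="mysqldump --all-databases"
ONE_DB="mysqldump mydb"
SOME_TABLES="mysqldump mydb users orders"

# DDL only, no row data:
DDL_ONLY="mysqldump --no-data mydb"

# Faster restore: extended INSERTs are already on by default via --opt;
# --quick helps on large tables by streaming rows:
FAST="mysqldump --opt --quick mydb"

# Consistent snapshot for InnoDB tables, without read locks:
CONSISTENT="mysqldump --single-transaction mydb"

echo "$CONSISTENT"
```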
The Malta MySQL User Group (MMUG) met for the second time this Thursday, and compared to last time, we had a much better venue: Ixaris Systems let us use their board room, so we had all the tools we needed to have a good meeting.
We managed to get a group picture before everyone had arrived, so I guess we can call the people in this picture “early birds”.
Once we all arrived, however, Sandro Gauci from EnableSecurity gave us a very interesting talk on SQL injection security, and on general security flaws from a developer's point of view. You can find the slides here: sql-injection.pdf.
Here’s a picture of Mr. Gauci while presenting. (Sorry for the obvious problem with the over-white picture — seems like I …[Read more]
I need help from my fellow MySQL users. I know some of the people who read this are a lot better than me with MySQL, so hopefully you can help.
So today we decided to migrate one of our master database servers to new hardware. Since we got the hardware this morning and wanted to move to it ASAP, we decided to take our slave down, copy the data from it, and bring it up on the future master server. At that point, we would let it run as a slave to the current master until it was time for us to take the old master down. The reason we did that instead of mysqldump/import was to avoid the lag mysqldump creates on our server.
After we did all this and put up the new master server, we started to notice odd issues. After looking around and comparing the old DB with the new one, we found that the new DB was missing data. How that happened is beyond me, and it is the reason I am writing this. …[Read more]
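For comparison, a file-level copy from a slave is only consistent if mysqld is stopped (or the files are snapshotted) during the copy; copying a running server's datadir is one plausible cause of the missing data described above. A hedged sketch of the sequence, with hypothetical host names and paths:

```shell
DATADIR="/var/lib/mysql"            # hypothetical datadir
NEW_MASTER="newmaster.example.com"  # hypothetical new host

# 1. Let the slave catch up, then stop replication and shut MySQL down
#    cleanly so the on-disk files are in a consistent state:
STOP_SQL="STOP SLAVE; FLUSH TABLES;"   # run via: mysql -e "$STOP_SQL"
#    then: service mysqld stop

# 2. Copy the whole datadir to the new hardware:
COPY_CMD="rsync -a $DATADIR/ $NEW_MASTER:$DATADIR/"

# 3. Start MySQL on the new host and let it replicate from the current
#    master (CHANGE MASTER TO ...; START SLAVE;) until cutover.
echo "$COPY_CMD"
```

The key design point is step 1: InnoDB keeps dirty pages in memory, so files copied from a running server may silently lack recent changes.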