Displaying posts with tag: Backup
Webinar Wednesday January 18, 2017: Lessons from Database Failures

Join Percona’s Chief Evangelist Colin Charles on Wednesday, January 18, 2017, at 7:00 am PST (UTC-8) / 10:00 am EST as he presents “Lessons from Database Failures.”

MySQL failures at scale can teach a great deal, and they naturally lead to discussions of topics such as high availability (HA), geographical redundancy and automatic failover. In this webinar, Colin will present case study material (how automatic failover caused GitHub to go offline, why Facebook uses assisted failover rather than fully automated failover, and other scenarios) to look at how the MySQL world is making things better. One way, for example, is using …

[Read more]
How to Replace MySQL with Percona Server on a CPanel, WHM VPS or Dedicated Server

In this blog post, we’ll look at how to replace MySQL with Percona Server for MySQL on a CPanel, WHM VPS or dedicated server.

In general, CPanel and WHM have been leaning towards support of MariaDB over other flavors. This is partly due to the upstream repos replacing the MySQL package with MariaDB (for example, on CentOS).

MySQL 5.6 is still supported though, which means they are keeping support for core MySQL products. But if you want to get some extra performance enhancements or enterprise features for free, without getting too many bells and whistles, you might want to install Percona Server.

I’ve done this work on a new dedicated server with the latest WHM and CPanel on CentOS 7, with MySQL 5.6 installed. …
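As a rough sketch of the package swap on a plain CentOS 7 box, assuming a yum-managed MySQL 5.6 and current Percona repository URLs and package names (a cPanel/WHM server additionally needs cPanel told to stop managing the MySQL packages, which the full post covers), the steps look something like this:

# Take a logical safety backup and stop the running server first.
mysqldump --all-databases --routines --triggers > /root/all-databases-pre-swap.sql
service mysql stop

# Add the Percona yum repository, then swap the server packages.
# (Repository URL and package names are assumptions; check percona.com for current ones.)
yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm
rpm -e --nodeps mysql-community-server mysql-community-client   # names depend on how MySQL was installed
yum install -y Percona-Server-server-56 Percona-Server-client-56 Percona-Server-shared-56

service mysql start
mysql_upgrade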

[Read more]
Archiving MySQL and MongoDB Data

This post discusses archiving MySQL and MongoDB data, and determining what, when and how to archive data.

Many people store infrequently used data. This data takes up storage space and can make your database slower than it needs to be. Archiving that data can be a huge benefit, both in performance and in storage savings.

Why archive?

One of the reasons for archiving data is freeing up space on your database volumes. You can store archived data on slower, less expensive storage devices, and current data on the faster database drives. Archiving old data makes backups and restores run faster since they need to process less data. Last, but by no means least, archiving data has the benefit of making your queries perform more efficiently since they do not need to process through old …
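One common tool for this kind of archiving on MySQL is Percona Toolkit’s pt-archiver; a minimal sketch, with made-up host, schema, table and column names, could look like this:

# Copy rows older than one year into an archive table on another host and
# delete them from the source in small transactions (all names are made up).
pt-archiver \
  --source h=db1,D=shop,t=orders \
  --dest   h=archive1,D=shop_archive,t=orders \
  --where  "created_at < NOW() - INTERVAL 1 YEAR" \
  --limit 1000 --txn-size 1000 --progress 10000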

[Read more]
Using Percona XtraBackup on a MySQL Instance with a Large Number of Tables

In this blog post, we’ll find out how to use Percona XtraBackup on a MySQL instance with a large number of tables.

As of Percona XtraBackup 2.4.5, your open files limit must be high enough to open every single InnoDB tablespace in the instance you’re trying to back up. So if you’re running innodb_file_per_table=1 and have a large number of tables, you’re very likely to see Percona XtraBackup fail with the following error message:

InnoDB: Operating system error number 24 in a file operation.
InnoDB: Error number 24 means 'Too many open files'
InnoDB: Some operating system error numbers are described at http://dev.mysql.com/doc/refman/5.7/en/operating-system-error-codes.html
InnoDB: File ./sbtest/sbtest132841.ibd: 'open' returned OS error 124. Cannot continue operation
InnoDB: Cannot …
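A quick way to check whether you will hit this limit, and to raise it for the backup run, is sketched below; the paths and the limit value are illustrative, and xtrabackup also accepts an --open-files-limit option:

# How many file-per-table tablespaces will the backup need to open?
find /var/lib/mysql -name '*.ibd' | wc -l

# Current per-process limit for this shell, then raise it for the backup run (as root).
ulimit -n
ulimit -n 1000000
xtrabackup --backup --target-dir=/data/backups/full

# To make a higher limit permanent, set nofile in /etc/security/limits.conf
# (or in a systemd override for the service that runs the backup).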
[Read more]
MySQL Enterprise Backup (MEB) and Oracle Storage Cloud

MEB 3.12.0 and above support cloud backup and restore using OpenStack-compatible object stores ("Swift"). This allows MySQL database users with an Oracle Storage Cloud account to take backups, store them directly in the cloud, and restore them from there.

The following steps illustrate how to set up and use MEB with Oracle Storage Cloud:

1) Create an Oracle Storage Cloud account at https://cloud.oracle.com/storage. Once the service is activated, make a note of the following credentials, which will be required in later steps (a sketch of the resulting backup command follows this list):

  • Username

  • Password

  • Identity domain name

  • Service Instance Name: customer-specified name of the service instance
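As a hedged sketch of the backup command these credentials feed into (container, object and credential values are placeholders, and the exact tempauth URL format for your identity domain should be taken from the Oracle Storage Cloud documentation):

# Stream a single-file backup image straight into the Oracle Storage Cloud container.
mysqlbackup --user=root --password \
  --cloud-service=openstack \
  --cloud-container=meb_backups \
  --cloud-object=full_backup.mbi \
  --cloud-user-id='Storage-<identity_domain>:<username>' \
  --cloud-password='<password>' \
  --cloud-tempauth-url='https://<identity_domain>.storage.oraclecloud.com' \
  --backup-dir=/tmp/meb-cloud \
  --backup-image=- \
  backup-to-image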

[Read more]
How to install and configure AutoMySQLBackup

In this blog article we will show you how to install AutoMySQLBackup on a Linux VPS. AutoMySQLBackup is a very useful utility for creating daily, weekly or monthly backups of one or more MySQL databases from one or more MySQL servers. It dumps the databases and compresses them into archives. It comes with many features, such as:

  • Email notification of backups
  • Backup compression and encryption
  • Configurable backup rotation
  • Incremental database backups

As usual, log in to your server as user root (ssh root@IP) and execute the following command to make sure that all services are up to […]
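A rough sketch of how the installation typically proceeds on a CentOS-style VPS, assuming the AutoMySQLBackup 3.x release from SourceForge (the URL, paths and configuration variable names below are assumptions, not the article’s exact steps):

# Make sure the system is up to date, then fetch and unpack the latest release.
yum -y update
cd /opt
wget -O automysqlbackup.tar.gz \
  "https://sourceforge.net/projects/automysqlbackup/files/latest/download"
mkdir -p automysqlbackup && tar xzf automysqlbackup.tar.gz -C automysqlbackup
cd automysqlbackup && ./install.sh   # adjust the cd if the tarball unpacks into its own subdirectory

# Then edit the generated config (e.g. /etc/automysqlbackup/myserver.conf) to set
# CONFIG_mysql_dump_username, CONFIG_mysql_dump_password and CONFIG_db_names,
# and schedule /usr/local/bin/automysqlbackup from cron.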

MySQL-Docker operations. - Part 2: Customizing MySQL in Docker


After seeing the basics of deploying a MySQL server in Docker, in this article we will lay the foundations to customising a node and eventually using more than one server, so that we can cover replication in the next one.
Enabling GTID: the dangerous approach.

To enable GTID, you need to set five variables in the database server:

  • master-info-repository=table
  • relay-log-info-repository=table
  • enforce-gtid-consistency
  • gtid_mode=ON
  • log-bin=mysql-bin

For MySQL 5.6, you also need to set log-slave-updates, but we won't deal with such ancient versions here.
Using the method …
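One common way to get these settings into a Docker MySQL container, not necessarily the approach the post goes on to describe, is to mount a configuration fragment into the directory the official mysql image includes; a minimal sketch with assumed paths and image tag:

# Write the GTID-related settings to a config fragment on the host.
# (server-id is added because log-bin requires it; the five options above
#  come from the post, everything else here is an assumption.)
mkdir -p /opt/mysql-conf
cat > /opt/mysql-conf/gtid.cnf <<'EOF'
[mysqld]
server-id=100
log-bin=mysql-bin
gtid_mode=ON
enforce-gtid-consistency=ON
master-info-repository=TABLE
relay-log-info-repository=TABLE
EOF

# The official image picks up *.cnf files mounted under /etc/mysql/conf.d.
docker run -d --name mysql-gtid \
  -v /opt/mysql-conf:/etc/mysql/conf.d \
  -e MYSQL_ROOT_PASSWORD=secret \
  mysql:5.7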

[Read more]
Binlog Servers for Simplifying Point in Time Recovery

A common way to implement point in time recovery capability is:

  1. to regularly do a full backup of a database, and
  2. to save the binary logs of that database (or from its master if doing backups on a slave).

When point in time recovery is required, you need to:

  a. restore a backup, and
  b. apply the binary logs up to the point of recovery.

(Steps #2 and #b above are the ones that will be simplified …
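In its classic form, before binlog servers simplify it, step b is a mysqlbinlog replay; a minimal sketch with assumed backup and binlog file names:

# a) Restore the last full logical backup (path is an assumption).
mysql -u root -p < /backups/full/all-databases.sql

# b) Replay the saved binary logs up to just before the incident.
mysqlbinlog --stop-datetime="2015-10-01 09:59:00" \
  /backups/binlogs/mysql-bin.000042 \
  /backups/binlogs/mysql-bin.000043 | mysql -u root -p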

TwinDB Really Loves Backups

A week or two ago, one of my former colleagues at Percona, Jervin Real, gave a talk titled “Evolving Backups Strategy, Deploying pyxbackup” at Percona Live 2015 in Amsterdam. I think Jervin raised some very good points about where MySQL backup solutions in general fall short. There are definitely a lot of tools and scripts out there that claim to do MySQL backups correctly, but don’t actually do so. What I am more interested in, though, is measuring TwinDB against the points that Jervin highlighted to see if TwinDB falls short too.

Dependencies

We distribute TwinDB agent as a package that can be installed using the standard OS package management system. For example, using YUM on CentOS, RHEL and Amazon Linux, or using APT …
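Purely as an illustration of that packaging model (the repository RPM URL and package name below are placeholders, not TwinDB’s real ones):

# Hypothetical names: install a vendor release RPM that sets up the yum repo,
# then let yum resolve the agent package and its dependencies.
yum install -y https://repo.example.com/twindb-release.noarch.rpm
yum install -y twindb-agent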

[Read more]
MySQL 5.7 : Playing with mysqlpump

MySQL 5.7 comes with a new backup tool, named mysqlpump, which is almost the same as mysqldump but with the ability to extract data in parallel threads.

I tried a little experiment. Using a server containing 11 databases, with a total of 300 tables and about 20 million rows (roughly 10GB), I used both mysqldump and mysqlpump to get a backup.

mysqldump --all-databases  > dump.sql
mysqlpump --all-databases \
--add-drop-database --add-drop-table --skip-watch-progress \
--default-parallelism=10 \
--parallel-schemas=db,db1,db2 \
--parallel-schemas=db3,db4,db5 \
--parallel-schemas=db6,db7,db8 \
--parallel-schemas=db9,db10 > pump.sql

The backup with mysqldump took 3 minutes and 33 seconds. The one with mysqlpump took 2 minutes and …

[Read more]