Displaying posts with tag: archival
Circular Partitioning scheme for a fully Automated Archival / Purge process with Events

Partitioning

The advantage of partitioning is that we can drop or truncate partitions, which is very quick and completes in seconds. The operation does take a metadata lock on the table, but only for a few seconds; the duration depends on how many of the table's pages are in use in the buffer pool. With MySQL 5.6 it is also possible to exchange partitions, which helps in moving archived data to another table with an identical structure.
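
As a minimal sketch of both operations, assuming a hypothetical RANGE-partitioned table named metrics (all names here are illustrative, not from the original post):

-- Hypothetical table, partitioned by month on the created column.
CREATE TABLE metrics (
    id      BIGINT       NOT NULL,
    created DATE         NOT NULL,
    payload VARCHAR(255),
    PRIMARY KEY (id, created)
)
PARTITION BY RANGE (TO_DAYS(created)) (
    PARTITION p201401 VALUES LESS THAN (TO_DAYS('2014-02-01')),
    PARTITION p201402 VALUES LESS THAN (TO_DAYS('2014-03-01')),
    PARTITION pmax    VALUES LESS THAN MAXVALUE
);

-- Purge: truncating (or dropping) an old partition is a quick
-- metadata operation, unlike a row-by-row DELETE.
ALTER TABLE metrics TRUNCATE PARTITION p201401;

-- Archive (MySQL 5.6+): swap a partition with an empty,
-- identically structured, non-partitioned table.
CREATE TABLE metrics_archive LIKE metrics;
ALTER TABLE metrics_archive REMOVE PARTITIONING;
ALTER TABLE metrics EXCHANGE PARTITION p201402 WITH TABLE metrics_archive;

After the exchange, the rows that were in p201402 live in metrics_archive and the partition itself is empty, ready to be reused or dropped.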
   
Fully Automated archival / purge 

Consider a fast-growing table which is partitioned to help both the queries and quick archival/purge. Queries that specify the partitioning filter limit the search to the relevant partitions, which can considerably reduce query time. Purging old data to reclaim storage space can be readily achieved by truncating the old partitions. Archival has multiple techniques, but with MySQL 5.6 exchanging partitions makes archival a real …
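
To make the automated part concrete, here is a minimal sketch: EXPLAIN PARTITIONS (MySQL 5.6) confirms that a query is pruned to the relevant partitions, and a scheduled event drives the periodic rotation. The procedure rotate_metrics_partitions is hypothetical; it stands in for whatever exchange-and-truncate logic the post builds.

-- Pruning check: the partitions column of the plan should list
-- only the partitions the query actually touches.
EXPLAIN PARTITIONS
SELECT * FROM metrics
WHERE created >= '2014-02-01' AND created < '2014-03-01';

-- Automation: run the rotation monthly via the event scheduler.
SET GLOBAL event_scheduler = ON;

CREATE EVENT ev_rotate_metrics
    ON SCHEDULE EVERY 1 MONTH
    STARTS '2014-06-01 01:00:00'
DO
    CALL rotate_metrics_partitions();  -- hypothetical archive/truncate procedure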

[Read more]
Archival with Exchange Partitions in MySQL 5.6

If a table is partitioned, then it is easy to maintain. If a table has grown huge and the backups just keep running longer, then you probably need to think about archival or purge.

If you don't want the data from old partitions, purge it simply by truncating or dropping those partitions, which completes momentarily without locking the table for a long time.

Archival can be done a couple of ways.
We can take a mysqldump (preferably from a slave) with a WHERE condition to filter out the data you don't want to copy, and then import the dump file into an archive database. Check this link out for mysqldump with a WHERE clause .. and once the data is copied, the …
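
As a rough sketch of that approach (the database, table, and filter values are made up for illustration):

# Dump only the rows to be archived, preferably from a slave.
mysqldump --single-transaction \
    --where="created < '2014-01-01'" \
    mydb metrics > metrics_pre2014.sql

# Load the dump into the archive database.
mysql archive_db < metrics_pre2014.sql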

[Read more]
How to Archive Terabytes of data in a few seconds


[Read more]
Archival and Analytics - Importing MySQL data into Hadoop Cluster using Sqoop

May 16, 2014 By Severalnines

We won’t bore you with buzzwords like volume, velocity and variety. This post is for MySQL users who want to get their hands dirty with Hadoop, so roll up your sleeves and prepare for work. Why would you ever want to move MySQL data into Hadoop? One good reason is archival and analytics. You might not want to delete old data, but rather move it into Hadoop and make it available for further analysis at a later stage. 

 

In this post, we are going to deploy a Hadoop Cluster and export data in bulk from a Galera Cluster using Apache Sqoop. Sqoop is a well-proven approach for bulk data loading from a relational database into the Hadoop File System. There is also the Hadoop Applier, available from …
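
A minimal Sqoop invocation for this kind of bulk export might look like the following; the host, credentials, table, and HDFS path are placeholders rather than values from the post:

# Bulk-import one table from a Galera node into HDFS.
sqoop import \
    --connect jdbc:mysql://galera-node1:3306/mydb \
    --username sqoop_user -P \
    --table metrics \
    --target-dir /user/archive/metrics \
    --split-by id \
    --num-mappers 4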

[Read more]