I am finalizing my Percona Live talk, "MySQL and Vitess (and Kubernetes) at HubSpot". In this talk, I mention that I like that Percona provides a better MySQL with Percona Server. This comes with a small inconvenience though: with improvements sometimes come regressions. This post is about such a regression and a workaround I implemented some time ago (I should have shared it
We planned to archive data to improve database performance and to reclaim space. To find the best compression method, we evaluated compression in InnoDB and TokuDB, and started benchmarking the compression ratio between the two engines.
Everything went well for some time, but after a few hours we got an error saying data could not be inserted into the TokuDB table because storage was full. This was strange, since the host had plenty of free space.
mysql> show create table mydbops.tokudb\G
*************************** 1. row ***************************
       Table: tokudb
Create Table: CREATE TABLE `tokudb` (
  `ID` int DEFAULT NULL,
  `Name` longtext,
  `Image` blob
) ENGINE=TokuDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
1 row in set (2.18 sec)

mysql> show create table mydbops.innodb\G
*************************** 1. row *************************** …[Read more]
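As a side note, the "compression ratio" being benchmarked is simply original size divided by compressed size. A minimal stand-alone sketch of the metric, using Python's zlib purely as a stand-in compressor (not InnoDB's or TokuDB's actual algorithms):

```python
import zlib

def compression_ratio(data: bytes, level: int = 6) -> float:
    """Return original_size / compressed_size for one zlib pass."""
    compressed = zlib.compress(data, level)
    return len(data) / len(compressed)

# Highly repetitive row payloads compress very well;
# random-looking data (already-compressed blobs) barely compresses.
sample = b"mydbops-row-payload-" * 1000
print(round(compression_ratio(sample), 1))
```

The same arithmetic applies when comparing table sizes on disk: data_length before compression divided by the on-disk footprint after.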
To copy data from particular columns of a table to another table or server, one option is to export the data as CSV and import it into the target table. But when the table is large and only a subset of the data is needed, this approach loads the server heavily, because a huge table scan is involved.
To overcome this, we can use pt-archiver to copy data from the source table to the destination, either whole rows or only the required columns. We can also do this in a controlled manner, so there is no performance impact even during production hours.
Source table structure:

mysql> show create table source\G
*************************** 1. row ***************************
       Table: source
Create Table: CREATE TABLE `source` (
  `id` int unsigned NOT NULL …[Read more]
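A hedged sketch of the pt-archiver invocation described above. Host names, credentials, the column list, and the WHERE clause are placeholders; --no-delete keeps the source rows intact, and --limit with --commit-each throttles the copy into small transactions:

```shell
# Copy only the required columns from source to dest,
# 1000 rows per transaction, without deleting from the source.
pt-archiver \
  --source  h=source-host,D=mydb,t=source,u=archiver,p=secret \
  --dest    h=dest-host,D=mydb,t=dest \
  --columns id,name \
  --where   "created_at < '2019-01-01'" \
  --limit   1000 --commit-each \
  --no-delete --progress 10000
```

Because each chunk commits separately, the copy can run during production hours without holding long transactions open.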
Error 'Table 'EMPLOYEES.POSITION' doesn't exist' on query. Default database: 'employees'. Query: 'ALTER TABLE EMPLOYEES.POSITION ADD COLUMN phone VARCHAR(15)'

Interestingly, the table exists on the slave server. But we kept getting the above error frequently, and we could not afford to rebuild the replication from scratch because the database size is too big.
Our environment is a GTID replication setup from a Windows server (master) to an Ubuntu Linux machine (slave). When we dug into the findings, we concluded it might be a case-sensitivity issue. The lower_case_table_names variable has the same value on both servers. But per the MySQL documentation, database and table names are not case-sensitive on Windows, but are case-sensitive on most varieties of Unix. Column, index, stored routine, and event names are …
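To compare the setting on both ends, the variable can be checked on master and slave alike (the meanings of the values below are from the MySQL reference manual):

```sql
-- Run on both master and slave and compare.
SHOW VARIABLES LIKE 'lower_case_table_names';
-- 0: names stored as given, comparisons case-sensitive (Unix default)
-- 1: names stored in lowercase, comparisons case-insensitive (Windows default)
-- 2: names stored as given, compared as lowercase (macOS default)
```

For replication between platforms with different filesystem case sensitivity, the manual recommends using lower_case_table_names=1 on both sides.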
Wow! It is now easy to restore sensitive data without any fear.
It can be really tough sometimes: we once restored sensitive data without realizing it, and test mails were triggered to customers saying "$100 deducted from your account for purchase". It is a strange scenario when we miss cleansing the customer data in a DEV sandbox!
Yes, masking sensitive/credential data is easy now! Reference: https://github.com/kibitan/masking
I just tried a simple practice run, masking the paymentdb table data with masking; it works as expected.
Step 1: Install the latest Ruby version and masking using the commands below. Before installing, update the server with the latest packages.
rvm install ruby-2.6.3
gem install masking
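A hedged usage sketch based on the project's README: the masking command reads a SQL dump on stdin and writes a masked dump on stdout, driven by a masking.yml config in the working directory. The database name, host, and file names below are placeholders:

```shell
# Dump the database and mask sensitive columns on the fly.
mysqldump paymentdb | masking > paymentdb_masked.sql

# Restore the masked dump into the DEV sandbox.
mysql -h dev-sandbox paymentdb < paymentdb_masked.sql
```

With this in the restore pipeline, the sandbox never sees the real customer values in the first place.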
We are running a 5-node Percona cluster on Ubuntu 16.04, configured with master-slave replication. Suddenly we got a replication-broken alert from the slave server, which had earlier been configured with normal replication.
We tried to sync the data and reconfigure the replication, but were unable to fix it immediately due to the huge transaction volume and GTID-enabled servers. So we decided to use the innobackupex tool, and the problem was fixed in 2 hours.
We followed all the steps from the Percona documentation; below I share the experience in my environment.
Steps involved in repairing the broken replication:

1. Backup the master server
2. Prepare the backup
3. Restore and configure the replication
4. Check replication status
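The four steps above can be sketched with innobackupex roughly as follows. Paths, hosts, and credentials are placeholders; on a GTID setup, the gtid_purged value comes from the xtrabackup_binlog_info file in the backup directory:

```shell
# 1. Backup the master server
innobackupex --user=bkpuser --password=secret /data/backups

# 2. Prepare the backup (apply the redo log)
innobackupex --apply-log /data/backups/2019-10-01_12-00-00

# 3. Restore on the slave (mysqld stopped, datadir empty),
#    then fix ownership and start MySQL
innobackupex --copy-back /data/backups/2019-10-01_12-00-00
chown -R mysql:mysql /var/lib/mysql

# 4. Point the slave at the master using the GTID set from the backup
cat /data/backups/2019-10-01_12-00-00/xtrabackup_binlog_info
mysql -e "RESET MASTER;
          SET GLOBAL gtid_purged='<gtid-set-from-binlog-info>';
          CHANGE MASTER TO MASTER_HOST='master-host',
            MASTER_USER='repl', MASTER_PASSWORD='secret',
            MASTER_AUTO_POSITION=1;
          START SLAVE;"
mysql -e "SHOW SLAVE STATUS\G"
```

The key GTID-specific detail is setting gtid_purged before CHANGE MASTER, so the slave knows which transactions the backup already contains.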
Good news! Once again, the MySQL, MariaDB & Friends Devroom has been accepted for the FOSDEM 2020 edition!
This event is a real success story for the MySQL ecosystem; the content, the speakers and the attendees are growing every year.
FOSDEM 2020 will take place on 1st & 2nd February in Brussels, and our MySQL, MariaDB & Friends devroom will run on Saturday the 1st (this may change). FOSDEM & MySQL/MariaDB is a love story that started 20 years ago!
The committee selecting the content for our devroom has not yet been created; if you want to be part of this experience, just send me an email (candidate at mysqlmariadbandfriends dot eu) before Oct 26th.
If you want to join the committee, you have to meet the following conditions:
- planning to be present at FOSDEM
- having a link …
We were pleased to sponsor the Percona Live Conference in Austin this year: many thanks to the Percona Team for organising a smooth conference yet again!
This is the recap of our week in Texas!
At The Conference
This year’s conference was the first one not taking place in Santa Clara, CA, but rather in Austin, TX. This turned out to be a nice choice by Percona, as it meant that open source database users who may not have travelled to California in the past, were attracted to the new location; and Austin being the new hot spot for (tech) companies at the moment, a lot of “locals” seemed to have made the choice to attend the conference. It was great to meet many new faces as a result.
As Diamond Sponsors of the conference we were of course present with a booth in the exhibition hall, as well as with three …
And while the hotel looked slightly dystopian at night, it was in fact a nice and …[Read more]
(In the previous post, Part 6, we covered Replication.)
In this final blog post, we conclude our series of exploring MyRocks by taking a look at use case considerations. After all, having knowledge of how an engine works is really only applicable if you feel like you’re in a good position to use it.
Advantages of MyRocks
Let’s start by talking about some of the advantages of MyRocks.
MyRocks will typically do a good job of reducing the physical footprint of your data. As I mentioned in my previous post in this series about compression, you have the ability to configure compression down to the individual compaction layers for each column family. You also get the advantage of the fact that data isn’t updated once it’s written to disk. Compaction, which was …[Read more]
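For illustration, per-level compression for the default column family can be set via rocksdb_default_cf_options in my.cnf. The specific algorithm choices below are an assumption for the sketch, not a recommendation:

```ini
[mysqld]
# Illustrative: no compression on the upper (hot) compaction levels,
# LZ4 in the middle, ZSTD on the coldest bottom level.
rocksdb_default_cf_options=compression_per_level=kNoCompression:kNoCompression:kNoCompression:kLZ4Compression:kLZ4Compression:kLZ4Compression:kZSTDCompression
```

Leaving hot levels uncompressed trades a little disk for cheaper compactions, while the bottom level, holding most of the data, gets the strongest compression.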
Today we’re happy to announce that we’ll be partnering with Datavail to provide solutions for continuous & highly available MySQL, Percona Server & MariaDB database operations based on Tungsten Clustering & Datavail Database Services.
Datavail is a renowned, tech-enabled data management, applications, business intelligence, and software solutions provider with a team of 700+ DBAs that look after customers’ database environments.
What are we aiming for?
Together we’re looking to continue to drive momentum in supporting rapid MySQL & MariaDB based application deployments as well as highly available and scalable database implementations for existing and future customers.
This new …[Read more]