Displaying posts with tag: Percona
A year in MySQL Blogging – top blogs, summary and review

The year 2023 surely was a successful one for MySQL blogging. I managed to publish 24 MySQL blog posts in total, across both my personal blog and the Percona blog. This post is a reflection…

The post A year in MySQL Blogging – top blogs, summary and review first appeared on Change Is Inevitable.

Avoiding a STOP SLAVE Crash with MTR in Percona Server older than 5.7.37-40

I am finalizing my Percona Live talk MySQL and Vitess (and Kubernetes) at HubSpot.  In this talk, I mention that I like that Percona provides a better MySQL with Percona Server.  This comes with a little inconvenience, though: with improvements sometimes come regressions.  This post is about one such regression and a workaround I implemented some time ago (I should have shared it …

MySQL “No space left on device from storage engine”

We had planned to archive data in order to improve DB performance and reclaim space. To find the best compression method, we were evaluating compression in InnoDB and TokuDB, and we started benchmarking the compression ratio between the two engines.

Everything went well for some time, but after a few hours we got an error saying the data could not be inserted into the TokuDB table because storage was full. This was strange, since the host had plenty of free space.
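Before looking at the actual table definitions below, here is a minimal sketch of how such a comparison can be set up and measured (the table names, compression settings, and size query are illustrative assumptions, not the exact steps from this post):

-- InnoDB candidate: compressed row format
CREATE TABLE mydbops.innodb_compressed (
  `ID` int DEFAULT NULL,
  `Name` longtext,
  `Image` blob
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

-- TokuDB candidate: zlib compression selected per table
CREATE TABLE mydbops.tokudb_zlib (
  `ID` int DEFAULT NULL,
  `Name` longtext,
  `Image` blob
) ENGINE=TokuDB ROW_FORMAT=TOKUDB_ZLIB;

-- After loading the same data set into both, compare the reported footprint
SELECT table_name, engine,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
  FROM information_schema.tables
 WHERE table_schema = 'mydbops';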


Table structure:

mysql> show create table mydbops.tokudb\G
*************************** 1. row ***************************
       Table: tokudb
Create Table: CREATE TABLE `tokudb` (
  `ID` int DEFAULT NULL,
  `Name` longtext,
  `Image` blob
) ENGINE=TokuDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
1 row in set (2.18 sec)

mysql> show create table mydbops.innodb\G
*************************** 1. row …
[Read more]
Handling MySQL case sensitive column in pt-archiver

To copy the data of a particular column of a table to another table/server, one option is to export the data as CSV and import it back into a different table. But when the table is large and we only need to copy the required data to the target table, this approach causes load on the server, since the table scan is huge.

To overcome this, we have pt-archiver, which can copy the data from the source table to the destination either as a whole or only for the required columns. We can also do this in a controlled manner, so there is no performance impact even during production hours.
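As a rough sketch of such a controlled, column-wise copy (host names, schema, columns, and the WHERE condition are placeholders, not the exact command from this post):

pt-archiver \
  --source h=source-host,D=mydbops,t=source \
  --dest   h=dest-host,D=mydbops,t=destination \
  --columns id,Name \
  --where "id < 100000" \
  --limit 1000 --txn-size 1000 \
  --no-delete --progress 10000 --statistics

Here --no-delete keeps the rows on the source, while --limit and --txn-size keep each chunk small so production traffic is not affected.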

Source table structure:

mysql> show create table source\G
*************************** 1. row ***************************
       Table: source
Create Table: CREATE TABLE `source` (
  `id` int unsigned NOT NULL …
[Read more]
MySQL GTID Replication and lower_case_table_names

Error 'Table 'EMPLOYEES.POSITION' doesn't exist' on query. Default database: 'employees'. Query: 'ALTER TABLE EMPLOYEES.POSITION ADD COLUMN phone VARCHAR(15)'. Interestingly, the table exists on the slave server. But we kept getting the above error and could not easily rebuild the broken replication, because the database size is too big.

Our environment is a GTID replication setup from a Windows server (master) to an Ubuntu Linux machine (slave). When we dug into all the findings, we concluded that it may be a case-sensitivity issue. The lower_case_table_names variable has the same value on both servers. But as per the MySQL documentation, database and table names are not case-sensitive on Windows, but are case-sensitive on most varieties of Unix. Column, index, stored routine, and event names are …
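For reference, the variable in question can be checked on both sides; the meaning of its values below is taken from the MySQL documentation (a minimal sketch, not output from this environment):

mysql> SHOW VARIABLES LIKE 'lower_case_table_names';
-- 0: names stored as given, comparisons are case-sensitive (default on Unix)
-- 1: names stored in lowercase, comparisons are not case-sensitive (default on Windows)
-- 2: names stored as given, but compared in lowercase (default on macOS)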

[Read more]
Sensitive Data Cleaning with MasKING

Wow!!! It is now easy to restore sensitive data without any fear!!!

It is really tough sometimes: we once restored sensitive data without knowing it, and test mails were triggered to customers saying "$100 deducted from your account for purchase". It is a strange scenario when we miss cleansing the customer data in the DEV sandbox!!!

Yes, masking sensitive/credential data is easy now!!! Reference: https://github.com/kibitan/masking

I just tried a simple exercise of masking the paymentdb table data with masking, and it works as expected.

Step 1: Install the latest Ruby version and masking using the commands below. Before installing, update the server with the latest packages.

rvm install ruby-2.6.3
gem install masking
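The remaining steps are behind the read-more link; as a hedged sketch of the general workflow described in the project README (the database and output file names are assumptions), the tool reads a mysqldump stream and rewrites the columns defined in a masking.yml file:

# assumed workflow, to be verified against the masking README
mysqldump --complete-insert paymentdb | masking > masked_paymentdb.sql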

[Read more]
Repair GTID Based Slave on Percona Cluster


Problem:

We are running a 5-node Percona cluster on Ubuntu 16.04, and it is configured with master-slave replication. Suddenly we got a broken-replica alert from the slave server, which had earlier been configured with normal replication.

We tried to sync the data and reconfigure the replication, but were unable to fix it immediately due to the huge number of transactions and the GTID-enabled servers. So we decided to go with the innobackupex tool, and the problem was fixed in 2 hours.

We followed all the steps from the Percona documentation and are sharing the experience from my environment.

Steps involved in repairing the broken replication:

1. Back up the master server
2. Prepare the backup
3. Restore and configure the replication
4. Check the replication status

1. Backup …
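The full walkthrough is behind the read-more link; a condensed sketch of those four steps with innobackupex (user names, passwords, paths, hosts, and the GTID set are placeholders, not the exact commands from this post):

# 1. Back up the master
innobackupex --user=backup --password=xxxx /backups/

# 2. Prepare the backup
innobackupex --apply-log /backups/<timestamp>/

# 3. Restore: copy the prepared backup into the slave datadir, start MySQL,
#    then point replication at the GTID set recorded in xtrabackup_binlog_info
cat /backups/<timestamp>/xtrabackup_binlog_info

mysql> RESET MASTER;
mysql> SET GLOBAL gtid_purged='<gtid set from xtrabackup_binlog_info>';
mysql> CHANGE MASTER TO MASTER_HOST='master-host', MASTER_USER='repl',
                        MASTER_PASSWORD='xxxx', MASTER_AUTO_POSITION=1;
mysql> START SLAVE;

# 4. Check the replication status
mysql> SHOW SLAVE STATUS\G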

[Read more]
MySQL, MariaDB & Friends DevRoom CfP is now open!

Good news! Once again, the MySQL, MariaDB & Friends devroom has been accepted for the FOSDEM’20 edition!!

This event is a real success story for the MySQL ecosystem; the content, the speakers and the attendees are growing every year.

FOSDEM 2020 will take place on 1st & 2nd February in Brussels, and our MySQL, MariaDB & Friends devroom will run on Saturday the 1st (this may change). FOSDEM & MySQL/MariaDB is a love story that started 20 years ago!

The committee selecting the content for our devroom has not yet been formed, and if you want to be part of this experience, just send me an email (candidate at mysqlmariadbandfriends dot eu) before Oct 26th.

If you want to join the committee, you have to meet the following conditions:

  • planning to be present at FOSDEM
  • having a link …
[Read more]
Our recap of the Percona Live Conference in Austin

We were pleased to sponsor the Percona Live Conference in Austin this year: many thanks to the Percona Team for organising a smooth conference yet again!

This is the recap of our week in Texas!

At The Conference

This year’s conference was the first one not taking place in Santa Clara, CA, but rather in Austin, TX. This turned out to be a nice choice by Percona, as it meant that open source database users who may not have travelled to California in the past were attracted to the new location; and with Austin being the new hot spot for (tech) companies at the moment, a lot of “locals” seemed to have chosen to attend the conference. It was great to meet many new faces as a result.

As Diamond Sponsors of the conference we were of course present with a booth in the exhibition hall, as well as with three talks.

And while the hotel looked slightly dystopian at night, it was in fact a nice and …

[Read more]
Exposing MyRocks Internals Via System Variables: Part 7, Use Case Considerations

(In the previous post, Part 6, we covered Replication.)

In this final blog post, we conclude our series of exploring MyRocks by taking a look at use case considerations. After all, having knowledge of how an engine works is really only applicable if you feel like you’re in a good position to use it.

Advantages of MyRocks

Let’s start by talking about some of the advantages of MyRocks.

Compression

MyRocks will typically do a good job of reducing the physical footprint of your data. As I mentioned in my previous post in this series about compression, you have the ability to configure compression down to the individual compaction layers for each column family. You also get the advantage of the fact that data isn’t updated once it’s written to disk. Compaction, which was …
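As a rough illustration of that per-level control (the column family name and the algorithms chosen for each level are made-up values, and the server build must include the corresponding compression libraries; this is not a recommendation from this series), it can be expressed through the column family options in my.cnf:

[mysqld]
rocksdb_override_cf_options='cf_assumed={compression_per_level=kNoCompression:kNoCompression:kLZ4Compression:kLZ4Compression:kZSTD;bottommost_compression=kZSTD}'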

[Read more]