Displaying posts with tag: mysql-and-variants
How To Inject an Empty XA Transaction in MySQL

If you are using XA transactions, you’ve likely run into a few replication issues with two-phase commits (2PC). Here is a common error we see in Percona’s Managed Services, and a few ways to handle it, including injecting an empty XA transaction.

Last_Error: Error 'XAER_NOTA: Unknown XID' on query. Default database: 'punisher'. Query: 'XA COMMIT X'1a',X'a1',1'

What Does it Mean?

It means that replication has tried to commit an XID (XA transaction ID) that does not exist on the server. We can verify that it does not exist by checking:

replica1 [localhost:20002] {msandbox} ((none)) > XA RECOVER CONVERT XID;
+----------+--------------+--------------+--------+
| formatID | gtrid_length | bqual_length | data   |
+----------+--------------+--------------+--------+
|        1 |            1 |            1 | 0x2BB2 | …
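The fix the title refers to is essentially to recreate the missing XID as an empty prepared transaction on the replica, so the replicated XA COMMIT has something to commit. A minimal sketch, using the XID from the error above (run on the replica; verify state before and after):

-- Sketch only: inject an empty XA transaction with the missing XID.
STOP REPLICA;                  -- STOP SLAVE on versions before 8.0.22
SET SESSION sql_log_bin = 0;   -- keep the injection out of the replica's binlog
XA START   X'1a',X'a1',1;
XA END     X'1a',X'a1',1;
XA PREPARE X'1a',X'a1',1;      -- leaves an empty transaction in PREPARED state
-- Disconnect this session (the prepared XA transaction survives), then:
START REPLICA;                 -- the failing XA COMMIT can now find its XID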
[Read more]
How to Upgrade to MySQL 8.0 – Free Course at Percona University Online

The MySQL 8.0 General Availability release was launched in April 2018, and since then ten versions of MySQL 8 and Percona Server for MySQL have been released. The MySQL Community has spoken highly of MySQL 8.0’s advantages, and a lot of databases have been successfully upgraded to the new version. But many of them have yet to be upgraded.

Percona has prepared a free course, “How to Upgrade to MySQL 8.0”, to help you with this task.

It is a series of useful videos, each 3-4 minutes long. At the end of the course, you can take a quiz and get a certificate.

Follow the link to take the course:  https://classroom.google.com/c/MTM2MDIyNDIzMDQy?cjc=zjsst4l

You can also join the course manually: open Google Classroom, click “Join class”, and enter the class code “zjsst4l”. …

[Read more]
Data Consistency for RDS for MySQL: The 8.0 Version

In a previous blog post on Data Consistency for RDS for MySQL, we presented a workaround for running pt-table-checksum on RDS instances. However, if your instance is running a MySQL 8.0.x version, there’s a simpler way to check data consistency.

Starting with 8.0.1, MySQL introduced “Dynamic Privileges”, a way to grant more granular privileges to users instead of the almighty SUPER privilege.

So what was the issue with pt-table-checksum and RDS again? Since there’s no SUPER privilege for any user, there was no way for the tool to change binlog_format to STATEMENT… but not anymore.

The solution when using 8.0 is …
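The post’s exact steps are truncated above, but as a hedged sketch of what dynamic privileges enable (the user name is hypothetical; SESSION_VARIABLES_ADMIN exists as of 8.0.14):

-- Sketch: grant only the dynamic privilege the tool needs, not SUPER.
GRANT SESSION_VARIABLES_ADMIN ON *.* TO 'ptchecksum'@'%';
-- pt-table-checksum can then switch the format for its own session only:
SET SESSION binlog_format = 'STATEMENT';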

[Read more]
How Much Memory Does the Process Really Take on Linux?

One of the questions you will often face when operating a Linux-based system is managing the memory budget. If a program uses more memory than is available, you may see swapping, oftentimes with a terrible performance impact, or the Out of Memory (OOM) Killer may be activated, killing the process altogether.

Before adjusting memory usage, either by configuration, optimization, or just managing the load, it helps to know how much memory a given program really uses.

If your system runs essentially a single user program (there is always a bunch of system processes), it is easy. For example, if I run a dedicated MySQL server on a system with 128GB of RAM, I can use “used” as a good proxy for what is used and “available” for what can still be used.

root@rocky:/mnt/data2/mysql# free -h …
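The output is truncated above. When the host runs many processes, per-process numbers are needed instead; a hedged sketch of one way to get them (smaps_rollup requires kernel 4.14+):

# Resident (Rss) and proportional (Pss) set size for the MySQL server process
pid=$(pidof mysqld)
grep -E '^(Rss|Pss):' /proc/$pid/smaps_rollup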
[Read more]
Checking Data Consistency for RDS for MySQL

RDS for MySQL, and DBaaS offerings in general, are environments tightly controlled by the vendors, meaning some things are missing, such as a SUPER grant for the root user (and any user in general). This has some implications for operations, one of them being the impossibility of running pt-table-checksum to verify data consistency between a primary and its replicas.

However, there’s a workaround that might overcome this situation and involves three things:

  • The pt-table-checksum tool itself
  • A way to collect the executed queries
  • And the last one, which can be controversial: removing read-only mode from the replica and using a maintenance window to stop traffic to the database while pt-table-checksum runs.

The problem with RDS is that you cannot change binlog_format to STATEMENT, which is one of the requirements for pt-table-checksum to run.
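To make the blocker concrete, this is the session change the tool attempts, and the kind of error a non-SUPER user gets (message abbreviated):

SET SESSION binlog_format = 'STATEMENT';
-- ERROR 1227 (42000): Access denied; you need (at least one of) the SUPER privilege(s) for this operation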

The workaround consists of capturing the executed …

[Read more]
ProxySQL Binary Search Solution for Rules

We sometimes receive challenging requests… this is a story about one of those times.

The customer had implemented a sharding solution and wanted us to review alternatives or improvements. We analyzed the possibility of using ProxySQL, as it looked like a simple implementation. However, as there were 200 shards, we had to implement 200 rules: a query for the first shard didn’t have much overhead, but one for the last shard had to go through all 200 rules and took longer.

My first idea was to use FLAGIN and FLAGOUT to create a B-tree of rules, but the performance was the same. Reviewing the code, I realized that the rules are implemented as a list, which means that, in the end, all the rules are processed until the right one is hit, and FLAGIN is used just to filter rules out.
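For reference, that first idea looks roughly like this in the ProxySQL admin interface (rule IDs, patterns, and hostgroups are invented); it narrows which rules can match, but they are still evaluated sequentially:

-- One coarse rule tags a subset of queries via flagOUT...
INSERT INTO mysql_query_rules (rule_id, active, match_pattern, flagOUT, apply)
VALUES (1, 1, 'shard_1\d\d', 100, 0);
-- ...and FLAGIN-scoped rules match only within that subset.
INSERT INTO mysql_query_rules (rule_id, active, flagIN, match_pattern, destination_hostgroup, apply)
VALUES (2, 1, 100, 'shard_142', 142, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;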

At that point, I asked, what could I do? Is it possible to implement it differently? What is the performance impact?

One Problem, Two Solutions

I …

[Read more]
ProxySQL Overhead — Explained and Measured

ProxySQL brings a lot of value to your MySQL infrastructure, such as caching or connection multiplexing, but it does not come for free: your database traffic needs to go through additional processing, which adds some overhead. In this blog post, we’re going to discuss where this overhead comes from and measure it.

Types of Overhead and Where it Comes From 

There are two main types of overhead to consider when it comes to ProxySQL — Network Overhead and Processing Overhead. 

Network Overhead largely depends on where you locate ProxySQL. For example, if you deploy ProxySQL on a separate host (or hosts), as in this diagram:

[diagram not included in this excerpt: application → ProxySQL (separate host) → MySQL]

The application will have added network latency for all requests, compared to …
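A hedged way to measure the combined overhead yourself (hostnames invented; 6033 is ProxySQL’s default MySQL listener): run the same workload directly against MySQL and again through the proxy, and compare latencies:

# Direct to MySQL
sysbench oltp_point_select --mysql-host=10.0.0.5 --mysql-port=3306 \
  --mysql-user=sbtest --mysql-password=... --tables=1 --table-size=100000 run
# Same workload through ProxySQL
sysbench oltp_point_select --mysql-host=10.0.0.6 --mysql-port=6033 \
  --mysql-user=sbtest --mysql-password=... --tables=1 --table-size=100000 run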

[Read more]
More on Checkpoints in InnoDB MySQL 8

Recently I posted about checkpointing in MySQL, where MySQL showed interesting “wave” behavior.

Soon after, Dimitri posted a solution for how to fix the “waves,” and I would like to dig a little deeper into the proposed suggestions, as there is some material to process.

This post will be very heavy on InnoDB configuration, so let’s start with the basic configuration for MySQL. But before that, some notes on the initial environment.

I use MySQL version 8.0.21 on the hardware described here.

As for storage, I am not using some “old dusty SSD” but a production-available, enterprise-grade Intel SATA SSD …
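The excerpt is truncated here. For orientation only (values are illustrative, not the post’s), the dynamic InnoDB variables that usually shape checkpoint and flushing behavior are:

SET GLOBAL innodb_io_capacity       = 2000;  -- baseline background flushing rate
SET GLOBAL innodb_io_capacity_max   = 4000;  -- ceiling for aggressive flushing
SET GLOBAL innodb_adaptive_flushing = ON;    -- pace flushing to redo generation
-- innodb_log_file_size (redo capacity) also matters, but is not dynamic in 8.0.21.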

[Read more]
MySQL 8.0.19 InnoDB ReplicaSet Configuration and Manual Switchover

InnoDB ReplicaSet was introduced in MySQL 8.0.19. It is based on MySQL asynchronous replication. Generally, InnoDB ReplicaSet does not provide high availability on its own the way InnoDB Cluster does, because with InnoDB ReplicaSet you need to perform failover manually. AdminAPI includes support for InnoDB ReplicaSet, so we can operate an InnoDB ReplicaSet using MySQL Shell.

  • InnoDB Cluster is the combination of MySQL Shell, Group Replication, and MySQL Router
  • InnoDB ReplicaSet is the combination of MySQL Shell, traditional MySQL asynchronous replication, and MySQL Router

Why InnoDB ReplicaSet?

  • You can manually perform switchover and failover with InnoDB ReplicaSet (see the sketch after this list)
  • You can easily add a new node to your replication environment. InnoDB ReplicaSet helps with data provisioning (using the MySQL Clone plugin) and with setting up replication.
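A minimal MySQL Shell (JS mode) sketch of those operations, with hypothetical hostnames:

\connect root@primary.example.com:3306
var rs = dba.createReplicaSet("rs1")                 // adopts the current server as PRIMARY
rs.addInstance("replica1.example.com:3306")          // provisions data via the Clone plugin
rs.status()                                          // check replication health
rs.setPrimaryInstance("replica1.example.com:3306")   // manual switchover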

In this …

[Read more]
Creating an External Replica of AWS Aurora MySQL with Mydumper

Oftentimes, we need to replicate between Amazon Aurora and an external MySQL server. The idea is to start by taking a point-in-time copy of the dataset. Next, we can configure MySQL replication to roll it forward and keep the data up to date.

This process is documented by Amazon; however, it relies on mysqldump to create the initial copy of the data. If the dataset is in the high GB/TB range, this single-threaded method can take a very long time. There are similar opportunities to improve the import phase (which can easily take 2x the time of the export).

Let’s explore some tricks to significantly improve the speed of this process.

Preparation Steps

The first step is to enable binary logs in Aurora. Go to the cluster-level parameter group and make sure binlog_format …
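The excerpt is truncated here; as a hedged sketch of the multi-threaded copy the title refers to (host names, credentials, and paths are invented):

# Parallel export from Aurora with mydumper
mydumper --host aurora-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com \
  --user repl --ask-password --threads 8 --outputdir /backups/aurora
# Parallel import into the external replica with myloader
myloader --host external-replica.example.com --user root --ask-password \
  --threads 8 --directory /backups/aurora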

[Read more]