Displaying posts with tag: mysql-and-variants
How Much Memory Does the Process Really Take on Linux?

One of the questions you will often face when operating a Linux-based system is how to manage the memory budget. If a program uses more memory than is available, you may get swapping, often with a terrible performance impact, or have the Out of Memory (OOM) Killer activated, killing the process altogether.

Before adjusting memory usage, either by configuration, optimization, or just managing the load, it helps to know how much memory a given program really uses.

If your system essentially runs a single user program (there is always a bunch of system processes), it is easy. For example, if I run a dedicated MySQL server on a system with 128GB of RAM, I can use “used” as a good proxy for what is used and “available” for what can still be used.

root@rocky:/mnt/data2/mysql# free -h …
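When you need a per-process view rather than the system-wide one, /proc exposes the raw numbers. Here is a minimal sketch (assuming a single mysqld process; the field selection is mine, not from the post):

# Resident set size of the (placeholder) mysqld process
pid=$(pidof mysqld)
grep VmRSS "/proc/$pid/status"
# smaps_rollup (Linux 4.14+) adds Pss, which splits shared pages among the processes using them
grep -E 'Rss|Pss|Swap' "/proc/$pid/smaps_rollup"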
[Read more]
Checking Data Consistency for RDS for MySQL

MySQL for RDS, and DBaaS offerings in general, are environments tightly controlled by the vendors, meaning that some things are missing, such as a SUPER grant for the root user (and for any user in general). This has some implications for operations, one of them being that it is impossible to run pt-table-checksum to verify data consistency between a primary and its replicas.

However, there is a workaround that can overcome this situation, and it involves three things:

  • pt-table-checksum itself
  • A way to collect executed queries
  • And the last one, which can be controversial: removing read-only from the replica and using a maintenance window to stop traffic to the database while pt-table-checksum runs.

The problem with RDS is that you cannot change binlog_format to STATEMENT, which is one of the requirements for pt-table-checksum to run.
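For reference, you can confirm these settings from any client session; the endpoint and credentials below are placeholders:

# Check binlog_format (and, on the replicas, read_only) before attempting a run
mysql -h my-rds-primary.xxxxx.us-east-1.rds.amazonaws.com -u admin -p \
      -e "SELECT @@global.binlog_format, @@global.read_only;"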

The workaround consists of capturing the executed …

[Read more]
ProxySQL Binary Search Solution for Rules

We sometimes receive challenging requests… this is a story about one of those times.

The customer had implemented a sharding solution and would like us to review alternatives or improvements. We analyzed the possibility of using ProxySQL, as it looked like a simple implementation. However, since there were 200 shards we had to implement 200 rules — queries for the first shard didn’t incur much overhead, but queries for the last shard had to go through all 200 rules and took longer.

My first idea was to use FLAGIN and FLAGOUT to create a B-tree of rules, but the performance was the same. Reviewing the code, I realized that the rules are implemented as a list, which means that, in the end, all the rules are processed until the right one is hit, and FLAGIN is used just to filter rules out.
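For context, this is roughly how a FLAGIN/FLAGOUT chain is expressed through the ProxySQL admin interface; rule IDs, patterns, and hostgroups below are simplified placeholders, not the customer’s actual rules:

# Connect to the ProxySQL admin interface (default port 6032) and chain two rules:
# rule 10 tags shard traffic with flagOUT=100, rule 20 picks it up via flagIN=100.
mysql -h 127.0.0.1 -P 6032 -u admin -padmin -e "
INSERT INTO mysql_query_rules (rule_id, active, match_pattern, flagOUT, apply)
VALUES (10, 1, '^SELECT .* FROM shard_1[0-9][0-9]', 100, 0);
INSERT INTO mysql_query_rules (rule_id, active, flagIN, match_pattern, destination_hostgroup, apply)
VALUES (20, 1, 100, 'FROM shard_101', 101, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;"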

At that point, I asked, what could I do? Is it possible to implement it differently? What is the performance impact?

One Problem, Two Solutions

I …

[Read more]
ProxySQL Overhead — Explained and Measured

ProxySQL brings a lot of value to your MySQL infrastructure, such as caching or connection multiplexing, but it does not come for free — your database traffic needs to go through additional processing, which adds some overhead. In this blog post, we’re going to discuss where this overhead comes from and measure it.

Types of Overhead and Where It Comes From

There are two main types of overhead to consider when it comes to ProxySQL — Network Overhead and Processing Overhead. 

Network Overhead largely depends on where you locate ProxySQL. For example, if you deploy ProxySQL on a separate host (or hosts), as in this diagram:

The application will have added network latency for all requests, compared to …
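As a rough illustration only (this is not the post’s benchmark), you can get a feel for the added round-trip cost by timing a trivial query sent directly to MySQL versus through ProxySQL; hostnames, ports, and credentials are placeholders:

# 1000 connect + "SELECT 1" round trips straight to MySQL...
time for i in $(seq 1 1000); do
  mysql -h db-host -P 3306 -u app -psecret -e "SELECT 1" >/dev/null
done
# ...and the same through ProxySQL (default traffic port 6033)
time for i in $(seq 1 1000); do
  mysql -h proxysql-host -P 6033 -u app -psecret -e "SELECT 1" >/dev/null
done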

[Read more]
More on Checkpoints in InnoDB MySQL 8

Recently I posted about checkpointing in MySQL, where MySQL showed interesting “wave” behavior.

Soon after, Dimitri posted a solution for how to fix the “waves,” and I would like to dig a little deeper into the proposed suggestions, as there is some material to process.

This post will be very heavy on InnoDB configuration, so let’s start with the basic configuration for MySQL, but before that, a few notes on the environment.

I use MySQL version 8.0.21 on the hardware as described here.

As for the storage, I am not using some “old dusty SSD,” but a production-available, enterprise-grade Intel SATA SSD …
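The full configuration is in the article itself; purely as an illustration of the knobs involved, checkpoint-related InnoDB settings tend to look something like this (the values are placeholders, not the author’s):

# Append a generic, illustrative InnoDB section to my.cnf
cat >> /etc/my.cnf <<'EOF'
[mysqld]
innodb_buffer_pool_size   = 96G
innodb_log_file_size      = 16G
innodb_log_files_in_group = 2
innodb_io_capacity        = 2000
innodb_io_capacity_max    = 4000
innodb_adaptive_flushing  = ON
innodb_flush_neighbors    = 0
EOF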

[Read more]
MySQL 8.0.19 InnoDB ReplicaSet Configuration and Manual Switchover

InnoDB ReplicaSet was introduced in MySQL 8.0.19. It is based on MySQL asynchronous replication. Unlike InnoDB Cluster, InnoDB ReplicaSet generally does not provide high availability on its own, because failover has to be performed manually. AdminAPI includes support for InnoDB ReplicaSet, so we can operate an InnoDB ReplicaSet using MySQL Shell.

  • InnoDB Cluster is the combination of MySQL Shell, Group Replication, and MySQL Router
  • InnoDB ReplicaSet is the combination of MySQL Shell, traditional MySQL asynchronous replication, and MySQL Router

Why InnoDB ReplicaSet?

  • You can manually perform switchover and failover with InnoDB ReplicaSet
  • You can easily add a new node to your replication environment. InnoDB ReplicaSet helps with data provisioning (using the MySQL clone plugin) and setting up replication, as sketched below.
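As a rough sketch of what that looks like in MySQL Shell (hostnames and credentials are placeholders, and each instance must first be prepared with dba.configureReplicaSetInstance()):

# Create the ReplicaSet and add a replica provisioned via the clone plugin
mysqlsh --js root@primary-host:3306 -e '
  var rs = dba.createReplicaSet("rs1");
  rs.addInstance("replica-host:3306", {recoveryMethod: "clone"});
'
# Later, perform a manual switchover to the replica
mysqlsh --js root@primary-host:3306 -e '
  var rs = dba.getReplicaSet();
  rs.setPrimaryInstance("replica-host:3306");
'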

In this …

[Read more]
Creating an External Replica of AWS Aurora MySQL with Mydumper

Oftentimes, we need to replicate between Amazon Aurora and an external MySQL server. The idea is to start by taking a point-in-time copy of the dataset. Next, we can configure MySQL replication to roll it forward and keep the data up-to-date.

This process is documented by Amazon; however, it relies on mysqldump to create the initial copy of the data. If the dataset is in the high GB/TB range, this single-threaded method can take a very long time. Similarly, there are ways to improve the import phase (which can easily take twice as long as the export).

Let’s explore some tricks to significantly improve the speed of this process.
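For reference, the multi-threaded dump and restore with mydumper/myloader looks roughly like this (the endpoint, credentials, thread counts, and paths are placeholders):

# Parallel, compressed export from the Aurora endpoint;
# mydumper also records the binlog coordinates needed for replication in its 'metadata' file
mydumper --host=my-aurora.cluster-xxxxx.us-east-1.rds.amazonaws.com \
         --user=admin --password='***' \
         --threads=8 --compress \
         --outputdir=/backups/aurora-dump

# Parallel import into the external MySQL server
myloader --host=external-replica --user=root --password='***' \
         --threads=8 --overwrite-tables \
         --directory=/backups/aurora-dump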

Preparation Steps

The first step is to enable binary logs in Aurora. Go to the Cluster-level parameter group and make sure binlog_format

[Read more]
Streaming Percona XtraBackup for MySQL to Multiple Destinations

Have you ever had to provision a large number of instances from a single backup? The most common use case is having to move to new hardware, but there are other scenarios as well. This kind of procedure can involve multiple backup/restore operations, which can easily become a pain to administer. Let’s look at a potential way to make it easier using Percona XtraBackup. Percona XtraBackup provides a method of performing fast and reliable backups of your MySQL data while the system is running.

Leveraging Named Pipes

As per the Linux manual page, a FIFO special file (a named pipe) is similar to a pipe except that it is accessed as part of the filesystem. It can be opened by multiple processes for reading or writing.
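One possible shape of such a setup (not necessarily the exact commands from the post; hostnames and paths are placeholders):

# Create the named pipe
mkfifo /tmp/xb.fifo

# Fan the stream out to two destination hosts at once (datadirs must be empty and MySQL stopped)
cat /tmp/xb.fifo \
  | tee >(ssh replica1 "xbstream -x -C /var/lib/mysql") \
  | ssh replica2 "xbstream -x -C /var/lib/mysql" &

# Take the backup once and stream it into the pipe
xtrabackup --backup --stream=xbstream --target-dir=/tmp > /tmp/xb.fifo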

For this …

[Read more]
Aligning Percona XtraBackup Versions with Percona Server for MySQL

We are excited to let you know about two upcoming changes to Percona XtraBackup which will align Percona XtraBackup versions with Percona Server for MySQL. These changes bring Percona XtraBackup release naming in line with Percona Server and MySQL, and ensure Percona XtraBackup executes complete backups.

The first is a change to the naming structure of releases.  This change is something we believe will help when working with Percona products and is designed to position Percona XtraBackup to be in sync with the current release naming strategy of Percona Server for MySQL and Percona XtraDB Cluster.

The current naming structure of …

[Read more]
Deadlock Troubleshooting in Percona Server for MySQL 5.7

Disclaimer: the following script only works with Percona Server for MySQL 5.7, relies on enabling Performance Schema (PS) instrumentation, which can add overhead on highly concurrent systems, and is not intended for continuous production use, as it is a POC (proof of concept).

Introduction

In Percona Support, we frequently receive tickets related to deadlocks, and even though the deadlock concept is simple, troubleshooting one is not always straightforward.

As explained in How to Deal with MySQL Deadlocks, a deadlock occurs when two or more transactions mutually hold and request locks, creating a cycle of dependencies. MySQL detects deadlocks and kills one of the transactions (rolling it back), and the deadlock is printed in SEIS (SHOW ENGINE INNODB STATUS). Limitations of using this approach …
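As a minimal sketch of the idea (this is not the author’s POC script), you can pull the most recent deadlock out of SEIS and make sure the statement-history consumers are enabled so the involved queries can be traced back through the Performance Schema:

# Print only the "LATEST DETECTED DEADLOCK" section of SHOW ENGINE INNODB STATUS
mysql -e "SHOW ENGINE INNODB STATUS\G" \
  | sed -n '/LATEST DETECTED DEADLOCK/,/^TRANSACTIONS$/p'

# Enable statement history consumers in performance_schema (5.7)
mysql -e "UPDATE performance_schema.setup_consumers
          SET ENABLED = 'YES'
          WHERE NAME LIKE 'events_statements_history%';"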

[Read more]