MySQL 8.0 and newer change and improve how we measure and monitor replication lag. Even though multi-threaded replication (MTR) has been on by default for the last three years (since v8.0.27, released October 2021), the industry has been steeped in single-threaded replication for nearly 30 years. As a result, replication lag with MTR is a complicated topic because it depends on version, configuration, and more. This three-part series provides a detailed understanding, starting from what was originally an unrelated feature: binary log group commit.
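As a quick aside, a minimal way to see which of these knobs are in effect is to query the relevant system variables directly (a sketch only; the replica_* names apply to MySQL 8.0.26 and newer, and the group commit delay is read on the source):

-- On the replica: multi-threaded replication settings (MySQL 8.0.26+ variable names)
SELECT @@replica_parallel_workers,
       @@replica_parallel_type,
       @@replica_preserve_commit_order;
-- On the source: binary log group commit sync delay (0 by default)
SELECT @@binlog_group_commit_sync_delay;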
Just thought I’d share a script I use daily that helps me redirect my attention when needed.
This is merely a pointer, guideline, and starting point for any task. I just thought I’d share it and hope someone else’s day becomes slightly easier thanks to some brief investigation and command tweaking.
Now, the really handy thing here is that I only hard-code the router01 node name, as I’m using that as a potential endpoint (thinking cloud, XaaS, etc.) where it could also be a VIP, LBR, or similar. It’s the entry point from which I query the P_S error_log table to get different views and act accordingly (see the sketch after the example below).
For example:
- First, give me the ordered InnoDB Cluster server list so I can take a step back from my usual pains and worries and see the architecture view. And make me type “Y” or similar to move on. Here, if any server were missing, I’d see it in the summary right away, so I don’t really need to …
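To give an idea of the kind of query the script runs against that entry point, here is a minimal sketch against performance_schema.error_log (the priority filter and LIMIT are illustrative choices, not necessarily what the script uses):

-- Most recent error-log entries, queried through the entry point
SELECT logged, prio, error_code, subsystem, data
  FROM performance_schema.error_log
 WHERE prio IN ('System', 'Error', 'Warning')
 ORDER BY logged DESC
 LIMIT 10;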
Replication has been the core functionality enabling high availability in MySQL for decades. However, you may still encounter replication errors that keep you awake at night. One of the most common and challenging to deal with starts with: “Got fatal error 1236 from source when reading data from binary log”. This blog post is […]
Replication being slow—replication lag—is a common complaint, but MySQL replication is actually really fast. Let’s run a controlled experiment and peek inside the Performance Schema and binary logs to see why.
TL;DR: Make sure to run “SET PERSIST_ONLY disabled_storage_engines='MyISAM', PERSIST sql_generate_invisible_primary_key=ON;” on all instances in your MySQL InnoDB Cluster and restart each one.
OK, what does “safe from naughtiness” mean?
– Anyone creating tables that aren’t InnoDB, as this doesn’t make sense; after all, it is an “InnoDB” cluster.
– Making sure all tables have a Primary Key (invisible or not).
– Making sure that my (invisible) primary keys are visible to the cluster, as it will rightfully complain if they aren’t!
This basically means that once you’ve got it all up and running, you won’t run into those horrible situations whereby someone, somewhere, creates a MyISAM table without a Primary Key and thus leaves you with a broken cluster.
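A quick, illustrative way to confirm the effect on any instance once the TL;DR settings are in place (the table name t1 is a placeholder, and the exact error text may vary slightly by version):

-- MyISAM tables are now rejected outright
CREATE TABLE t1 (c1 INT) ENGINE=MyISAM;
-- ERROR 3161 (HY000): Storage engine MyISAM is disabled (Table creation is disallowed).

-- A table created without an explicit primary key gets an invisible one generated
CREATE TABLE t1 (c1 INT);
SHOW CREATE TABLE t1\G
-- The definition now includes an invisible my_row_id column declared as the PRIMARY KEY.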
E.g., checking the cluster status:
MySQL rtnode-01:3306 ssl JS > vlc.status()
{
"clusterName": "VLC",
"clusterRole": "PRIMARY", …
[Read more]
Maintaining a production dataset at a manageable size can be a considerable challenge when administering a MySQL InnoDB Cluster.
Old Days
Back in the day, when we only had one main copy of our data (the source) and one read copy (the replica) that we used to look at current and old data from our main system, we used a special trick to remove data without affecting the replica. The trick was to turn off writes to the binary log for our removal commands on the main system. External tools like pt-archiver were also able to use that trick. To bypass writing into the binary log, we used the command:
SET SQL_LOG_BIN=0;
This meant that on the main production server (the replication source), we were purging the data without writing the delete operations into the binary logs.
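Put together, a minimal sketch of such a purge session looks like this (the table name and retention window are hypothetical, and the batching is just one reasonable choice):

-- Disable binary logging for this session only, so the deletes are not replicated
SET SESSION SQL_LOG_BIN=0;
-- Purge old rows in batches to keep each transaction small; repeat until no rows match
DELETE FROM app.events
 WHERE created_at < NOW() - INTERVAL 90 DAY
 LIMIT 10000;
-- Re-enable binary logging for this session
SET SESSION SQL_LOG_BIN=1;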
Current Days
These …
[Read more]
Some time ago, we saw how we could deploy WordPress on OCI using MySQL HeatWave Database Service with Read Replicas. We had to modify WordPress to use a specific plugin that configures the Read/Write Splitting on the application (WordPress): LudicrousDB.
Today, we will not modify WordPress to split the Read and Write operations, but we will use MySQL Router 8.2.0 (see [1], [2], [3]).
Architecture
The …
[Read more]
As a MySQL database administrator, you’re likely familiar with the SHOW REPLICA STATUS command. It is an important command for monitoring replication status on your MySQL replicas. However, its output can be overwhelming for beginners, especially regarding the binary log coordinates. I have seen confusion among new DBAs about which binary log file and position represents what in replication.
In this guide, we’ll simplify the SHOW REPLICA STATUS output, focusing on the critical binary log coordinates essential for troubleshooting and managing replication.
The key binlog coordinates
Before we delve into the output, let’s understand the key binlog coordinates we’ll be working with:
- Master_Log_File: This is the name of the binary log file on the source (primary) that the I/O thread is currently reading from.
- Read_Master_Log_Pos: It represents the …
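For reference, here is a heavily trimmed, purely illustrative slice of the output, showing only coordinate fields with made-up values (note that SHOW REPLICA STATUS on MySQL 8.0.22 and later reports these fields with Source_* rather than Master_* names):

SHOW REPLICA STATUS\G
             Source_Log_File: binlog.000142
         Read_Source_Log_Pos: 193456281
              Relay_Log_File: relay-bin.000318
               Relay_Log_Pos: 412873
       Relay_Source_Log_File: binlog.000142
         Exec_Source_Log_Pos: 193455010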