When trying to understand queries in the slow log, an interesting metric to look at is rows examined. Also, when trying to understand CPU spikes on a MySQL instance, the InnoDB Metric dml_reads and the global status Innodb_rows_read are interesting to check. In a similar way, when trying to gather information about which queries are loading the system, SUM_ROWS_EXAMINED from the
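For reference, here is a hedged sketch of how one could read these counters; the commands and tables are standard MySQL ones, and the digest query assumes the usual performance_schema.events_statements_summary_by_digest table, which is where SUM_ROWS_EXAMINED is exposed:

-- Server-wide, cumulative count of rows read by InnoDB.
SHOW GLOBAL STATUS LIKE 'Innodb_rows_read';

-- The dml_reads InnoDB metric; the STATUS column shows whether it is enabled.
SELECT NAME, SUBSYSTEM, `COUNT`, STATUS
FROM information_schema.INNODB_METRICS
WHERE NAME = 'dml_reads';

-- Top query digests by rows examined, from the Performance Schema.
SELECT DIGEST_TEXT, COUNT_STAR, SUM_ROWS_EXAMINED
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_ROWS_EXAMINED DESC
LIMIT 10;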
Sometimes, the MySQL Optimizer chooses a wrong plan, and a query that should execute in less than 0.1 second ends up running for 12 minutes ! This is not a new problem: bugs about this can be traced back to 2014, and a blog post on the subject dates from 2015. But even if this is old news, this problem recently came to my attention, so it is worth writing about.
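As a quick illustration of the debugging workflow (the table and index names below are made up for the example, they are not from the post), EXPLAIN shows the plan the optimizer picked, and an index hint can serve as a stopgap while the root cause is investigated:

-- Hypothetical query, just to show the steps.
EXPLAIN SELECT * FROM orders WHERE customer_id = 42 ORDER BY created_at DESC LIMIT 10;

-- If the optimizer picked the wrong index, forcing the expected one is a
-- possible workaround (not a root-cause fix).
SELECT * FROM orders FORCE INDEX (idx_customer_created)
WHERE customer_id = 42 ORDER BY created_at DESC LIMIT 10;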
This
In my previous post (Puzzled by MySQL Replication), I described a weird, but completely documented, behavior of replication that had me scratching my head for hours because it was causing data corruption. I did not give too many details then, as I also wanted to let you scratch your head if you wished. In this post, I describe this behavior in more detail.
But first I need to
Recently, I was puzzled by MySQL replication ! Some weird, but completely documented, behavior of replication had me scratching my head for hours. I am sharing this war story so you can avoid losing time like I did (and maybe also avoid corrupting your data when restoring a backup). The full explanation will come in a follow-up post, so you can also scratch your head trying
I am currently running tests with sysbench and dbdeployer on a Google Cloud Platform Debian 9 instance. I ran into an interesting limit and lifting it was not straightforward. I hope that by sharing this, you can avoid losing too much time.
The benchmark I am using is the insert benchmark. I am able to run it with 2048 threads but I am not able to run it with 4096 threads (and yes, it makes
War story of the day: do not use — or be very careful when using — the max_connections beta database flag on CloudSQL... because it has many bugs.
I was hit by this today: we set the max_connections flag to 8000 on a primary server a few days ago, we had a failover last night, and the flag was not set on the replica (bug #1).
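Nothing CloudSQL-specific is needed to spot this kind of discrepancy: comparing the standard MySQL variable on both instances after a failover is enough.

-- Run on the new primary and on the replica, and compare.
SHOW GLOBAL VARIABLES LIKE 'max_connections';

-- How close the instance currently is to the limit.
SHOW GLOBAL STATUS LIKE 'Threads_connected';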
Update 2019-05-03: there is another bug (bug #1.5). If you set the
A few days ago, I stepped into a trap ! This made me lose time fixing things (and even more writing this post...). In the hope that you will avoid my mistake, I am sharing this war story. I also obviously opened a bug; more about this below.
TL;DR: be careful when using CHANGE MASTER TO MASTER_DELAY = N: it might wipe your relay logs !
As written in the TL;DR, running CHANGE MASTER
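Here is a minimal sketch of the operation, with a look at the relay log state before and after; the one-hour delay is just an example value:

-- Before: check how much relay log is downloaded but not yet applied
-- (compare Read_Master_Log_Pos with Exec_Master_Log_Pos, and look at Relay_Log_Space),
-- so you know what would be discarded if the relay logs are wiped.
SHOW SLAVE STATUS\G

STOP SLAVE;
CHANGE MASTER TO MASTER_DELAY = 3600;  -- one hour, example value
START SLAVE;

-- After: check the relay log state again.
SHOW SLAVE STATUS\G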
TL;DR: unless you know what you are doing, you should always have a primary key on your tables when replicating in RBR (and maybe even all the time).
TL;DR2: MariaDB 10.1 has an interesting way to protect against missing a primary key (innodb_force_primary_key) but it could be improved.
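A short illustration of the MariaDB global variable, with a made-up table for the example:

-- MariaDB 10.1+: refuse to create tables without a primary key.
SET GLOBAL innodb_force_primary_key = ON;

-- This now fails instead of silently creating a table without a primary key.
CREATE TABLE test.no_pk (id INT, payload VARCHAR(100)) ENGINE=InnoDB;

-- Adding a primary key makes it pass.
CREATE TABLE test.with_pk (id INT PRIMARY KEY, payload VARCHAR(100)) ENGINE=InnoDB;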
A few weeks ago, I was called off-hours because replication delay on all the slaves of a replication chain
Something interesting happened to me in the last few days, and it is worth sharing. I was upgrading MariaDB (MySQL is also impacted) to a new major version, and mysql_upgrade showed something like this:
[...]
Phase 4/7: Running 'mysql_fix_privilege_tables'
ERROR 1062 (23000) at line 586: Duplicate entry 'schema-table_name#P#partition_name_truncated' for key 'PRIMARY'
ERROR 1062 (23000) at line 590:
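If, as I assume here, the duplicates come from long partitioned-table names being truncated in the InnoDB persistent statistics tables (this is an assumption on my part, not something confirmed by the error alone), a check like the one below can reveal colliding rows before re-running mysql_upgrade; adapt the table and the 64-character limit to your case:

-- Hypothetical check: find table_name values that collide once truncated
-- to 64 characters (assumption about the cause of the duplicate entry).
SELECT database_name, LEFT(table_name, 64) AS truncated_name, COUNT(*) AS nb
FROM mysql.innodb_table_stats
GROUP BY database_name, truncated_name
HAVING nb > 1;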
This post is part of the series "please do not ignore warnings in MySQL/MariaDB". The previous post of the series can be found here.
In this post, I will present why ignoring warnings made me lose time when upgrading MariaDB Server. I think this war story is entertaining to read, and it is also worth presenting to people who claim that ignoring warnings is no big deal.
A few months ago, I was in