In a MySQL 5.7 master-slave setup that uses the default semisynchronous replication setting for rpl_semi_sync_master_wait_point, a crash of the master and failover to the slave is considered lossless. However, when the crashed master comes back, you may find that it has transactions that are not present in the current master (which was previously a slave). This behavior may be puzzling, given that semisynchronous replication is supposed to be lossless, but it is actually expected in MySQL. Why exactly this happens is explained in full detail in the …
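For reference (not from the original post), a quick way to check which wait point a 5.7 master is using, assuming the semisync plugin is installed; AFTER_SYNC is the default "lossless" setting, while AFTER_COMMIT is the pre-5.7 behavior:

mysql> SHOW GLOBAL VARIABLES LIKE 'rpl_semi_sync_master_wait_point';
+---------------------------------+------------+
| Variable_name                   | Value      |
+---------------------------------+------------+
| rpl_semi_sync_master_wait_point | AFTER_SYNC |
+---------------------------------+------------+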
Please join Percona’s Principal Support Escalation Specialist Sveta Smirnova as she presents Troubleshooting Best Practices: Monitoring the Production Database Without Killing Performance on Wednesday, June 27th at 11:00 AM PDT (UTC-7) / 2:00 PM EDT (UTC-4).
During the MySQL Troubleshooting webinar series, I covered many monitoring and logging tools (see the short example after this list), such as:
- General, slow, audit, binary, error log files
- Performance Schema
- Information Schema
- System …
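As a minimal illustration of low-overhead monitoring (illustrative settings, not taken from the webinar), the slow query log can be enabled dynamically with a threshold high enough to catch only genuinely slow queries:

mysql> SET GLOBAL slow_query_log = ON;
mysql> SET GLOBAL long_query_time = 5;  -- only log queries slower than 5 seconds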
In my previous post, I wrote that Write Set is present not only in MySQL 8.0 but also in MySQL 5.7, though a little hidden. In this post, I describe Write Set in 5.7, and this will bring us into the inner workings of Group Replication. I am also using this opportunity to explain and show why members of a group can replicate faster than a standard slave. We will also see the impacts, on Group Replication,
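Where it hides in 5.7 (a quick check, assuming a stock server): the extraction algorithm is exposed through the transaction_write_set_extraction variable, which Group Replication relies on and requires to be XXHASH64:

mysql> SHOW GLOBAL VARIABLES LIKE 'transaction_write_set_extraction';
+----------------------------------+-------+
| Variable_name                    | Value |
+----------------------------------+-------+
| transaction_write_set_extraction | OFF   |
+----------------------------------+-------+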
After upgrading some of our slaves to the latest 5.7, I have found what looks like a serious regression introduced in
A couple of weeks ago I noticed that the error log file of one of our clusters, where I had implemented my in-place transparent compression of binary logs, was literally flooded with the following error:
[ERROR] Binlog has bad magic number; It's not a binary log file that can be used by this version of MySQL
In the above setup this is a harmless error, and it should only happen at server startup, when mysqld opens and reads all available binary log files. The error is due to the fact that, since the files are now compressed, mysqld doesn't recognize them as valid - not an issue, as only older files are compressed, and only after …
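To see why mysqld complains, one can check the 4-byte magic number (0xfe 'b' 'i' 'n') at the start of a binlog file; a compressed file no longer starts with these bytes (hypothetical path and file name):

# head -c4 /var/lib/mysql/binlog.000001 | xxd
00000000: fe62 696e                                .bin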
TL;DR: unless you know what you are doing, you should always have a primary key on your tables when replicating in RBR (and maybe even all the time).
TL;DR2: MariaDB 10.1 has an interesting way to protect against missing a primary key (innodb_force_primary_key) but it could be improved.
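As a minimal sketch of what TL;DR2 refers to (MariaDB 10.1+, hypothetical table name), enabling the option makes the server reject tables created without a primary key, with an error along these lines:

MariaDB> SET GLOBAL innodb_force_primary_key = ON;
MariaDB> CREATE TABLE t (a INT) ENGINE=InnoDB;
ERROR 1173 (42000): This table type requires a primary key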
A few weeks ago, I was called off hours because replication delay on all the slaves from a replication chain
MySQL 8.0.1 is out and it includes an implementation of my feature request (Bug #77438). This extension to RESET MASTER makes it possible to simplify master promotion with Binlog Servers. Let's see how it works:
# mysql -N <<< "SHOW MASTER STATUS"
binlog.027892	3006935
# mysql -N <<< "RESET MASTER TO 12345; DO sleep(rand()*10); SHOW MASTER STATUS"
binlog.012345	92773
# mysql -N <<< "RESET MASTER TO
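In a master promotion, this makes it possible to initialize the candidate master so that it continues the binlog file sequence of the failed one, for example (hypothetical index continuing the numbers above):

# mysql -N <<< "RESET MASTER TO 27893; SHOW MASTER STATUS"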
Any DBA who has administered a busy master knows how fast the disk space occupied by binary logs may grow. DBAs have no control over this: the growth depends on the workload, and the workload depends on many factors, e.g.:
- application changes (the applications start writing more due to code changes)
- traffic changes (the peak season arrives, your workload doubles in size)
- infrastructure changes (the devops add more servers)
- business changes (new business flows add to the existing workload)
So either you have been thoughtful and have planned in advance for a large enough storage space (to handle the increase in the number of binary logs), or, sooner or later, you will face the usual dilemma - how many retention days do you dare to give up to accommodate the binlog growth?
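The usual lever for that dilemma is the retention setting itself (standard MySQL 5.7 syntax, not specific to this post): shorten the expiry window and purge accordingly:

mysql> SET GLOBAL expire_logs_days = 7;
mysql> PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;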
In my case, I was very thoughtful, but the boss didn't listen and gave me servers with very limited binlog storage space and, more importantly, …
Another day at the office...
"Whoa, the write workload on our statistical cluster has suddendly increased by 20% and the filesystem that holds the binary logs is no longer large enough".
Of course, I had warned the boss about this possibility when I received those servers with that tiny 250G filesystem for binlogs, but my red flag was just ignored as usual.
So here we are: presto, I get this shiny new 600G LUN, but we need to stop the damn MySQL server in order to repoint the log_bin variable to the new storage area.
Dunno about you, but the idea of waking up at 2am just to perform a variable change is not something that makes me particularly happy. Not to mention the maintenance window that is needed around it…
So, I decided to investigate the possibilities of making such a change without stopping the service.
As we all know, the log_bin …
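For context (standard variables, not the workaround this post builds up to), checking where the binary logs currently live is easy; the catch is that log_bin itself is read-only at runtime:

mysql> SELECT @@log_bin, @@log_bin_basename;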
Reminder: MTS = Multi-Threaded Slave.
Update 2017-04-17: since the publication of this post, many things have happened:
- the procedure for fixing a crashed slave has been automated (Bug#77496),
- Bug#80103 has been closed at the same time as Bug#77496, but I still think there are unfixed things; see Bug#81840.
End of update 2017-04-17.
I will be talking about parallel replication at FOSDEM in Brussels on
A common way to implement point-in-time recovery capability is:

a. to regularly do a full backup of a database, and
b. to save the binary logs of that database (or from its master if doing backups on a slave).

When point-in-time recovery is required, you need to:

1. restore a backup, and
2. apply the binary logs up to the point of recovery.

(Steps # 2 and # b above are the ones that will be simplified
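For reference, a classic implementation of the restore-and-replay steps looks roughly like this (hypothetical file names and timestamp): after restoring the full backup with your backup tool of choice, the binary logs are replayed up to just before the failure:

# mysqlbinlog --stop-datetime="2017-05-01 09:59:00" binlog.000042 binlog.000043 | mysql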