A year ago, I blogged about An Unprivileged User can Crash your MySQL Server. At the time, I presented how to protect yourself against this problem without explaining how to generate a crash. In this post, I am revisiting this vulnerability, not giving the exploit yet, but presenting the fix. Also, because the default configuration of Group Replication in 5.7 is still vulnerable (it is not in …
A few weeks ago, on MySQL 5.7, I had an ALTER TABLE that failed with a duplicate entry error. This is old news, as it has been happening since MySQL 5.6, but I only saw it recently because I normally use online schema change from the Percona Toolkit (pt-osc) or GitHub's online schema migration (gh-ost). I do not like this behavior and I am disappointed it has not been improved, so this post is …
Yes, you read the title correctly: an unprivileged user can crash your MySQL Server. This applies to the default configuration of MySQL 8.0.21 (and it is probably the case for all MySQL 8 GA versions). Depending on your configuration, it might also be the case for MySQL 5.7. This needs malicious intent and a lot of determination, so no need to panic as this will not happen by accident. I am …
In this blog post, we'll look at whether the Meltdown fix affects MySQL performance on bare metal servers.
Since the news about the Meltdown bug broke, there have been a lot of reports on the performance hit from the proposed fixes. We have looked at how the fix affects MySQL (Percona Server for MySQL) under a sysbench workload (a typical invocation is sketched after the hardware list below).
In this case, we used bare metal boxes with the following specifications:
- Two-socket Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz (in total 56 entries in /proc/cpuinfo)
- Ubuntu 16.04
- Memory: 256GB
- Storage: Samsung SM863 1.9TB SATA SSD
- Percona Server for MySQL 5.7.20
- Kernel (vulnerable) 4.13.0-21
- Kernel (with Meltdown fix) 4.13.0-25
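The excerpt does not include the exact workload parameters; a typical sysbench 1.0 invocation for this kind of before/after comparison might look like the following (table count, table size, duration, and connection settings are hypothetical; 56 threads matches the CPU count above):
# Prepare the test tables, then run the OLTP read/write workload:
sysbench oltp_read_write --mysql-user=sbtest --mysql-password=... \
  --tables=16 --table-size=10000000 --threads=56 --time=300 prepare
sysbench oltp_read_write --mysql-user=sbtest --mysql-password=... \
  --tables=16 --table-size=10000000 --threads=56 --time=300 run
Running the same command on both kernels and comparing throughput and latency is the usual way to isolate the cost of the fix.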
Please note, the current kernel for Ubuntu 16.04 contains only a Meltdown fix, …
[Read more]
After upgrading some of our slaves to the latest 5.7, I have found what looks like a serious regression introduced in MySQL 5.7.
A couple of weeks ago I noticed that the error log file of one of our clusters, where I had implemented my in-place transparent compression of binary logs, was literally flooded with the following error:
[ERROR] Binlog has bad magic number; It's not a binary log
file that can be used by this version of MySQL
In the above setup this is a harmless error, and it should only happen at server startup, when mysqld opens and reads all available binary log files. The error is due to the fact that, since the files are now compressed, mysqld doesn't recognize them as valid - not an issue, as only older files are compressed, and only after …
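The excerpt is cut off here, but it is easy to confirm why mysqld complains: a real binary log starts with the 4-byte magic 0xfe 'b' 'i' 'n', while a gzip-compressed file starts with 0x1f 0x8b. A quick check, with a hypothetical file name:
# Inspect the first four bytes of the file:
head -c 4 mysql-bin.000123 | xxd
# A valid binlog prints:          fe62 696e   (.bin)
# A gzip-compressed one starts:   1f8b ....   (1f 8b is the gzip magic)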
Here's something that has puzzled me for several weeks.
Right after migrating MySQL from 5.6 to 5.7, we started
experiencing random xtrabackup failures on some, but not all, of
our slaves.
The failures only happened when taking an incremental backup, and they would always occur on the same table on each slave, with errors similar to the following:
171106 13:00:33 [01] Streaming ./gls/C_GLS_IDS_AUX.ibd
InnoDB: 262144 bytes should have been read. Only 114688 bytes
read. Retrying for the remaining bytes.
InnoDB: 262144 bytes should have been read. Only 114688 bytes
read. Retrying for the remaining bytes.
InnoDB: 262144 bytes should have been read. Only 114688 bytes
read. Retrying for the remaining bytes.
InnoDB: 262144 bytes should have been read. Only 114688 bytes
read. Retrying for the remaining bytes.
InnoDB: 262144 bytes should have been read. Only 114688 bytes
read. Retrying for …
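The excerpt does not show the backup invocation itself; for context, a typical incremental run with XtraBackup 2.4 looks roughly like this (paths are hypothetical):
# Take a full base backup first:
xtrabackup --backup --target-dir=/backups/base
# Then take an incremental against it:
xtrabackup --backup --incremental-basedir=/backups/base \
  --target-dir=/backups/inc1
The errors above were appearing during the incremental step, while streaming the affected .ibd file.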
Long time no post.... :-)
Here's something interesting.
Last week I decided to give MySQL 5.7 a try (yes, I am a kinda conservative DBA...) and the very same day that I installed my first 5.7 replica I noticed that, after changing my own password on the 5.6 master, I could no longer connect to the 5.7 slave.
Very annoying, to say the least! So I went and dug out the root password (which we do not normally use) and when I connected to the slave I was surprised to see that my password's hash on the 5.7 slave was different from the hash on the 5.6 master. No wonder I couldn't connect....
A bit of research in the MySQL documentation and I understood that 5.7 introduced a few changes in the way you work with users' passwords. SET PASSWORD is now deprecated in favour of ALTER USER: see the MySQL 5.7 Reference Manual …
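For reference, the two syntaxes side by side (account name and password are hypothetical):
-- Deprecated pre-5.7 style:
SET PASSWORD FOR 'app_user'@'%' = PASSWORD('new_secret');
-- Preferred from 5.7 onwards:
ALTER USER 'app_user'@'%' IDENTIFIED BY 'new_secret';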
Sometimes MySQL surprises you in ways you would have never
imagined.
Would you think that the order in which the indexes appear in a
table matters?
It does. Mind you, not the order of the columns - the order of
the indexes.
The MySQL optimizer can, in specific circumstances, take different paths, sometimes with nefarious effects.
Please consider the following table:
CREATE TABLE `mypartitionedtable` (
  `HASH_ID` char(64) NOT NULL,
  `RAW_DATA` mediumblob NOT NULL,
  `EXPIRE_DATE` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  KEY `EXPIRE_DATE_IX` (`EXPIRE_DATE`),
  KEY `HASH_ID_IX` (`HASH_ID`)
) ENGINE=TokuDB DEFAULT CHARSET=latin1 ROW_FORMAT=TOKUDB_UNCOMPRESSED
/*!50100 PARTITION BY RANGE (UNIX_TIMESTAMP(EXPIRE_DATE))
(PARTITION p2005 VALUES LESS THAN (1487847600) ENGINE = …
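The DDL is cut off above, but one hedged way to observe the behavior the post describes, using a hypothetical expiry-sweep query, is to compare the optimizer's choice before and after swapping the two KEY definitions:
-- Check which index the optimizer picks for the sweep:
EXPLAIN SELECT COUNT(*) FROM mypartitionedtable
WHERE EXPIRE_DATE < NOW() - INTERVAL 30 DAY;
-- Rebuild the table with HASH_ID_IX declared before EXPIRE_DATE_IX,
-- run the same EXPLAIN, and compare the "key" column.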
If you are using Percona XtraBackup with xbcrypt to create encrypted backups, and are using versions older than 2.3.6 or 2.4.5, we advise that you upgrade Percona XtraBackup.
Note: this does not affect encryption …
[Read more]
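For context, encrypted backups of this kind are typically taken and decrypted as follows (key file and paths are hypothetical):
# Take an encrypted backup:
xtrabackup --backup --target-dir=/backups/full \
  --encrypt=AES256 --encrypt-key-file=/etc/mysql/backup.key
# Decrypt a single file with xbcrypt (reads stdin, writes stdout):
xbcrypt --decrypt --encrypt-algo=AES256 \
  --encrypt-key-file=/etc/mysql/backup.key < ibdata1.xbcrypt > ibdata1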
The details on this issue are here:
https://github.com/facebook/mysql-5.6/issues/369
This test is very simple. I loaded the SSB (star schema
benchmark) data for scale factor 20 (12GB raw data), added
indexes, and tried to count the rows in the table.
After loading data and creating indexes, the .rocksdb data
directory is 17GB in size.
A full table scan "count(*)" query takes less than four minutes,
sometimes reading over 1M rows per second, but when scanning the
index to accomplish the same count, the database can only scan
around 2000 rows per second. The four minute query would take an
estimated 1000 minutes, a 250x difference.
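The excerpt does not include the queries themselves; a sketch of the comparison, assuming the standard SSB lineorder fact table and a hypothetical index name, would be:
-- Full table scan: finishes in under four minutes.
SELECT COUNT(*) FROM lineorder;
-- Force a secondary index scan: ~2000 rows/s, est. 1000 minutes.
SELECT COUNT(*) FROM lineorder FORCE INDEX (lo_orderdate_ix);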
I have eliminated the type of CRC32 function (SSE vs non-SSE) as a factor by patching the code to force the hardware SSE function.
There seem to be problems with any queries …