Displaying posts with tag: bug
Add Custom HTTP Basic Auth Entries to iCloud Keychain

While automatic filling of HTTP basic auth credentials works fine on the Mac, I have had significant trouble getting it to work on iOS devices. This is especially unfortunate, because while on the road I sometimes need to have a look at monitoring systems that have HTTP basic authentication enabled and that use long, complex passwords. Should be easy with iCloud Keychain, right? Yeah...

Opening the respective site in iOS Safari pops up the basic auth credentials dialog. It also shows the little key icon in the keyboard toolbar, which gives access to iCloud Keychain entries. However, while on the Mac the username and password fields are correctly populated, on iOS you cannot even see the respective entry.

It turns out (and I assume this is a bug in iOS Safari) that the type of keychain entries shown is limited to "Web form password". Safari on the Mac stores the entry as "Internet Password" though, as can be seen in the Keychain Access …

[Read more]
Does the Meltdown Fix Affect Performance for MySQL on Bare Metal?

In this blog post, we’ll look at whether the Meltdown fix affects performance for MySQL on bare metal servers.

Since the news about the Meltdown bug broke, there have been a lot of reports on the performance hit from the proposed fixes. We have looked at how the fix affects MySQL (Percona Server for MySQL) under a sysbench workload.

In this case, we used bare metal boxes with the following specifications:

  • Two-socket Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz (in total 56 entries in /proc/cpuinfo)
  • Ubuntu 16.04
  • Memory: 256GB
  • Storage: Samsung SM863 1.9TB SATA SSD
  • Percona Server for MySQL 5.7.20
  • Kernel (vulnerable) 4.13.0-21
  • Kernel (with Meltdown fix) 4.13.0-25

Please note that the current kernel for Ubuntu 16.04 contains only a Meltdown fix, …

[Read more]
MySQL 5.7 reads all your binlog files more often than you think

After upgrading some of our slaves to the latest 5.7, I found what looks like a serious regression introduced in MySQL 5.7.
A couple of weeks ago I noticed that the error log file of one of our clusters, where I had implemented my in-place transparent compression of binary logs, was literally flooded with the following error:

[ERROR] Binlog has bad magic number;  It's not a binary log file that can be used by this version of MySQL

In the above setup this is a harmless error, and it should only happen at server startup, when mysqld opens and reads all available binary log files. The error is due to the fact that since the files are now compressed, mysqld doesn't recognize them as valid - not an issue, as only older files are compressed, and only after …
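
For context, the list of binary log files that mysqld walks through at startup comes from the binlog index file; you can see the same list from a client with:

SHOW BINARY LOGS;

Per the excerpt above, any file in that list that has already been compressed is the one triggering the "bad magic number" message when the server opens it.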

[Read more]
When xtrabackup fails with 'read' returned OS error 0 after upgrading to 5.7

Here's something that has puzzled me for several weeks.
Right after migrating MySQL from 5.6 to 5.7, we started experiencing random xtrabackup failures on some, but not all, of our slaves.
The failures only happened when taking an incremental backup, and the backup would always fail on the same table on each slave, with errors similar to the following:

171106 13:00:33 [01] Streaming ./gls/C_GLS_IDS_AUX.ibd
InnoDB: 262144 bytes should have been read. Only 114688 bytes read. Retrying for the remaining bytes.
InnoDB: 262144 bytes should have been read. Only 114688 bytes read. Retrying for the remaining bytes.
InnoDB: 262144 bytes should have been read. Only 114688 bytes read. Retrying for the remaining bytes.
InnoDB: 262144 bytes should have been read. Only 114688 bytes read. Retrying for the remaining bytes.
InnoDB: 262144 bytes should have been read. Only 114688 bytes read. Retrying for …
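
The excerpt is cut off here, but as a generic, hedged check rather than the post's actual diagnosis: since the error is about reading fewer bytes than expected, it can be useful to compare the size InnoDB expects for the tablespace with what is actually allocated on disk, which MySQL 5.7 exposes via INNODB_SYS_TABLESPACES:

-- inspect the expected vs. allocated size of the failing tablespace
SELECT NAME, FILE_SIZE, ALLOCATED_SIZE
  FROM information_schema.INNODB_SYS_TABLESPACES
 WHERE NAME = 'gls/C_GLS_IDS_AUX';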

[Read more]
SET PASSWORD will lock you out of your slave in a mixed 5.6/5.7 MySQL topology

Long time no post.... :-)
Here's something interesting.

Last week I decided to give MySQL 5.7 a try (yes, I am kind of a conservative DBA...) and the very same day that I installed my first 5.7 replica I noticed that, after changing my own password on the 5.6 master, I could no longer connect to the 5.7 slave.

Very annoying, to say the least! So I went and dug out the root password (which we do not normally use), and when I connected to the slave I was surprised to see that my password's hash on the 5.7 slave was different from the hash on the 5.6 master. No wonder I couldn't connect....

A bit of research in the MySQL documentation and I understood that 5.7 introduced a few changes in the way you work with users' passwords. SET PASSWORD is now deprecated in favour of ALTER USER: see MySQL 5.7 Reference Manual …
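
For reference, a minimal sketch of the syntax change (the account name below is made up):

-- deprecated as of MySQL 5.7
SET PASSWORD FOR 'app_user'@'%' = PASSWORD('new_password');

-- preferred in MySQL 5.7
ALTER USER 'app_user'@'%' IDENTIFIED BY 'new_password';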

[Read more]
When order of appearance of indexes matters in MySQL

Sometimes MySQL surprises you in ways you would have never imagined.

Would you think that the order in which the indexes appear in a table matters?
It does. Mind you, not the order of the columns - the order of the indexes.
The MySQL optimizer can, in specific circumstances, take different paths, sometimes with nefarious effects.


Please consider the following table:

CREATE TABLE `mypartitionedtable` (
  `HASH_ID` char(64) NOT NULL,
  `RAW_DATA` mediumblob NOT NULL,
  `EXPIRE_DATE` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  KEY `EXPIRE_DATE_IX` (`EXPIRE_DATE`),
  KEY `HASH_ID_IX` (`HASH_ID`)
) ENGINE=TokuDB DEFAULT CHARSET=latin1 ROW_FORMAT=TOKUDB_UNCOMPRESSED
/*!50100 PARTITION BY RANGE (UNIX_TIMESTAMP(EXPIRE_DATE))
(PARTITION p2005 VALUES LESS THAN (1487847600) ENGINE = …
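
The definition is truncated here, but as a hedged sketch of what "order of the indexes" means in practice (the query below is illustrative, not taken from the post): one way to change the listed order is to drop and re-add a key, and EXPLAIN lets you compare which index the optimizer picks before and after:

-- move EXPIRE_DATE_IX after HASH_ID_IX in the table definition
ALTER TABLE mypartitionedtable DROP KEY EXPIRE_DATE_IX;
ALTER TABLE mypartitionedtable ADD KEY EXPIRE_DATE_IX (EXPIRE_DATE);

-- check which key the optimizer now chooses (illustrative query)
EXPLAIN SELECT RAW_DATA
  FROM mypartitionedtable
 WHERE HASH_ID = 'some_hash'
   AND EXPIRE_DATE > NOW();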

[Read more]
CVE-2016-6225: Percona Xtrabackup Encryption IV Not Being Set Properly

If you are using Percona XtraBackup with xbcrypt to create encrypted backups, and are using versions older than 2.3.6 or 2.4.5, we advise that you upgrade Percona XtraBackup.

Note: this does not affect encryption …

[Read more]
MyRocks has some strange performance issues for index scans

The details on this issue are here:
https://github.com/facebook/mysql-5.6/issues/369

This test is very simple. I loaded the SSB (star schema benchmark) data for scale factor 20 (12GB raw data), added indexes, and tried to count the rows in the table.

After loading data and creating indexes, the .rocksdb data directory is 17GB in size.

A full table scan "count(*)" query takes less than four minutes, sometimes reading over 1M rows per second, but when scanning the index to accomplish the same count, the database can only scan around 2,000 rows per second. At that rate, the same count via the index scan would take an estimated 1,000 minutes, a 250x difference.
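
As a hedged sketch of the two queries being compared (the SSB table and index names below are assumptions, not taken from the linked issue):

-- count via a full table scan
SELECT COUNT(*) FROM lineorder;

-- the same count, forced through a secondary index scan
SELECT COUNT(*) FROM lineorder FORCE INDEX (lo_orderdate_idx);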

I have ruled out the CRC32 implementation (SSE vs. non-SSE) as the cause by patching the code to force the hardware SSE function.

There seem to be problems with any queries …

[Read more]
If enforcing data retention via DROP PARTITION, better watch your auto_increment counter

One of the main tasks that any DBA has to deal with is certainly data retention, especially when your developers like to store logging and tracking information inside the database. This is not something that I would recommend (don't try it at home!), but when you're late to the party (read: someone took this path before you joined), then you'll have to live with it.

Data retention in MySQL is usually implemented by partitioning the table by RANGE on a selected column, usually one containing the row creation timestamp. New partitions are created in advance for the coming days, weeks or months, and a script runs nightly and drops the partitions that are older than the set retention period.
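
As a minimal sketch of that pattern (schema, table, and partition names here are illustrative), the nightly job boils down to a partition drop, and per the post's title it is worth checking the auto_increment counter around it:

-- check the counter before (and again after) the partition drop
SELECT AUTO_INCREMENT
  FROM information_schema.TABLES
 WHERE TABLE_SCHEMA = 'mydb'
   AND TABLE_NAME = 'tracking_log';

-- drop the partition that has aged out of the retention window
ALTER TABLE mydb.tracking_log DROP PARTITION p2017_01;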

There are many tools that can be used to automate data retention; I have chosen the excellent pdb-parted, a nice and cozy Perl script that you can find in the PalominoDB repository on GitHub (since PalominoDB is no longer in existence, …

[Read more]