MyRocks: migrating a large MySQL dataset from InnoDB to RocksDB to reduce footprint

I have been following Facebook's MyRocks project (and Mark Callaghan's blog) for a long time. An LSM-based storage engine for MySQL is a genuinely great idea.
We all know that InnoDB sucks at INSERTs. B-trees in general suck at insertion speed, and the more rows you insert, the worse they get. There are many blog posts on the web that show how InnoDB insert speed degrades as the number of rows in the table grows. Things degrade even faster if your primary key is a random key, for example a UUID.
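(A quick illustrative sketch of my own, not from the original post, showing the sequential-vs-random primary key difference in InnoDB; table and column names are made up:)

    -- Sequential key: every insert appends to the right-most page of the
    -- clustered index, so the hot working set stays small.
    CREATE TABLE t_seq (
      id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
      payload VARCHAR(255)
    ) ENGINE=InnoDB;

    -- Random key: every insert lands on an effectively random leaf page,
    -- so the working set grows with the table and page splits pile up.
    CREATE TABLE t_uuid (
      id CHAR(36) NOT NULL PRIMARY KEY,
      payload VARCHAR(255)
    ) ENGINE=InnoDB;

    INSERT INTO t_seq (payload) VALUES ('row');
    INSERT INTO t_uuid (id, payload) VALUES (UUID(), 'row');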
We hit this problem with our caching servers (yes, we do caching with MySQL!), and in order to be able to scale these servers up we moved a couple of years ago to the TokuDB engine, with great success. TokuDB is based on fractal tree technology and guarantees the same insert speed no matter how many rows the table holds; furthermore, it …

[Read more]
MyRocks has some strange performance issues for index scans

The details on this issue are here:
https://github.com/facebook/mysql-5.6/issues/369

This test is very simple. I loaded the SSB (star schema benchmark) data for scale factor 20 (12GB raw data), added indexes, and tried to count the rows in the table.

After loading data and creating indexes, the .rocksdb data directory is 17GB in size.

A full table scan "count(*)" query takes less than four minutes, at times reading over 1M rows per second, but when scanning an index to compute the same count, the database can only read around 2,000 rows per second. At that rate, the four-minute query would take an estimated 1,000 minutes, a 250x difference.
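For reference, the two counts can be expressed roughly as follows. The table name comes from the standard SSB schema, but the index name here is my assumption, not taken from the bug report:

    -- Full scan of the SSB fact table: finishes in under four minutes.
    SELECT COUNT(*) FROM lineorder;

    -- Counting via a secondary index instead: ~2,000 rows/s in this test.
    -- (lo_orderdate_idx is a hypothetical index name)
    SELECT COUNT(*) FROM lineorder FORCE INDEX (lo_orderdate_idx);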

I have ruled out the choice of CRC32 function (SSE vs non-SSE) as the cause by patching the code to force the hardware SSE implementation.

There seem to be problems with any queries …

[Read more]
RocksDB doesn't support large transactions very well

So I tried to do my first set of benchmarks and testing on RocksDB today, but I ran into a problem and had to file a bug:
https://github.com/facebook/mysql-5.6/issues/365

MySQL @ Facebook RocksDB appears to store at least 2x the volume of changes in a transaction. I don't know how much space the row plus overhead takes in each transaction, so as an approximation I'm just going to say 2x the raw size of the data changed in the transaction. I am not sure how this works for updates either, that is, whether old/new row information is maintained. If old-row data is maintained, then for a pure update workload you would need 4x the RAM for the given transactional changes. My bulk load was 12GB of raw data, so it failed, as I have only 12GB of RAM in my test system.
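Independent of the configuration workaround mentioned below, a generic way to sidestep the memory blow-up is to split the bulk load into many smaller transactions, so that no single transaction has to buffer gigabytes of changes. A minimal sketch of mine, with made-up file names:

    -- Commit after each pre-split chunk of the input file instead of
    -- loading all 12GB in one transaction (file names are hypothetical;
    -- SSB .tbl files are pipe-delimited).
    SET autocommit = 0;
    LOAD DATA INFILE '/data/ssb/lineorder.part01.tbl'
      INTO TABLE lineorder FIELDS TERMINATED BY '|';
    COMMIT;
    LOAD DATA INFILE '/data/ssb/lineorder.part02.tbl'
      INTO TABLE lineorder FIELDS TERMINATED BY '|';
    COMMIT;
    -- ...repeat for the remaining chunks.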

The workaround (as suggested in the bug) is to set two configuration …

[Read more]
Blog Series: MySQL Configuration Management

MySQL configuration management remains a hot topic, as I’ve noticed on numerous occasions during my conversations with customers.

I thought it might be a good idea to start a blog series that goes into more detail on some of the different options, and on which modules might potentially be used for managing your MySQL database infrastructure.

Configuration management has been around since long before the beginning of my professional career. I myself originally started out integrating an infrastructure with my colleagues using Puppet.

Why is configuration management important?

  • Reproducibility. It gives us the ability to provision any environment in an automated way, and to feel sure that the new environment will contain …
[Read more]
Continuous MySQL backup validation: Restoring backups

Our system continuously tests our ability to restore our databases from backups, ensuring that we can quickly and reliably recover from an outage.

Running MySQL Cluster 7.5 in Docker, part 2

After the heady excitement of getting my first MySQL Cluster 7.5.4 set up nicely running in docker, I quickly discovered that I wanted to re-factor most of it, implement the bits I’d left out, and extend it more to meet some of my other testing needs, like being able to run multiple deployments of similar types in parallel for simple CI.

I’ve now released this as v1.0.

The output is a little different to before, but now it's possible to set up multiple clusters, of different shapes if you like, on different docker networks. You simply provide a unique value for the new --base-network and --name parameters when using the …

[Read more]
MySQL Connector/NET 7.0.6 m5 development has been released

MySQL Connector/Net 7.0.6 is the third development release that expands cross-platform support to Linux and OS X when using Microsoft's .NET Core framework. Now, .NET developers can use the X DevAPI with .NET Core and Entity Framework Core (EF Core) 1.0 to create server applications that run on Windows, Linux and OS X. We are very excited about this change and really look forward to your feedback on it!

MySQL Connector/Net 7.0.6 is also the fifth development release of MySQL Connector/Net to add support for the new X DevAPI. The X DevAPI enables application developers to write code that combines the strengths of the relational and document models using a modern, NoSQL-like syntax that does not assume previous experience writing traditional SQL.

To learn more about how to write applications using the X DevAPI, see

[Read more]
My slides from devops days Ghent, Belgium are now online

Today I delivered a session on what MySQL is implementing to make the devops life easier.

You can find the slides below:
