The details on this issue are here:
https://github.com/facebook/mysql-5.6/issues/369
This test is very simple. I loaded the SSB (star schema
benchmark) data for scale factor 20 (12GB raw data), added
indexes, and tried to count the rows in the table.
After loading data and creating indexes, the .rocksdb data
directory is 17GB in size.
A full table scan "count(*)" query takes less than four minutes,
sometimes reading over 1M rows per second, but when scanning the
index to accomplish the same count, the database can only scan
around 2000 rows per second. At that rate the index scan would
take an estimated 1000 minutes versus four minutes for the full
scan, a 250x difference.
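As a back-of-envelope check of that estimate: SSB at scale factor 20 puts roughly 120 million rows in the lineorder fact table (an assumption on my part, but one consistent with the numbers quoted above).

```python
# Sanity-check the 1000-minute / 250x estimate from the text.
# Assumed: ~120M rows in the SSB SF20 fact table (not stated
# explicitly in the post), ~2000 rows/s on the index scan, and
# a full table scan finishing in about four minutes.
rows = 120_000_000
index_scan_rate = 2_000        # rows per second, observed on the index scan
full_scan_minutes = 4          # observed full-table-scan time

index_scan_minutes = rows / index_scan_rate / 60
print(index_scan_minutes)                      # 1000.0
print(index_scan_minutes / full_scan_minutes)  # 250.0
```

The arithmetic lines up: a 1000-minute index scan against a four-minute full scan is exactly the 250x gap described.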
I have ruled out the choice of CRC32 function (SSE vs non-SSE)
as a cause by patching the code to force the hardware SSE
implementation.
There seem to be problems with any queries …
So I tried to do my first set of benchmarks and testing on
RocksDB today, but I ran into a problem and had to file a
bug:
https://github.com/facebook/mysql-5.6/issues/365
MySQL @ Facebook RocksDB appears to store in memory at least 2x the
volume of the changes in a transaction. I don't know how much
space the row plus overhead takes in each transaction, so I'm
just going to say 2x the raw size of the data changed in the
transaction, as an approximation. I am not sure how this works for
updates either, that is, whether both old and new row information
is maintained. If old-row data is maintained, then for a pure
update workload you would need 4x the RAM for the given
transactional changes. My bulk load was 12GB of raw data, so it
failed, as I have only 12GB of RAM in my test system.
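A rough sizing sketch of those multipliers makes the failure obvious. Note the 2x and 4x factors are the post's own approximations, not documented RocksDB behaviour:

```python
# Rough memory sizing for a single bulk-load transaction, using the
# approximations from the text (2x raw size for inserts, 4x if both
# old and new row images were kept for updates).
raw_change_gb = 12        # size of the bulk-loaded raw data
ram_gb = 12               # RAM available on the test system

insert_factor = 2         # ~2x raw size held per transaction (approximation)
update_factor = 4         # if old + new row data were both maintained

needed_insert_gb = raw_change_gb * insert_factor
needed_update_gb = raw_change_gb * update_factor
print(needed_insert_gb, needed_update_gb)   # 24 48

# Either way the transaction exceeds available RAM, so the
# 12GB single-transaction bulk load cannot succeed.
print(needed_insert_gb > ram_gb)            # True
```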
The workaround (as suggested in the bug) is to set two
configuration …
MySQL configuration management remains a hot topic, as I’ve noticed on numerous occasions during my conversations with customers.
I thought it might be a good idea to start a blog series that goes into more detail about some of the different options, and which modules might be used for managing your MySQL database infrastructure.
Configuration management has been around since well before the start of my professional career. I myself first worked with it when integrating an infrastructure with my colleagues using Puppet.
Why is configuration management important?
- Reproducibility. It’s giving us the ability to provision any environment in an automated way, and feel sure that the new environment will contain …
Our system continuously tests our ability to restore our databases from backups, ensuring that we can quickly and reliably recover from an outage.
After the heady excitement of getting my first MySQL Cluster 7.5.4 set up nicely running in docker, I quickly discovered that I wanted to re-factor most of it, implement the bits I’d left out, and extend it more to meet some of my other testing needs, like being able to run multiple deployments of similar types in parallel for simple CI.
I’ve now released this as v1.0.
The output is a little different to before, but now it’s possible to set up multiple clusters, of different shapes if you like, on different docker networks. You simply provide a unique value for the new --base-network and --name parameters when using the …
MySQL Connector/Net 7.0.6 is the third development release that expands cross-platform support to Linux and OS X when using Microsoft’s .NET Core framework. Now, .NET developers can use the X DevAPI with .NET Core and Entity Framework Core (EF Core) 1.0 to create server applications that run on Windows, Linux and OS X. We are very excited about this change and really look forward to your feedback on it!
MySQL Connector/Net 7.0.6 is also the fifth development release of MySQL Connector/Net to add support for the new X DevAPI. The X DevAPI enables application developers to write code that combines the strengths of the relational and document models using a modern, NoSQL-like syntax that does not assume previous experience writing traditional SQL. To learn more about how to write applications using the X DevAPI, see …
Today I delivered a session on what MySQL is implementing to make the DevOps life easier.
You can find the slides below:
Thanks to everyone who participated in this week’s webinar on working with the optimizer and SQL tuning. In this session, Krzysztof Książek, Senior Support Engineer at Severalnines, discussed how execution plans are calculated. He also took a closer look at InnoDB statistics, how to hint the optimizer and, finally, how to optimize SQL.
The complete MySQL Query Tuning Trilogy is available to watch online, so if you missed the first two parts, you can now catch up with them on demand.
MySQL Query Tuning Trilogy
An in-depth look into the ins and outs of optimising MySQL queries
When done right, tuning MySQL queries and indexes can significantly increase the performance of your application as well as decrease response times. This is why we’ve covered this complex …