Dockerizing MySQL at Uber Engineering

Uber Engineering’s Schemaless storage system powers some of the biggest services at Uber, such as Mezzanine. Schemaless is a scalable and highly available datastore on top of MySQL clusters. Managing these clusters was fairly easy when we had …

Playing with MySQL Source code; Adding “STOP ALL SLAVES” command

This blog post is the result of reading a book.
To be clear -> “Expert MySQL” by Charles Bell -> http://www.apress.com/gp/book/9781430246596
This book explains the internals of MySQL based on the 5.6.x version, so it is relatively new and a really good resource to use.

So the task is:
Chapter 8, “Extending MySQL High Availability” -> page 310, “Extending Replication”.
There we should add a new command -> “SQLCOM_STOP_SLAVES” -> executed on the master and causing the IO_THREAD to stop on all slaves.
The basic idea is to run “STOP ALL SLAVES” on the master, which in turn will appear in the binary log and will then be applied on the slaves.
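As a rough sketch of the intended behavior (note that “STOP ALL SLAVES” is the exercise’s proposed extension, not a command in stock MySQL 5.6, and the session below is hypothetical):

    -- On the master: the new command added by the exercise. Like other
    -- replicated statements, it is written to the binary log.
    STOP ALL SLAVES;

    -- On each slave, once the event has been applied, the IO thread
    -- should report as stopped:
    SHOW SLAVE STATUS\G
    --   Slave_IO_Running: No
    --   Slave_SQL_Running: Yes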

Great.
The MySQL version used here is 5.6.32.

So, as the book advises, go to sql/lex.h: “We are adding the new symbol for the command. This file contains the symbol array stored in …

[Read more]
MyRocks: migrating a large MySQL dataset from InnoDB to RocksDB to reduce footprint

I have been following Facebook's MyRocks project (and Mark Callaghan's blog) for a long time. An LSM-based engine for MySQL is actually a great idea.
We all know that InnoDB sucks at INSERTs.  B-trees in general suck when it comes to insertion speed, and the more rows you insert, the more they suck at it. There are many blog posts on the web that show the insert speed degradation in InnoDB as the number of rows in the table grows. Things get much worse, much faster, if your primary key is a random key, for example a UUID.
We hit this problem with our caching servers (yes, we do caching with MySQL!), and in order to be able to scale these servers up we moved a couple of years ago to the TokuDB engine, with great success. TokuDB is based on fractal tree technology and guarantees the same insert speed no matter the number of rows you have in the table; furthermore, it …
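To make the random-key point concrete, here is a minimal sketch of the two schema shapes being contrasted (table and column names are made up for illustration):

    -- Monotonic primary key: new rows always land in the rightmost
    -- B-tree page, so insert cost stays roughly flat as the table grows.
    CREATE TABLE cache_seq (
      id  BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
      val VARBINARY(1024),
      PRIMARY KEY (id)
    ) ENGINE=InnoDB;

    -- Random primary key (e.g. a UUID): every insert targets a random
    -- leaf page, so once the index outgrows the buffer pool, inserts
    -- degrade into random I/O.
    CREATE TABLE cache_uuid (
      id  CHAR(36) NOT NULL,
      val VARBINARY(1024),
      PRIMARY KEY (id)
    ) ENGINE=InnoDB;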

[Read more]
MyRocks has some strange performance issues for index scans

The details on this issue are here:
https://github.com/facebook/mysql-5.6/issues/369

This test is very simple. I loaded the SSB (star schema benchmark) data for scale factor 20 (12GB raw data), added indexes, and tried to count the rows in the table.

After loading data and creating indexes, the .rocksdb data directory is 17GB in size.

A full table scan "count(*)" query takes less than four minutes, sometimes reading over 1M rows per second, but when scanning a secondary index to accomplish the same count, the database can only scan around 2000 rows per second. At that rate the four-minute query would take an estimated 1000 minutes, a 250x difference.
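As a sketch, the two queries being compared look like this (the SSB fact table is lineorder; the index name here is hypothetical, since the exact index used is not named above):

    -- Full table scan: finishes in under four minutes,
    -- at times reading over 1M rows per second.
    SELECT count(*) FROM lineorder;

    -- The same count forced through a secondary index:
    -- crawls at roughly 2000 rows per second.
    SELECT count(*) FROM lineorder FORCE INDEX (ix_orderdate);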

I have ruled out the choice of CRC32 function (SSE vs non-SSE) as the cause by patching the code to force the hardware SSE implementation.

There seem to be problems with any queries …

[Read more]
RocksDB doesn't support large transactions very well

So I tried to run my first set of benchmarks and tests on RocksDB today, but I ran into a problem and had to file a bug:
https://github.com/facebook/mysql-5.6/issues/365

MySQL @ Facebook's RocksDB appears to store at least 2x the volume of the changes made in a transaction. I don't know exactly how much space the row plus overhead takes in each transaction, so I'm just going to say 2x the raw size of the data changed in the transaction, as an approximation. I am also not sure how this works for updates, that is, whether both old and new row information is maintained. If old and new row data is maintained, then for a pure update workload you would need 4x the RAM for the given transactional changes. My bulk load was 12GB of raw data, so it failed, as I have only 12GB of RAM in my test system.

The workaround (as suggested in the bug) is to set two configuration …
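Whatever those two options are (they are elided above and not reproduced here), the general shape of the workaround is to keep the in-memory write batch bounded by splitting the load into smaller transactions. A minimal, purely illustrative sketch with hypothetical table names:

    -- One huge transaction buffers roughly 2x the changed data in memory:
    --   INSERT INTO target SELECT * FROM staging;   -- fails at 12GB raw
    -- Committing in bounded chunks keeps the footprint small:
    SET autocommit = 0;
    INSERT INTO target SELECT * FROM staging WHERE id BETWEEN 1 AND 1000000;
    COMMIT;
    INSERT INTO target SELECT * FROM staging WHERE id BETWEEN 1000001 AND 2000000;
    COMMIT;
    -- ...and so on for the remaining ranges.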

[Read more]
Blog Series: MySQL Configuration Management

MySQL configuration management remains a hot topic, as I’ve noticed on numerous occasions during my conversations with customers.

I thought it might be a good idea to start a blog series that goes into more detail on some of the different options, and on which modules could potentially be used for managing your MySQL database infrastructure.

Configuration management has been around since well before the beginning of my professional career. I myself originally began by working with my colleagues on integrating an infrastructure using Puppet.

Why is configuration management important?

  • Reproducibility. It gives us the ability to provision any environment in an automated way, and to feel sure that the new environment will contain …
[Read more]
Continuous MySQL backup validation: Restoring backups

Our system continuously tests our ability to restore our databases from backups, ensuring that we can quickly and reliably recover from an outage.

Running MySQL Cluster 7.5 in Docker, part 2

After the heady excitement of getting my first MySQL Cluster 7.5.4 set up and running nicely in Docker, I quickly discovered that I wanted to refactor most of it, implement the bits I’d left out, and extend it further to meet some of my other testing needs, such as being able to run multiple deployments of similar types in parallel for simple CI.

I’ve now released this as v1.0.

The output is a little different from before, but it is now possible to set up multiple clusters, of different shapes if you like, on different Docker networks. You simply provide a unique value for the new --base-network and --name parameters when using the …

[Read more]