Displaying posts with tag: libeatmydata
Optimizing InnoDB for creating 30,000 tables (and nothing else)

Once upon a time, it would have been considered madness to even attempt to create 30,000 tables in InnoDB. That time is now a memory. We have customers with a lot more tables than a mere 30,000. There have historically been no tests for anything near this many tables in the MySQL test suite.

So, in fleshing out the test cases for this and innodb_dict_size_limit, I was left with the not-so-awesome task of making the test case run in a remotely reasonable time. The test case itself is pretty simple: a loop in the not-at-all-exciting mysqltest language that creates 30,000 identical tables, inserts a row into each of them and then drops them (sketched roughly below).
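
The actual script is written in the mysqltest language, but the same loop sketched in shell looks roughly like this (the table layout and the test database name are illustrative only):

    # Roughly what the mysqltest loop does, sketched in shell; the single
    # INT column and the `test` database are placeholders for illustration.
    for i in $(seq 1 30000); do
      echo "CREATE TABLE t$i (a INT) ENGINE=InnoDB;"
      echo "INSERT INTO t$i VALUES (1);"
    done | mysql test

    # ... and then drop them all again.
    for i in $(seq 1 30000); do
      echo "DROP TABLE t$i;"
    done | mysql test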

Establishing the ground rules: I do not care about durability. This is a test case, not a production system holding important data, which means I can lie, cheat and steal to get …
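
The usual way to cheat is to relax the server's durability settings for the duration of the test; a minimal sketch of the kind of knobs involved (these are standard mysqld options, not necessarily the exact settings the test ends up using):

    # Start a throwaway server with durability traded away for speed;
    # whether the test relies on exactly these options is an assumption.
    mysqld --innodb_flush_log_at_trx_commit=0 --skip-innodb-doublewrite &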

[Read more]
libeatmydata - Feed me, Seymour!

Whilst supporting customers at SkySQL I often have to load gigabytes of SQL data into MySQL servers to run tests. This process can be slow, especially for InnoDB, because in a standard dump file every insert is a transaction and every transaction has to be synchronised to disk for crash safety. The thing is, most of the time I don't care if the machine I'm using crashes whilst I'm loading this data into the server.

There are of course many ways around this, such as editing the SQL files to wrap transactions around batches of inserts, or editing the configuration files to disable all the syncing involved. But I don't want one configuration for loading the data and another for playing with it, which is where libeatmydata comes in; the basic usage is shown below.
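
Basic usage is a single LD_PRELOAD on the command doing the loading; a minimal sketch, assuming a typical library path (it varies by distribution, and many distributions also ship an eatmydata wrapper script):

    # Preload libeatmydata so fsync()/fdatasync() become no-ops for this
    # one command only; the database name and dump file are placeholders.
    LD_PRELOAD=/usr/lib/libeatmydata.so mysql mydb < big-dump.sql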

[Read more]
Using Dtrace to find out if the hardware or Solaris is slow (but really just working around the problem)

A little while ago, I was the brave soul tasked with making sure Drizzle was working properly and passing all tests on Solaris and OpenSolaris. Brian recently blogged about some of the advantages of also running on Solaris and the SunStudio compilers - more warnings from the compiler is a good thing. Much kudos goes to Monty Taylor for being the brave soul who fixed most of the compiler warnings (and for us, warnings=errors - so we have to fix them) for the SunStudio compilers before I got to making the tests work.

So, I got to the end of it all and got pointed to an OpenSolaris x86 box where the drizzleslap test was timing out. The timeout for tests is an amazingly long 15 minutes, and all the drizzle-test-run tests are rather short.

To make running the tests quick, I usually LD_PRELOAD …
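
Given the tag on this listing, that would presumably be libeatmydata; the sort of invocation meant looks roughly like this, assuming the in-tree Drizzle test runner and a locally installed library (both paths are assumptions):

    # Run the named test with syncs turned into no-ops via libeatmydata;
    # the runner script name and library path are assumptions.
    cd tests
    LD_PRELOAD=/usr/local/lib/libeatmydata.so ./test-run.pl drizzleslap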

[Read more]