I’m always switching back and forth between the two InnoDB flavors in MariaDB – XtraDB+ and the standard InnoDB plugin – so I thought I’d simply post all of the various combinations in a single place. (And then I cover enabling the InnoDB Plugin in MySQL, since it’s an option in 5.1.) [Addition: Thanks to Andrew and Sergei for the tips on shortening plugin-load=. The changes are reflected below.]
Note: Below is for Windows. For Linux, simply change “.dll” to “.so” where appropriate.
Do not add anything, as the standard InnoDB plugin is the current default (as of 10.0.3, although I do anticipate this changing in the near future, and I’ll update the post accordingly when that happens).
# Enable the 2 below to disable XtraDB+ and enable the standard InnoDB[Read more...]
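A quick way to verify which flavor is actually active after a restart (a sketch – the exact strings vary by version, but XtraDB+ reports a Percona-suffixed version string while the standard plugin reports plain InnoDB):

SHOW VARIABLES LIKE 'innodb_version';
SHOW ENGINES;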
The two conservative ways of getting the number of rows in an InnoDB table are:
You can get a good estimate by calculating the total number of rows in steps. Walk the table 1,000 rows[Read more...]
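A minimal sketch of that chunked walk, assuming a table t with an integer primary key id (both names are illustrative):

SET @last_id := 0;
-- Repeat this statement, feeding the returned last_id back into
-- @last_id and summing chunk_rows, until a chunk returns fewer
-- than 1,000 rows:
SELECT COUNT(*) AS chunk_rows, MAX(id) AS last_id
FROM (SELECT id FROM t WHERE id > @last_id ORDER BY id LIMIT 1000) AS chunk;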
Or: how to make it work in TokuDB version 7.0.1. This is a follow up on a discussion on the tokudb-user group.
I wanted to test TokuDB's compression. I took a staging machine of mine, with production data, and migrated it from Percona Server 5.5 to MariaDB 5.5 + TokuDB 7.0.1. The migration went well, no problems.
To my surprise, when I converted tables from InnoDB to TokuDB, I saw an increase in table file size on disk. As explained by Tim Callaghan, this was due to TokuDB interpreting my compressed table's "KEY_BLOCK_SIZE=4" as an instruction for TokuDB's page size. TokuDB should be using 4MB block size, but thinks it's being instructed to use[Read more...]
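A hedged sketch of the workaround that came out of that discussion (table name illustrative): clear the leftover InnoDB compression hint so TokuDB falls back to its own default block size when the engine is switched.

ALTER TABLE my_table KEY_BLOCK_SIZE=0;  -- drop the InnoDB-era hint
ALTER TABLE my_table ENGINE=TokuDB;     -- convert without the 4KB instruction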
Transaction locks are an important feature of any transactional storage engine. There are two types of transaction locks – table locks and row locks. Table locks are used to prevent a table from being altered or dropped by one transaction while another transaction is using it, and to prohibit a transaction from accessing a table while it is being altered. InnoDB supports multiple granularity locking (MGL), so to access rows in a table, intention locks must first be taken on the table.
Row locks are at a finer granularity than table-level locks: different threads can work on different parts of the table without interfering with each other. This is in contrast with MyISAM, where the entire table has to be locked even when updating unrelated rows. Having[Read more...]
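The interplay is easy to see with two sessions (a sketch; table and column names are illustrative):

-- session 1: takes an IX intention lock on t, then an exclusive row lock
START TRANSACTION;
SELECT * FROM t WHERE id = 1 FOR UPDATE;

-- session 2: row locks on other rows proceed fine, but an ALTER
-- blocks until session 1 commits, because the table lock it needs
-- conflicts with session 1's intention lock
ALTER TABLE t ADD COLUMN c INT;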
Shard-Query inserts data into a “coordinator” table when answering queries. When there is a GROUP BY on the original query, the coordinator table contains a UNIQUE KEY over the GROUP BY attributes. Shard-Query uses INSERT .. ON DUPLICATE KEY UPDATE in combination with bulk insert (insert into … values (),(),() ) when inserting into the table.
For what would normally be efficiency’s sake, Shard-Query sends queries to the shards using ORDER BY NULL, which disables the filesort operation. Of course, this often results in the rows being sent back from the shards in random order.
Because the results are in random order, the bulk insertion that the worker does into the coordinator table can deadlock with other worker threads when InnoDB or TokuDB is used as the coordinator table’s engine. Right now I’ve just been using MyISAM for the[Read more...]
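The insert in question looks roughly like this (illustrative table and column names): each worker bulk-inserts partial aggregates and merges collisions on the GROUP BY key.

INSERT INTO coordinator (grp, cnt)
VALUES ('a', 10), ('b', 7), ('c', 3)
ON DUPLICATE KEY UPDATE cnt = cnt + VALUES(cnt);

Because each worker’s rows arrive in a different random order, two workers can take row locks on the same unique key values in opposite orders – the classic deadlock pattern.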
MySQL can optimize aggregate functions like MIN and MAX as long as the columns specified are indexed. This means that, in the case of MIN and MAX, the optimizer should be able to identify the highest and lowest values of an indexed column from the B-Tree index. Say I have a table like below:
CREATE TABLE `history` (
  `h_id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `u_id` int(10) unsigned NOT NULL,
  `cn_id` int(10) unsigned NOT NULL,
  `f_id` int(10) unsigned NOT NULL,
  PRIMARY KEY (`h_id`)
) ENGINE=InnoDB
If I want to get the MAX value for cn_id, I’d do a query like this, which will be a full table scan:
mysql (test) > EXPLAIN SELECT MAX(cn_id) FROM history \G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: history[Read more...]
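Adding an index on cn_id lets the optimizer answer the aggregate from the B-Tree alone; a quick sketch:

ALTER TABLE history ADD INDEX (cn_id);
-- EXPLAIN should now report "Select tables optimized away"
-- instead of a full table scan:
EXPLAIN SELECT MAX(cn_id) FROM history \G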
Here are results for Shard-Query 2.0 Beta 1* on the Star Schema Benchmark at scale factor 10. In the comparison below the “single threaded” response times for InnoDB are the response times reported in my previous test which did not use Shard-Query.
Shard-Query has been configured to use a single host. The Shard-Query configuration repository is stored on the host. Gearman is also running on the host, as are the Gearman workers. In short, only one host is involved in the testing.
Following are benchmark results comparing Tokutek TokuDB and Percona XtraDB at scale factor 10 on the Star Schema benchmark. I’m posting this on the Shard-Query blog because I am going to compare the performance of Shard-Query on the benchmark on these two engines. First, however, I think it is important to see how they perform in isolation without concurrency.
Because I am going to be testing Shard-Query, I have chosen to partition the “fact” table (lineorder) by month. I’ve attached the full DDL at the end of the post as well as the queries again for reference.
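The partitioning has roughly this shape (a sketch only – the date column and boundaries here are illustrative, and the full DDL at the end of the post is authoritative):

ALTER TABLE lineorder
PARTITION BY RANGE (LO_OrderDateKey) (
  PARTITION p199201 VALUES LESS THAN (19920201),
  PARTITION p199202 VALUES LESS THAN (19920301),
  -- ... one partition per month of the benchmark's date range ...
  PARTITION pmax VALUES LESS THAN MAXVALUE
);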
I want to note a few things about the results:
First and foremost, TokuDB was configured to use quicklz compression (the default) and InnoDB compression was not used. No tuning of TokuDB was performed, which means it will use up to 50% of memory by
This blog post is part two in what is now a continuing series on the Star Schema Benchmark.
In my previous blog post I compared MySQL 5.5.30 to MySQL 5.6.10, both with default settings using only the InnoDB storage engine. In my testing I discovered that innodb_old_blocks_time had an effect on performance of the benchmark. There was some discussion in the comments and I promised to follow up with more SSB tests at a later date.
I also promised more low concurrency SSB tests when Peter blogged about the importance of performance at low concurrency.
The SSB tests a
mysql> SELECT CONCAT(T.TABLE_SCHEMA,'.',T.TABLE_NAME) AS TABLE_NAME,
    -> P.PARTITION_NAME AS PART, IBT.SPACE, IBD.PATH, T.DATA_FREE AS T_DATA_FREE,
    -> P.DATA_FREE AS P_DATA_FREE FROM[Read more...]
SELECT DATA_FREE FROM INFORMATION_SCHEMA.TABLES WHERE ENGINE='InnoDB' LIMIT 1;
The Example storage engine is meant to serve mainly as a code example of the stub of a storage engine for example purposes only (or so the code comment at the start of ha_example.cc reads). In reality, however, it’s not very useful. It likely was back in 2004, when it could be used as a starting point for some simple new engines (my guess would be that more than a few of the simpler engines started from ha_example.cc).
The sad reality is that the complex, non-obvious bits of the storage engine API you actually care about are documented in ha_ndbcluster.cc, ha_myisam.cc and ha_innodb.cc. If you’re doing something that isn’t already done by one of those three engines: good luck.
Whenever I looked at ha_example.cc I always wished there was something[Read more...]
I wonder how much longer the ARCHIVE storage engine is going to ship with MySQL…. I think I’m the last person to actually fix a bug in it, and that was, well, a good number of years ago now. It was created to solve a simple problem: write once read hardly ever. Useful for logs and the like. A zlib stream of rows in a file.
You can actually easily beat ARCHIVE for INSERT speed with a non-indexed MyISAM table, and with things like TokuDB around you can probably get pretty close to its compression while at the same time having these things known as “indexes”.
ARCHIVE for a long time held this niche though and was widely and quietly used (and likely still is). It has the great benefit of being fairly lightweight – it’s only about 2500 lines of code (1130 if[Read more...]
At tomorrow’s Effective MySQL Meetup, I’ll be presenting “Fractal Tree Indexes : Theory and Practice (MySQL and MongoDB).” The meetup is at 6:30pm Tuesday, May 14, 2013, and will be held at Alley NYC in New York City.
I’ll give an overview on how Fractal Tree® indexes work, and then get into specific product features that Fractal Trees enable in MySQL and MongoDB. Some benchmarking and customer use-cases will be discussed, but my intent is for this to be a deep technical dive. Several Tokutek Engineers will also be on hand, so bring any questions you’ve got.
I hope to see you there!
Percona is glad to announce the release of Percona XtraBackup 2.1.0-rc1 on May 7, 2013. Downloads are available from our download site here. For this RC release, we will not be making APT and YUM repositories available, just base deb and RPM packages.
This is a Release Candidate quality release and is not intended for production. If you want a high-quality, generally available release, the current stable version should be used (currently[Read more...]
After compiling Percona Server with TokuDB, of course I wanted to compare InnoDB performance vs TokuDB.
I have a particular workload I’m interested in testing – it is an insert-intensive workload (which is TokuDB’s strong suit) with some roll-up aggregation, which should produce in-place updates (I will use INSERT .. ON DUPLICATE KEY UPDATE statements for that), so it will also produce a good amount of reads.
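The roll-up statements have roughly this shape (illustrative table and columns): each insert either creates a summary row or updates the existing one in place, which is where the read traffic comes from.

INSERT INTO summary (metric_id, day, hits)
VALUES (42, CURDATE(), 1)
ON DUPLICATE KEY UPDATE hits = hits + 1;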
A few words about the hardware: I am going to use the new Dell PowerEdge R420 with two Intel(R) Xeon(R) CPU E5-2450 0 @ 2.10GHz processors, 48GB of RAM and a SATA SSD: Kingston HyperX 3K 240 GB.
Workload: I will use two different schemas. The first schema is from sysbench, and
How fast is COUNT(*) execution? Well, it depends on the storage engine. Try to create a table, INSERT some data, and execute an EXPLAIN similar to the following:
MariaDB [(none)]> EXPLAIN SELECT COUNT(*) FROM test.t1;
+------+-------------+-------+------+---------------+------+---------+------+------+------------------------------+
| id   | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra                        |
+------+-------------+-------+------+---------------+------+---------+------+------+------------------------------+
|    1 | SIMPLE      | NULL  | NULL | NULL          | NULL | NULL    | NULL | NULL |
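To see the contrast directly, a sketch (table names illustrative): MyISAM maintains an exact row count, so a COUNT(*) with no WHERE clause is answered from table metadata, while InnoDB has to count rows by scanning an index.

CREATE TABLE test.t1_myisam (a INT) ENGINE=MyISAM;
CREATE TABLE test.t1_innodb (a INT) ENGINE=InnoDB;
EXPLAIN SELECT COUNT(*) FROM test.t1_myisam; -- "Select tables optimized away"
EXPLAIN SELECT COUNT(*) FROM test.t1_innodb; -- index scan instead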
Percona XtraBackup 2.0.7 was released May 6.
Percona is glad to announce the release of Percona XtraBackup 2.0.7 for MySQL on May 6, 2013. Downloads are available from our download site here and Percona Software Repositories. Percona XtraBackup is the world’s only open-source, free MySQL hot backup software that performs non-blocking backups for InnoDB and XtraDB databases.
This release is the current GA (Generally Available) stable release in the 2.0[Read more...]
In my ongoing quest to completely understand InnoDB’s data storage, I came across a quite small and inconsequential waste, which is nevertheless fun to write about. I noticed the following block of pages which were allocated very early in the ibdata1 system tablespace but apparently unused (unnecessary lines removed from output):
$ innodb_space -f ibdata1 space-page-type-regions
start       end         count       type
13          44          32          ALLOCATED
Most people using InnoDB have heard of the “doublewrite buffer”—part of InnoDB’s page flushing strategy. The doublewrite buffer is used as a “scratch area” to write (by[Read more...]
This afternoon, Arjen Lentz and I were discussing InnoDB’s behavior without a declared PRIMARY KEY, and the topic felt interesting enough and undocumented enough to warrant its own short post.
In The physical structure of InnoDB index pages I described how “Everything is an index in InnoDB”. This means that InnoDB must always have a “cluster key” for each table, which is normally the PRIMARY KEY. The manual has this to say in Clustered and Secondary Indexes:
If the table has no PRIMARY KEY or suitable UNIQUE index, InnoDB internally generates a hidden clustered
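The behavior is easy to demonstrate (a sketch; the dictionary views used here exist in recent 5.6-era servers):

CREATE TABLE no_pk (a INT) ENGINE=InnoDB;
-- the hidden 6-byte cluster key appears as GEN_CLUST_INDEX in the
-- InnoDB data dictionary views:
SELECT NAME FROM INFORMATION_SCHEMA.INNODB_SYS_INDEXES
WHERE NAME = 'GEN_CLUST_INDEX';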
Somebody asked why you can’t implement MySQL triggers that write log information when you want to stop the DML statement, like autonomous procedures in Oracle. The question was a surprise, but I didn’t find anything on it, so here’s how you can do it. It works more or less like an autonomous process by leveraging the different behaviors of the InnoDB and MyISAM engines. This post builds on an earlier explanation of MySQL Triggers.
CREATE TABLE logger
( logger_id     INT UNSIGNED AUTO_INCREMENT PRIMARY KEY
, logger_event  VARCHAR(50)
,[Read more...]
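The trick, sketched with illustrative names: make logger a MyISAM table, so the row written by the trigger survives even though SIGNAL aborts and rolls back the InnoDB DML that fired it.

-- (use DELIMITER in the mysql client to enter the trigger body)
CREATE TRIGGER block_delete BEFORE DELETE ON important_table
FOR EACH ROW
BEGIN
  -- this insert persists because logger is MyISAM (non-transactional)
  INSERT INTO logger (logger_event) VALUES ('blocked a delete');
  -- this aborts the DELETE, rolling back only transactional changes
  SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Deletes are not allowed';
END;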
I’ve started poking around the MySQL 5.7.1 source tree (although just from the tarball, as I don’t see a BZR tree yet). I thought I’d share a few thoughts:
One of the first things we do when preparing a client’s infrastructure for Galera deployment is see whether their schema is suitable.
Naturally, checking a schema in the server is more effective than going through other sources and possibly missing bits. On the downside, the only viable way to get this info out of MariaDB[Read more...]
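For reference, the sort of suitability check meant here can be sketched against INFORMATION_SCHEMA (illustrative – a real check covers more cases): Galera wants InnoDB tables, each with an explicit primary key.

SELECT TABLE_SCHEMA, TABLE_NAME, ENGINE
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
  AND ENGINE <> 'InnoDB'
  AND TABLE_SCHEMA NOT IN ('mysql','information_schema','performance_schema');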
Even frequent visitors to bugs.mysql.com can sometimes miss the little note in the bottom right corner of each page:
Page generated in 0.017 sec. using MySQL 5.6.11-enterprise-commercial-advanced-log
That text changed this past weekend, going from MySQL Enterprise 5.6.10 to 5.6.11. But more importantly, the MyISAM tables that support the bugs system were also converted to InnoDB. There’s a little story to tell here about eating this particular helping of dogfood, one which also amplifies some changelog comments, so here it is:
We like to keep bugs.mysql.com on a current release of MySQL, and
To recover a dropped or corrupt table with Percona Data Recovery Tool for InnoDB you need two things: media with records (ibdata1, *.ibd, a disk image, etc.) and the table structure. There is no information about the table structure in an InnoDB page, so normally we either recover the structure from .frm files or take it from an old backup.
A new tool, sys_parser, can recover the table structure from the InnoDB dictionary.
Why do we need a new tool anyway? It is absolutely critical to have an accurate table definition to ensure a successful recovery. Even an unnoticeable difference like NULL or NOT NULL can shift all[Read more...]