When InnoDB compresses a page it needs the result to fit into its
predetermined compressed page size (specified with
KEY_BLOCK_SIZE). When the result does not fit we call that a
compression failure. In this case InnoDB needs to split up the
page and try to compress again. Needless to say, compression failures
are bad for performance and should be minimized.
Whether the result of the compression will fit largely depends on
the data being compressed and some tables and/or indexes may
contain more compressible data than others. And so it would be
nice if the compression failure rate, along with other
compression stats, could be monitored on a per-table or even
per-index basis, wouldn't it?
This is where the new INFORMATION_SCHEMA table in MySQL 5.6 kicks
in. INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX provides exactly this
helpful information, breaking the compression and decompression
statistics down by database, table, and index name.
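To give an idea of how that looks in practice, here is a sketch of a failure-rate query against the new table. Per-index statistics are only gathered while innodb_cmp_per_index_enabled is ON, and the column names below are the ones documented for MySQL 5.6:

```sql
-- Per-index compression failure rate from the new 5.6 table.
SELECT database_name,
       table_name,
       index_name,
       compress_ops,
       compress_ops_ok,
       100.0 * (compress_ops - compress_ops_ok) / compress_ops AS failure_pct
FROM   information_schema.innodb_cmp_per_index
WHERE  compress_ops > 0
ORDER  BY failure_pct DESC;
```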
TokuDB v6.5 adds the ability to expand certain column types without downtime. Users can now enlarge char, varchar, varbinary, and integer columns with no interruption to insert/update/delete statements on the altered table. Prior to this feature, enlarging one of these column types required a full table rebuild. InnoDB blocks all insert/update/delete operations to a table during column expansion as it rebuilds the table and all indexes.
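As a rough illustration (the table and column names are hypothetical, and the exact restrictions on what can be expanded hot are spelled out in the TokuDB documentation), the operation is just an ordinary ALTER that TokuDB v6.5 executes without blocking writers:

```sql
-- Widen a VARCHAR column; on TokuDB v6.5 this runs as a hot operation,
-- so concurrent insert/update/delete statements against the table keep working.
ALTER TABLE orders MODIFY COLUMN customer_note VARCHAR(1024);
```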
We recently added SSDs to our existing servers (HP DL380 servers with internal RAID arrays). I opted for LSI controllers (LSI 9285-8e and LSI 9280-4i4e), Samsung 830 SSDs, and a Sans Digital AccuRaid AS108X enclosure. I was unable to find much guidance on optimizing Linux for SSD/RAID performance and plan on blogging my findings in the future. However, the out-of-the-box performance is seriously fast.
I was curious to see how much downtime occurs when issuing …
[Read more]
Since Fractal Tree indexes turn random writes into sequential writes, it's easy to see why they offer a big advantage for maintaining indexes on rotating disks. It turns out that Fractal Tree indexing also offers significant advantages on SSDs. Here are three ways that Fractal Trees improve your life if you use SSDs.
Advantage 1: Index maintenance performance.
The results below show the insertion of 1 billion rows into a table while maintaining three multicolumn secondary indexes. At the end of the test, TokuDB’s insertion rate remained at 14,532 inserts/second whereas InnoDB had dropped to 1,607 inserts/second. That’s a difference of over 9x.
Platform: CentOS 5.6; 2x Xeon L5520; 72GB RAM; LSI MegaRAID 9285; 2x 256GB Samsung 830 in RAID0.
Even on flash, I/O performance costs something. Since TokuDB employs Fractal Tree write-optimized …
[Read more]
Ah, the sometimes bitter pill of experience.
Recently I got "the call" about a server that was having problems. I began standard troubleshooting procedures but didn't see anything abnormal. After about 15 minutes of getting irritated I opened up the my.cnf and looked through it. Imagine my surprise when I saw the innodb_log_file_size parameter was set to 5M (five megabytes). Even though it was hard-coded in the my.cnf, this is actually the default size (at least through MySQL 5.1). A log file size of five megabytes is pitifully small and was proving to be a severe bottleneck for the system. The two log files were being filled up and flushed more frequently than once a second.
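A couple of quick checks that make this kind of bottleneck visible (these are standard server variables and status counters; sampling Innodb_os_log_written twice over an interval gives the redo write rate):

```sql
-- Current redo log configuration.
SHOW GLOBAL VARIABLES LIKE 'innodb_log_file%';   -- innodb_log_file_size, innodb_log_files_in_group
-- Times a transaction had to wait because the log buffer/log was too small.
SHOW GLOBAL STATUS LIKE 'Innodb_log_waits';
-- Bytes written to the redo log; sample twice to estimate the write rate.
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';
```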
Once the source of the trouble was discovered it was a simple matter of filing an RFC and resolving the issue. We increased the log file size to 2000 megabytes so flushing of the transaction logs occurred when the server had the time/resources and not …
[Read more]
We are excited to announce TokuDB® v6.5, the latest version of Tokutek's flagship storage engine for MySQL and MariaDB.
This version offers optimization for Flash as well as more hot schema change operations for improved agility.
We’ll be posting more details about the new features and performance, so here’s an overview of what’s in store.
- Flash
- TokuDB v6.5 continues the great Toku-tradition of fast insertions. On flash drives, we show an order-of-magnitude (9x) faster insertion rate than InnoDB. TokuDB’s standard compression works just as well on flash and helps you get the most out of your storage system. And TokuDB reduces wear …
I will give 2 talks at MySQL Connect:
1. New MySQL Full-Text Search Features and Solutions, where I
will focus on the new (and very promising!) InnoDB full-text
search. I've done some benchmarks recently and will publish them
here; a minimal example of the new syntax is sketched after this list.
2. In-Depth Query Optimization for MySQL, where we will work on
real-world examples of MySQL query tuning.
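For readers who haven't tried it yet, here is a minimal sketch of what InnoDB full-text search looks like in MySQL 5.6 (the table and query are illustrative only, not taken from the talk):

```sql
-- MySQL 5.6: FULLTEXT indexes are now supported on InnoDB tables.
CREATE TABLE articles (
  id    INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  title VARCHAR(200),
  body  TEXT,
  FULLTEXT KEY ft_title_body (title, body)
) ENGINE=InnoDB;

-- Natural-language full-text search with a relevance score.
SELECT id, title,
       MATCH(title, body) AGAINST ('fractal tree' IN NATURAL LANGUAGE MODE) AS score
FROM   articles
WHERE  MATCH(title, body) AGAINST ('fractal tree' IN NATURAL LANGUAGE MODE);
```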
*SESSION SCHEDULE INFORMATION*
Session ID: CON9283
Session Title: New MySQL Full-Text Search Features and
Solutions
Venue / Room: Hilton San Francisco – Golden Gate 8
Date and Time: 9/29/12, 14:30 – 15:30
Session ID: CON8811
Session Title: In-Depth Query Optimization for MySQL
Venue / Room: Hilton San Francisco – Golden Gate 8
Date and Time: 9/30/12, 14:45 – 15:45
…
[Read more]
In this blog post I will show you how to set up replication from
MySQL Cluster (ndbcluster) to a regular MySQL Server
(InnoDB). If you want to understand the concepts, check out part
7 of our free MySQL Cluster training.
First of all we start with a MySQL Cluster looking like this, and
what we want to do is to set up replication to the Reporting
Server (InnoDB slave).
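In rough outline, the replica side boils down to a standard replication setup pointed at one of the cluster's SQL nodes (the one with binary logging enabled), with the tables on the reporting server simply stored in InnoDB. Host name, credentials, and binlog coordinates below are placeholders, not values from this cluster:

```sql
-- On the reporting server: point it at a binlogging SQL node of the cluster.
CHANGE MASTER TO
  MASTER_HOST     = 'sqlnode1',
  MASTER_USER     = 'repl',
  MASTER_PASSWORD = '...',
  MASTER_LOG_FILE = 'binlog.000001',
  MASTER_LOG_POS  = 4;
START SLAVE;

-- On the reporting server, the replicated tables live in InnoDB rather than
-- ndbcluster, e.g. by converting them after the initial schema import:
ALTER TABLE mydb.mytable ENGINE = InnoDB;
```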
MySQL Cluster is great at scaling large numbers of write
transactions or short key-based read queries, but not so good
at longer reporting or analytical queries. I generally recommend
limiting analytical/reporting queries on the MySQL
Cluster, in order to avoid slowing down the …
If you’re anything like me, your initial reaction upon hearing about transportable tablespaces for InnoDB in 5.6 was to imagine it like MyISAM, where you can copy the .frm, .myi and .myd files around to your heart’s content, and everything will be great. You might have read Sunny’s excellent blog, and realized that there’s a bit more to it than that – you have to explicitly prepare the tablespace for copying using FLUSH TABLES FOR EXPORT. That’s perfectly acceptable for the bulk of use cases, such as single-table InnoDB backups, and opens up exciting new possibilities for moving or copying InnoDB data at the filesystem level.
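The basic flow, sketched here with a placeholder table name t (it requires innodb_file_per_table on both servers and an identical table definition on the destination), looks like this:

```sql
-- On the source server:
FLUSH TABLES t FOR EXPORT;      -- quiesces t and writes t.cfg next to t.ibd
-- ... copy t.ibd and t.cfg out of the source datadir while the lock is held ...
UNLOCK TABLES;

-- On the destination server (table t already created with the same definition):
ALTER TABLE t DISCARD TABLESPACE;
-- ... copy t.ibd and t.cfg into the destination schema directory ...
ALTER TABLE t IMPORT TABLESPACE;
```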
But for situations where the need is a little different, you might really dig into it and start to wonder about the .cfg files produced during the FLUSH TABLES FOR EXPORT operation. …
[Read more]
Back when MySQL 5.1 was first released, Oracle (which didn’t at the time own Sun or MySQL) wanted to add new InnoDB functionality faster than MySQL could effectively incorporate it into the server, and the InnoDB plugin was introduced. It provided 5.1 users the option of deploying a more advanced version of InnoDB, or using the more conservative built-in version baked into MySQL. By the time the plugin reached GA status, though, Oracle had announced the acquisition of Sun (and MySQL), and things started to change. The coordination between the InnoDB development team at Oracle and the MySQL development team increased once we were all “under one roof”. The strategic directions of InnoDB and MySQL were aligned, and the results can be seen in the tremendous work done for MySQL 5.5 and 5.6. In fact, the InnoDB plugin was removed …
[Read more]
In relation to these two posts from Justin Swanhart and Anders Karlsson about transaction isolation
levels, I thought it was interesting to do a little survey
to get an idea of the most commonly used isolation levels.
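If you want to check what your own servers are running before answering, something along these lines works (tx_isolation is the variable name in the MySQL versions current at the time of writing):

```sql
-- Inspect the server-wide and session-level isolation level.
SELECT @@global.tx_isolation, @@session.tx_isolation;

-- Change it for subsequent transactions in this session.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
```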
Note: there is a poll embedded in this post; please visit the site to participate.
[Read more]