
Displaying posts with tag: iibench

Three Ways that Fractal Tree Indexes Improve SSD for MySQL

Since Fractal Tree indexes turn random writes into sequential writes, it’s easy to see why they offer a big advantage for maintaining indexes on rotating disks. It turns out that Fractal Tree indexing also offers significant advantages on SSD. Here are three ways that Fractal Trees improve your life if you use SSDs.

Advantage 1: Index maintenance performance.

The results below show the insertion of 1 billion rows into a table while maintaining three multicolumn secondary indexes. At the end of the test, TokuDB’s insertion rate remained at 14,532 inserts/second whereas InnoDB had dropped to 1,607 inserts/second. That’s a difference of over 9x.
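
For context, an iiBench-style table has a single auto-increment primary key plus several multicolumn secondary indexes that must be maintained on every insert. A rough sketch of such a schema (the column and index names are illustrative, not necessarily the exact definition used in this run):

-- One clustered primary key plus three composite secondary indexes;
-- every insert must update all three, which is where the engines differ.
CREATE TABLE purchases_index (
  transactionid   BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  dateandtime     DATETIME NOT NULL,
  cashregisterid  INT NOT NULL,
  customerid      INT NOT NULL,
  productid       INT NOT NULL,
  price           FLOAT NOT NULL,
  INDEX marketsegment   (price, customerid),
  INDEX registersegment (cashregisterid, price, customerid),
  INDEX pdc             (price, dateandtime, customerid)
) ENGINE=TokuDB;  -- or ENGINE=InnoDB for the comparison run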

Platform: CentOS 5.6; 2x Xeon L5520; 72GB RAM; LSI [Read more...]
Announcing TokuDB v6.5: Optimized for Flash

We are excited to announce TokuDB® v6.5, the latest version of Tokutek’s flagship storage engine for MySQL and MariaDB.

This version offers optimization for Flash as well as more hot schema change operations for improved agility.

We’ll be posting more details about the new features and performance; in the meantime, here’s an overview of what’s in store.

Flash

TokuDB v6.5 continues the great Toku-tradition of fast insertions. On flash drives, we show an order-of-magnitude (9x) faster insertion rate than InnoDB. TokuDB’s standard compression works just as well on flash and helps you get the most out of your [Read more...]
Benchmarking single-row insert performance on Amazon EC2

I have been working for a customer benchmarking insert performance on Amazon EC2, and I have some interesting results that I wanted to share. I used a nice and effective tool, iiBench, developed by Tokutek. Though the “1 billion row insert challenge” for which this tool was originally built is long over, it still serves well for benchmarking purposes.

OK, let’s start off with the configuration details.

Configuration

First of all let me describe the EC2 instance type that I used.

EC2 Configuration

I chose the m2.4xlarge instance as that’s the instance type with the highest memory available, and memory is what really matters.
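
On an instance chosen for its memory, the setting that actually puts that memory to work for InnoDB is the buffer pool size; a quick illustrative check (not taken from the original post):

-- How large is the buffer pool, in GB, on the running server?
SELECT @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gb;
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';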

High-Memory Quadruple Extra Large
  [Read more...]
1 Billion Insertions – The Wait is Over!

iiBench measures the rate at which a database can insert new rows while maintaining several secondary indexes. We ran this for 1 billion rows with TokuDB and InnoDB starting last week, right after we launched TokuDB v5.2. While TokuDB completed it in 15 hours, InnoDB took 7 days.

The results are shown below. At the end of the test, TokuDB’s insertion rate remained at 17,028 inserts/second whereas InnoDB had dropped to 1,050 inserts/second. That is a difference of over 16x. Our complete set of benchmarks for TokuDB v5.2 can be found here.

  [Read more...]
Compression Benchmarking: Size vs. Speed (I want both)

I’m creating a library of benchmarks and test suites that will run as part of a Continuous Integration (CI) process here at Tokutek. My goal is to regularly measure several aspects of our storage engine over time: performance, correctness, memory/CPU/disk utilization, etc. I’ll also be running tests against InnoDB and other databases for comparative analysis. I plan on posting a series of blog entries as my CI framework evolves, for now I have the results of my first benchmark.

Compression is an always-on feature of TokuDB. There are no server/session variables to enable compression or change the compression level (one goal of TokuDB is to have as few tuning parameters as possible). My compression benchmark uses

  [Read more...]
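
A simple way to get the size half of a size-vs-speed comparison is to load the same data into a TokuDB table and an InnoDB table and compare their reported footprint (a generic sketch, not the author's CI harness; the schema name is illustrative):

-- Rough on-disk footprint per table and engine after loading identical data.
SELECT table_name, engine,
       ROUND(data_length  / 1024 / 1024) AS data_mb,
       ROUND(index_length / 1024 / 1024) AS index_mb
FROM information_schema.tables
WHERE table_schema = 'iibench';  -- illustrative schema name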
High Insertion Rates into a TokuDB Table with Durable Transactions

We recently made transactions in TokuDB 3.0 durable. We write table changes into a log file so that in the event of a crash, the table changes since the last checkpoint can be replayed. Durability requires the log file to be fsync’ed when a transaction is committed. Unfortunately, fsyncs are not free and may cost tens of milliseconds. This may seriously affect the insertion rate into a TokuDB table. How can one achieve high insertion rates in TokuDB with durable transactions?

Decrease the fsync cost

The fsync of the TokuDB log file writes all of the dirty log file data that is cached in memory by the operating system to the underlying storage system. The fsync time can be modeled with a simple linear equation: fsync time = N/R + K, where N is the amount of dirty data


  [Read more...]
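
To make the excerpt's model concrete with illustrative numbers (the excerpt is cut off before defining R and K, so take R as the log device's sequential write rate and K as the fixed per-fsync overhead): with N = 1 MB of dirty log data, R = 100 MB/s and K = 10 ms, a single fsync costs about 1 MB / (100 MB/s) + 10 ms = 20 ms, which caps a workload that fsyncs once per commit at roughly 50 commits per second per thread.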
OpenSQLCamp Videos online!

OpenSQLCamp was a huge success! I took videos of most of the sessions (we only had 3 video cameras and 4 rooms, so 2 sessions were not recorded). Unfortunately, I was busy with administrative work for OpenSQLCamp during the opening keynote and the first 15 minutes of session organizing, and by the time I got to the planning board it was already full, so I was not able to give a session.

  [Read more...]
OpenSQLCamp Lightning Talk Videos

OpenSQLCamp was a huge success! Not many folks have blogged about what they learned there. If you missed it, all is not lost: we did take videos of most of the sessions (we only had 3 video cameras and 4 rooms, so 2 sessions were not recorded).

All the videos have been processed, and I am working on uploading them to YouTube and filling in the video descriptions. Not all the videos are up yet; right now, all the lightning talks are available.


  [Read more...]
InnoDB purge - another potential performance problem
If you have delete-intensive workloads on InnoDB, then you need to understand how purge works. Dimitri has an interesting post on this. And I wrote about measuring purge lag. I haven't had to deal with this problem yet, but the insert benchmark has a new option to make it delete intensive. So, I think I can reproduce workloads that generate a lot of purge lag.
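
For reference, the usual quick check for purge lag on a running server is the history list length (a generic measurement, not necessarily the method used in the posts linked above):

-- "History list length" in the TRANSACTIONS section counts undo log
-- units not yet purged; it grows when purge falls behind.
SHOW ENGINE INNODB STATUS\G

-- In MySQL 5.6 and later the same counter is exposed as a metric:
SELECT name, count FROM information_schema.innodb_metrics
WHERE name = 'trx_rseg_history_len';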
Performance impact of prefetching in InnoDB
InnoDB prefetches blocks when it detects multiple accesses to blocks within an extent. Unfortunately, there are no metrics in the server to determine whether it is effective, and only weak metrics to determine how frequently it is done: counters incremented each time the readahead code prefetches one or more blocks, rather than once per prefetch request.
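
For readers on current MySQL releases: the server has since grown both counters and knobs for read-ahead, which is roughly the instrumentation this post found missing (these came later and are not part of the server versions described here):

-- Pages brought in by read-ahead, and how many were evicted before use.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_ahead%';
-- Tuning knobs: linear read-ahead trigger and optional random read-ahead.
SHOW GLOBAL VARIABLES LIKE 'innodb_read_ahead_threshold';
SHOW GLOBAL VARIABLES LIKE 'innodb_random_read_ahead';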

There are cases where prefetch improves performance. A query that does a full table scan was run with prefetch enabled and disabled. It was 35% slower with prefetch disabled.

Percona and Matt have written about potential performance problems from this



  [Read more...]
InnoDB checksum performance
Once again Domas is unhappy with some aspect of InnoDB performance and is doing crazy things with gdb to tune it. I made it faster by changing the checksum code to process one 32-bit word at a time rather than one byte at a time. This will be in a future Google patch and is enabled with the parameter innodb_fast_checksum. The new checksum is not compatible with the old one, so you must dump and reload the database to use it.
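
For readers on current releases: stock InnoDB later took a different route to the same goal, a selectable checksum algorithm with a hardware-assisted CRC32 option; this is a separate mechanism from the innodb_fast_checksum parameter described here:

-- crc32 uses hardware CRC instructions where available and is far cheaper
-- than the original byte-at-a-time InnoDB checksum.
SHOW GLOBAL VARIABLES LIKE 'innodb_checksum_algorithm';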

I measured the benefit using the insert benchmark from Tokutek on a server that can do a lot of IO. CPU overheads are measured using oprofile. The data below lists the percentage of time for the top 4 functions in mysqld. The checksum is computed in buf_calc_page_new_checksum. By using the fast checksum, the checksum overhead drops from 33.6%

  [Read more...]
