Displaying posts with tag: Benchmarks
Tuning for heavy writing workloads

On my previous post, there was a comment suggesting that I test Dimitri's db_STRESS benchmark on XtraDB. So I tested and tuned for that benchmark, and I will show you the tunings here. The same procedure should also apply to general heavy writing workloads.

First, <tuning peak performance>. Then, <tuning purge operation> to stabilize performance and avoid performance degradation.

<test condition>

Server:
PowerEdge R900, Four Quad Core E7320 Xeon, 2.13GHz, 32GB Memory, 16X2GB, 667MHz

db_STRESS:
32 sessions, RW=1, dbsize = 1000000, no thinktime

XtraDB: (mysql-5.1.39 + XtraDB-1.0.4-current)
innodb_io_capacity = 4000
innodb_support_xa = false
innodb_file_per_table = …
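For reference, a fuller fragment along these lines might look like the following; the buffer pool and log sizes here are my own assumptions for a 32GB box, not the post's exact values.

innodb_io_capacity = 4000
innodb_support_xa = false
innodb_buffer_pool_size = 26G       # assumed: give most of the 32GB box to the buffer pool
innodb_log_file_size = 1500M        # assumed: large logs to absorb write bursts
innodb_log_files_in_group = 2
innodb_flush_method = O_DIRECT
innodb_read_io_threads = 8
innodb_write_io_threads = 8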

[Read more]
Rethinking B-tree block sizes on SSDs

One of the first questions to answer when running databases on SSDs is what B-tree block size to use. There are a number of factors that affect this decision:

  • The type of workload
  • I/O time to read and write a block of that size
  • The size of the cache

That’s a lot of variables to consider. For this blog post we assume a fairly common OLTP scenario – a database that’s dominated by random point queries. We will also sidestep some of the more subtle caching effects by treating the caching algorithm as perfectly optimal, and assuming the cost of lookup in RAM is insignificant.

Even with these restrictions, it isn't immediately obvious what the optimal block size is. Before discussing SSDs, let's quickly address this problem on rotational drives. If we benchmark the number of IOPS for different block sizes on a typical rotational drive, we get the following graph:
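The shape of that graph follows from a simple service-time model. As a rough illustration only, with assumed figures of an 8.5 ms average seek, a 4.2 ms half-rotation (7200 RPM), and 100 MB/s of sequential transfer, the per-request time is seek + half a rotation + transfer:

PLAIN TEXT SQL:

-- Back-of-envelope only: assumed drive figures, not measured data.
SELECT block_kb,
       ROUND(1000 / (8.5 + 4.2 + block_kb / 100)) AS approx_iops
FROM (SELECT 4 AS block_kb
      UNION ALL SELECT 16
      UNION ALL SELECT 64
      UNION ALL SELECT 256) AS sizes;

With those assumed numbers, going from 4 KB to 256 KB blocks only drops the estimate from roughly 78 to 66 IOPS, because seek and rotation dominate the transfer time on rotational media.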

There are two …

[Read more]
Analyzing air traffic performance with InfoBright and MonetDB

By coincidence, Baron and I both played with InfoBright this week (see http://www.mysqlperformanceblog.com/2009/09/29/quick-comparison-of-myisam-infobright-and-monetdb/). Following Baron's example, I also ran the same load against MonetDB. After reading the comments on Baron's post I tried to load the same data into LucidDB as well, but I was not successful.

I wanted to analyze a bigger dataset, so I took publicly available data
(http://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time) about USA domestic flights, with information about flight length and delays.

The data is available from 1988 to 2009 in monthly chunks, so I downloaded 252 files (for the years 1988-2008) with …
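As an illustration of how such a per-month load might be scripted (the file names, table name, and database name here are hypothetical, not the ones from the post):

PLAIN TEXT CODE:

# Assumes the monthly CSVs were unzipped to files named ontime_YYYY_M.csv
# and that a matching `ontime` table already exists in database `flights`.
for year in $(seq 1988 2008); do
  for month in $(seq 1 12); do
    mysql flights -e "LOAD DATA LOCAL INFILE 'ontime_${year}_${month}.csv'
                      INTO TABLE ontime
                      FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
                      IGNORE 1 LINES"
  done
done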

[Read more]
How does the number of columns affect performance?

It is well understood that tables with long rows tend to be slower than tables with short rows. I was interested to check whether row length is the only thing that matters, or whether the number of columns we have to work with also plays an important role. I was interested in peak row-processing speed, so I looked at a full table scan with the data fitting completely in the OS cache. I created three tables: the first containing a single TINYINT column, which is almost the shortest type possible (CHAR(0) could take less space); a table with one TINYINT column and one CHAR(99) column; and a table with 100 TINYINT columns. The latter two tables have the same row length but differ in the number of columns by a factor of 50. Finally, I created a fourth table, also with 100 columns, but with one of them a VARCHAR, which makes the row format dynamic.

More specifically:

PLAIN TEXT SQL:

CREATE TABLE `t1` (
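The DDL is cut off in this excerpt. Purely as a hypothetical reconstruction of the four tables as described in the prose above (the table names beyond t1, the MyISAM engine, and the VARCHAR length are my assumptions), one way to generate them is:

PLAIN TEXT CODE:

# Generates the four table definitions described above and feeds them to mysql.
{
  echo "CREATE TABLE t1 (c1 TINYINT NOT NULL) ENGINE=MyISAM;"
  echo "CREATE TABLE t2 (c1 TINYINT NOT NULL, c2 CHAR(99) NOT NULL) ENGINE=MyISAM;"
  echo "CREATE TABLE t3 ($(seq 1 100 | sed 's/.*/c& TINYINT NOT NULL/' | paste -s -d, -)) ENGINE=MyISAM;"
  echo "CREATE TABLE t4 ($(seq 1 99 | sed 's/.*/c& TINYINT NOT NULL/' | paste -s -d, -), c100 VARCHAR(99) NOT NULL) ENGINE=MyISAM;"
} | mysql test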
[Read more]
SSD Market Continues to Heat Up

I had originally posted this on the 16th of September, but I was changing hosting providers at the time and it managed to fall through the cracks. So, if you didn't see it before, here it is.

I have long held the opinion that SSD (Solid State Disk) drives are going to be a major part of the database future. I just checked, and I wrote a blog post about them two years ago. I am not alone in this opinion. It has long been recognized that I/O access speed and throughput have not kept pace with the increases in CPU power and the steadily decreasing cost of RAM. Storage capacity has increased, but access speed and throughput have seen only marginal improvements.

Solid state disks have long held the promise of much lower access times, especially when it comes to random access. Even so, prices for SSD drives have been high and capacities small (compared to standard hard …

[Read more]
Which adaptive should we use?

As you may know, InnoDB has two limits on unflushed modified blocks in the buffer pool. One comes from the physical size of the buffer pool. The other is the age of the oldest modification, which is bounded by the capacity of the transaction log files.

Under a heavy update workload, the modification ages of many blocks cluster together. To reduce the maximum modification age, InnoDB then needs to flush many of these blocks within a short time if they have not been flushed already, and the resulting flushing storm hurts performance seriously.

We suggested the "adaptive_checkpoint" option, which flushes continuously, to avoid such a flushing storm. And now the newest InnoDB Plugin, 1.0.4, has a similar native option, "adaptive_flushing".
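In configuration terms, the two are enabled with different options depending on the build; a minimal my.cnf fragment along those lines (values shown only as an example) would be:

# Percona XtraDB builds (the older method):
innodb_adaptive_checkpoint = reflex
# InnoDB Plugin 1.0.4 and later (native):
innodb_adaptive_flushing = true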

Let's look at the adaptive flushing options in this post.

HOW THEY WORK

< adaptive_checkpoint=reflex (older method)>

[Read more]
A micro-benchmark of stored routines in MySQL

Ever wondered how fast stored routines are in MySQL? I just ran a quick micro-benchmark to compare the speed of a stored function against a "roughly equivalent" subquery. The idea -- and there may be shortcomings that are poisoning the results here, your comments welcome -- is to see how fast the SQL procedure code is at doing basically the same thing the subquery code does natively (so to speak).

Before we go further, I want to make sure you know that the queries I'm writing here are deliberately mis-optimized to force a bad execution plan. You should never use IN() subqueries the way I do, at least not in MySQL 5.1 and earlier.

I loaded the World sample database and cooked up this query:

PLAIN TEXT SQL:

  SELECT sql_no_cache sum(ci.Population) FROM City AS ci
    WHERE …
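The queries are cut off in this excerpt. Purely as an illustration of the kind of comparison described, here is a hypothetical pairing on the World database (the stored function and its name are mine, not the post's):

PLAIN TEXT SQL:

-- IN() subquery version (MySQL 5.1 executes this kind of IN() subquery badly, as noted above):
SELECT SQL_NO_CACHE SUM(ci.Population) FROM City AS ci
  WHERE ci.CountryCode IN
        (SELECT co.Code FROM Country AS co WHERE co.Continent = 'Europe');

-- A "roughly equivalent" stored function doing the membership test procedurally:
DELIMITER //
CREATE FUNCTION country_in_continent(p_code CHAR(3), p_continent VARCHAR(20))
  RETURNS INT READS SQL DATA
BEGIN
  RETURN (SELECT COUNT(*) FROM Country
          WHERE Code = p_code AND Continent = p_continent);
END//
DELIMITER ;

SELECT SQL_NO_CACHE SUM(ci.Population) FROM City AS ci
  WHERE country_in_continent(ci.CountryCode, 'Europe');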
[Read more]
RethinkDB performance data.

It’s been a busy and exciting week since we announced RethinkDB. Of all the feedback we received, the most common request was for performance numbers. Before the launch our top priority was correctness. We spent most of our time testing RethinkDB with Wordpress and adding the missing features. As a result, performance suffered. In the past week we tuned the engine back up to high performance. We’re still far from finished with the improvements we want to make, but we feel that we’ve reached a level of performance we can be proud to display.

We wrote our original benchmarking tool in Python, but during our latest benchmarks, we noticed that it was taking about as much time as the engine itself, hiding our real performance numbers. We now have a very small Objective-C program (<900 lines) that uses prepared statements in a tight loop, and times only across the mysql_stmt_execute() call.  For inserts, the …
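As a sketch of what that kind of measurement loop looks like against the MySQL C API (this is not RethinkDB's actual tool; the connection details and table are hypothetical):

PLAIN TEXT CODE:

/* Times only the mysql_stmt_execute() call for a prepared INSERT in a tight loop. */
#include <mysql.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void) {
    MYSQL *conn = mysql_init(NULL);
    if (!mysql_real_connect(conn, "localhost", "bench", "bench", "benchdb", 0, NULL, 0)) {
        fprintf(stderr, "connect: %s\n", mysql_error(conn));
        return 1;
    }

    const char *sql = "INSERT INTO t (id, val) VALUES (?, ?)";
    MYSQL_STMT *stmt = mysql_stmt_init(conn);
    if (mysql_stmt_prepare(stmt, sql, strlen(sql))) {
        fprintf(stderr, "prepare: %s\n", mysql_stmt_error(stmt));
        return 1;
    }

    long long id = 0, val = 42;
    MYSQL_BIND bind[2];
    memset(bind, 0, sizeof(bind));
    bind[0].buffer_type = MYSQL_TYPE_LONGLONG;  bind[0].buffer = &id;
    bind[1].buffer_type = MYSQL_TYPE_LONGLONG;  bind[1].buffer = &val;
    mysql_stmt_bind_param(stmt, bind);

    const long long N = 100000;
    double total = 0.0;                     /* seconds spent inside mysql_stmt_execute() */
    for (id = 1; id <= N; id++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (mysql_stmt_execute(stmt)) {     /* only this call is timed */
            fprintf(stderr, "execute: %s\n", mysql_stmt_error(stmt));
            return 1;
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        total += (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }
    printf("%lld inserts, %.0f inserts/sec (execute time only)\n", N, N / total);

    mysql_stmt_close(stmt);
    mysql_close(conn);
    return 0;
}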

[Read more]
NILFS – maybe not yet

Inspired by NILFS: A File System to Make SSDs Scream, and by some customers asking whether they should try NILFS on their SSD disks, I decided to run quick tests to see how it performs.

Installation on our Ubuntu 8.10 box with an SSD disk (Intel X25-E, 32GB) was pretty straightforward, and I got a NILFS partition without problems. After that I ran this script for sysbench fileio:

PLAIN TEXT CODE:

  for size in 256M 16G; do
     for mode in seqwr seqrd rndrd rndwr rndrw; do
        ./sysbench --test=fileio --file-num=1 --file-total-size=$size prepare
        for threads in 1 4 8; do
           echo PARAMS $size $mode $threads > sysbench-size-$size-mode-$mode-threads-$threads
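The script is cut off here. A complete loop of that shape (sysbench 0.4.x fileio syntax; the run length and cleanup placement are my assumptions) would look roughly like:

PLAIN TEXT CODE:

#!/bin/sh
# Hypothetical reconstruction of a sysbench fileio sweep of this shape.
for size in 256M 16G; do
  for mode in seqwr seqrd rndrd rndwr rndrw; do
    ./sysbench --test=fileio --file-num=1 --file-total-size=$size prepare
    for threads in 1 4 8; do
      out=sysbench-size-$size-mode-$mode-threads-$threads
      echo PARAMS $size $mode $threads > $out
      ./sysbench --test=fileio --file-num=1 --file-total-size=$size \
                 --file-test-mode=$mode --num-threads=$threads \
                 --max-time=180 --max-requests=0 run >> $out
    done
    ./sysbench --test=fileio --file-num=1 --file-total-size=$size cleanup
  done
done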
[Read more]
Dissection of EC2 / EBS volume

While preparing the XtraDB template for EC2, I wanted to understand what IO characteristics we can expect from an EBS volume (I am speaking about a single volume, not RAID as in my previous post). Yasufumi did some benchmarks and pointed me to interesting behavior: there seem to be several levels of caching on an EBS volume.

Let me show you. I ran a sysbench random read IO benchmark on files with sizes from 256M to 5GB in 256M steps. And, as Morgan suggested, I first wrote to the whole volume beforehand to avoid the first-write penalty:

dd if=/dev/zero of=/dev/sdk bs=1M

For reference, the script is:

PLAIN TEXT CODE:

  #!/bin/sh
  set -u
  set -x
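The script is truncated in this excerpt. A sweep of the shape described above (sysbench 0.4.x fileio syntax; the thread count and run length are my assumptions) might continue along these lines:

PLAIN TEXT CODE:

#!/bin/sh
set -u
set -x
# Hypothetical sketch: random-read runs on file sizes from 256M to 5GB in 256M steps.
for mb in $(seq 256 256 5120); do
  ./sysbench --test=fileio --file-num=1 --file-total-size=${mb}M prepare
  ./sysbench --test=fileio --file-num=1 --file-total-size=${mb}M \
             --file-test-mode=rndrd --num-threads=8 \
             --max-time=180 --max-requests=0 run > ebs-rndrd-${mb}M
  ./sysbench --test=fileio --file-num=1 --file-total-size=${mb}M cleanup
done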
[Read more]