Displaying posts with tag: Benchmarks
MySQL-Memcached or NOSQL Tokyo Tyrant – part 2

Part 1 of our series set up our "test" application and looked at boosting its performance by buffering MySQL with memcached. Our test application is simple and requires only three basic operations per transaction: two reads and one write. Using memcached combined with MySQL, we ended up getting nearly a 10X performance boost. Now we are going to look at what we could achieve if we did not have to write to the database at all. So let's look at what happens if we push everything, including writes, into memcached.
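In code, the all-memcached variant of the transaction is about as simple as it sounds. Here is a sketch using the python-memcache client (the key names and game logic are made up, not the post's actual benchmark code):

  import memcache  # python-memcached client, assumed installed

  mc = memcache.Client(["127.0.0.1:11211"])

  def play_turn(user_id, game_id):
      # Two reads, straight from memcached
      player = mc.get("player:%d" % user_id)
      game = mc.get("game:%d" % game_id)
      # ... apply the game logic to player and game here ...
      # One write: the updated record goes back to memcached only.
      # MySQL is never touched, which is where the speed comes from
      # (and why a memcached restart loses these writes).
      mc.set("player:%d" % user_id, player)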

Wow, that's shockingly fast, isn't it? I guess being completely in memory helps for this app. What is very interesting is that accessing 100% of the data in memcached gives very similar numbers to accessing 100% of the data in memory in the DB (part 1 …

[Read more]
MySQL-Memcached or NOSQL Tokyo Tyrant – part 1

All too often people force themselves into using a database like MySQL with no thought about whether it is the best solution to their problem. Why? Because their other applications use it, so why not the new application? Over the past couple of months I have been doing a ton of work for clients who use their database like most people use memcached: look up a row based on a key, update the data in the row, stuff the row back into the database. Rinse and repeat. Sure, these setups vary sometimes, throwing in a "lookup" via username, or even the rare count. But for the most part they are designed to be simple.
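The pattern in question is only a few lines of code. A generic sketch with a Python DB-API driver (pymysql assumed; the table and column names are invented):

  import pymysql  # any MySQL DB-API driver works the same way

  conn = pymysql.connect(host="localhost", user="app", database="game")

  def bump_score(player_id, points):
      with conn.cursor() as cur:
          # Look up a row based on a key ...
          cur.execute("SELECT score FROM players WHERE id = %s", (player_id,))
          (score,) = cur.fetchone()
          # ... update the data in the row, stuff it back in the database.
          cur.execute("UPDATE players SET score = %s WHERE id = %s",
                      (score + points, player_id))
      conn.commit()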

A classic example is a simple online game. An online game may only require that the application retrieve a single record from the database. The record may contain all the vital stats for the game, be updated, and be stuffed back into the database. You would be surprised how many people use …

[Read more]
Tuning for heavy writing workloads

In a comment on my previous post, it was suggested that I test Dimitri's db_STRESS benchmark on XtraDB. So I tested and tuned for that benchmark, and I will show you the tunings. They should also serve as a tuning procedure for general heavy-write workloads.

First, <tuning peak performance>. Next, <tuning purge operation>, to stabilize performance and avoid performance drops.
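As a side note on the purge part (my own quick helper, not from the post): the easiest way to see whether purge keeps up during such a run is to poll the history list length out of SHOW ENGINE INNODB STATUS:

  import re, time
  import pymysql  # assumed driver; any MySQL client library works

  conn = pymysql.connect(host="localhost", user="root")

  while True:
      with conn.cursor() as cur:
          cur.execute("SHOW ENGINE INNODB STATUS")
          status_text = cur.fetchone()[2]   # (Type, Name, Status) -> Status
      # A growing "History list length" means purge is falling behind the writes
      m = re.search(r"History list length (\d+)", status_text)
      if m:
          print(time.strftime("%H:%M:%S"), m.group(1))
      time.sleep(10)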

<test condition>

Server:
PowerEdge R900, four quad-core Xeon E7320, 2.13GHz, 32GB memory (16 x 2GB, 667MHz)

db_STRESS:
32 sessions, RW=1, dbsize = 1000000, no thinktime

XtraDB: (mysql-5.1.39 + XtraDB-1.0.4-current)
innodb_io_capacity = 4000
innodb_support_xa = false
innodb_file_per_table = …

[Read more]
Rethinking B-tree block sizes on SSDs

One of the first questions to answer when running databases on SSDs is what B-tree block size to use. There are a number of factors that affect this decision:

  • The type of workload
  • I/O time to read and write the block size
  • The size of the cache

That’s a lot of variables to consider. For this blog post we assume a fairly common OLTP scenario – a database that’s dominated by random point queries. We will also sidestep some of the more subtle caching effects by treating the caching algorithm as perfectly optimal, and assuming the cost of lookup in RAM is insignificant.
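As a back-of-envelope illustration of why the answer isn't obvious (my own toy model with made-up device numbers, not the post's benchmark), compare tree depth against per-I/O cost as the block size grows:

  import math

  ACCESS_MS = 0.1        # assumed per-I/O access latency, ms (SSD-like)
  BANDWIDTH = 200e6      # assumed transfer rate, bytes/s
  N_KEYS = 1e9           # rows indexed by the B-tree
  ENTRY_BYTES = 16       # assumed key+pointer size per node entry

  for block in (2 ** k for k in range(9, 17)):          # 512B .. 64KB
      fanout = block / ENTRY_BYTES                      # entries per node
      depth = math.ceil(math.log(N_KEYS) / math.log(fanout))
      io_ms = ACCESS_MS + block / BANDWIDTH * 1000      # latency + transfer
      print("%6dB: depth %d, ~%.2f ms per point query" % (block, depth, depth * io_ms))

Bigger blocks shrink the depth, but each I/O gets more expensive; where the curves cross depends on exactly the variables listed above.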

Even with these restrictions it isn't immediately obvious what the optimal block size is. Before discussing SSDs, let's quickly address this problem on rotational drives. If we benchmark the number of IOPS for different block sizes on a typical rotational drive, we get the following graph:

There are two …

[Read more]
Analyzing air traffic performance with InfoBright and MonetDB

By coincidence, Baron and I both played with InfoBright this week (see http://www.mysqlperformanceblog.com/2009/09/29/quick-comparison-of-myisam-infobright-and-monetdb/). Following Baron's example, I also ran the same load against MonetDB. After reading the comments on Baron's post I also tried to load the same data into LucidDB, but I was not successful.

I wanted to analyze a bigger dataset, so I took publicly available data about USA domestic flights, with information about flight length and delays:
http://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time

The data is available from 1988 to 2009 in chunks per month, so I downloaded 252 files (for the years 1988-2008) with …
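For reference, 21 years x 12 months is where the 252 comes from; a download loop might look like this (the URL template below is hypothetical, the real links come from the BTS form above):

  import urllib.request

  # Hypothetical naming scheme; substitute the actual links from transtats.bts.gov
  URL = "http://example.com/ontime/On_Time_Performance_%d_%d.zip"

  for year in range(1988, 2009):          # 1988..2008 inclusive: 21 years
      for month in range(1, 13):          # 12 files per year -> 252 total
          urllib.request.urlretrieve(URL % (year, month),
                                     "ontime_%d_%02d.zip" % (year, month))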

[Read more]
How does the number of columns affect performance?

It is well understood that tables with long rows tend to be slower than tables with short rows. I was interested to check whether row length is the only thing that matters, or whether the number of columns we have to work with also plays an important role. I was interested in peak row-processing speed, so I looked at a full table scan in the case where the data fits completely in the OS cache. I created three tables: the first containing a single TINYINT column, which is almost the shortest type possible (CHAR(0) could take less space); a table with one TINYINT column and a CHAR(99) column; and a table with 100 TINYINT columns. The latter two tables have the same row length but differ in column count by a factor of 50. Finally, I created a fourth table, also with 100 columns, but with one of them a VARCHAR, which causes the row format to be dynamic.
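As a rough reconstruction of that setup (my sketch with hypothetical names; the post's own DDL begins below), the wide tables are easiest to generate programmatically:

  # Reconstruction of the four test tables described above (names are made up)
  cols100 = ", ".join("c%d TINYINT NOT NULL" % i for i in range(100))
  cols99 = ", ".join("c%d TINYINT NOT NULL" % i for i in range(99))
  ddl = [
      "CREATE TABLE t1 (c0 TINYINT NOT NULL)",                          # shortest possible row
      "CREATE TABLE t2 (c0 TINYINT NOT NULL, filler CHAR(99) NOT NULL)",  # same row length as t3
      "CREATE TABLE t3 (%s)" % cols100,                                 # 100 columns, fixed format
      "CREATE TABLE t4 (%s, c99 VARCHAR(99) NOT NULL)" % cols99,        # VARCHAR => dynamic format
  ]
  for stmt in ddl:
      print(stmt + ";")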

More specifically:

PLAIN TEXT SQL:

  1. CREATE TABLE `t1` (
[Read more]
SSD Market Continues to Heat Up

I had originally posted this on the 16th of September, but I was changing hosting providers and such at the time, and it managed to fall through the cracks. So, if you didn't see it before, here it is.

I have long held the opinion that SSD (Solid State Disk) drives are going to be a major part of the database future; I just checked, and I wrote a blog post about them two years ago. I am not alone in this opinion. It has long been recognized that neither I/O access speed nor throughput has kept pace with the increases in CPU power and the steadily decreasing cost of RAM. Storage space has grown, but access speed and throughput have seen only marginal improvements.

Solid state disks have long held the promise of much lower access times, especially when it comes to random access. Even so, prices for SSD drives have been high and capacities small (compared to standard hard …

[Read more]
Which adaptive should we use?

As you may know, InnoDB has two limits on unflushed modified blocks in the buffer pool. One comes from the physical size of the buffer pool. The other is the age of a block's oldest modification, which is bounded by the capacity of the transaction log files.

Under a heavy-update workload, the modified ages of many blocks become clustered. To reduce the maximum modified age, InnoDB then has to flush many of those blocks in a short time if they have not been flushed already, and the resulting flushing storm seriously hurts performance.

We suggested the "adaptive_checkpoint" option, which flushes constantly, to avoid such a flushing storm. And finally, the newest InnoDB Plugin, 1.0.4, has a similar native option, "adaptive_flushing".
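Conceptually, the two differ roughly like this (my simplification in pseudo-Python, not the actual InnoDB/XtraDB source): reflex mode reacts once the checkpoint age crosses fixed thresholds, while adaptive flushing tries to match the flush rate to the rate at which redo log is produced:

  def reflex_flush_rate(checkpoint_age, log_capacity, max_io):
      # adaptive_checkpoint=reflex style: step up flushing at fixed thresholds
      if checkpoint_age > 0.875 * log_capacity:
          return max_io                # close to the limit: flush as hard as we can
      if checkpoint_age > 0.75 * log_capacity:
          return max_io // 2           # getting old: flush moderately
      return 0                         # young enough: leave it to normal flushing

  def adaptive_flush_rate(redo_bytes_per_sec, redo_bytes_per_dirty_page, max_io):
      # adaptive_flushing (InnoDB Plugin 1.0.4) style: estimate how many pages
      # per second must be flushed to keep pace with redo generation
      pages_per_sec = redo_bytes_per_sec / redo_bytes_per_dirty_page
      return min(int(pages_per_sec), max_io)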

Let's check the adaptive flushing options in this post.

HOW THEY WORK

<adaptive_checkpoint=reflex (older method)>

[Read more]
A micro-benchmark of stored routines in MySQL

Ever wondered how fast stored routines are in MySQL? I just ran a quick micro-benchmark to compare the speed of a stored function against a "roughly equivalent" subquery. The idea -- and there may be shortcomings that are poisoning the results here; your comments are welcome -- is to see how fast the SQL procedure code is at doing basically the same thing the subquery does natively (so to speak).

Before we go further, I want to make sure you know that the queries I'm writing here are deliberately mis-optimized to force a bad execution plan. You should never use IN() subqueries the way I do, at least not in MySQL 5.1 and earlier.
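For what it's worth, the harness for this kind of micro-benchmark can be tiny. A sketch along these lines is enough (pymysql assumed; the two queries are placeholders, and the stored routine name is made up):

  import time
  import pymysql  # assumed driver

  conn = pymysql.connect(host="localhost", user="root", database="world")

  def avg_ms(sql, runs=100):
      with conn.cursor() as cur:
          start = time.perf_counter()
          for _ in range(runs):
              cur.execute(sql)
              cur.fetchall()            # drain the result set every run
      return (time.perf_counter() - start) * 1000 / runs

  # Placeholder pair: the native subquery form vs. a stored-function wrapper
  print(avg_ms("SELECT sql_no_cache sum(Population) FROM City"))
  print(avg_ms("SELECT sql_no_cache sum_population()"))   # hypothetical routine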

I loaded the World sample database and cooked up this query:

PLAIN TEXT SQL:

  1. SELECT sql_no_cache sum(ci.Population) FROM City AS ci
  2.   WHERE …
[Read more]
RethinkDB performance data.

It’s been a busy and exciting week since we announced RethinkDB. Of all the feedback we received, the most common request was for performance numbers. Before the launch our top priority was correctness. We spent most of our time testing RethinkDB with Wordpress and adding the missing features. As a result, performance suffered. In the past week we tuned the engine back up to high performance. We’re still far from finished with the improvements we want to make, but we feel that we’ve reached a level of performance we can be proud to display.

We wrote our original benchmarking tool in Python, but during our latest benchmarks we noticed that it was taking about as much time as the engine itself, hiding our real performance numbers. We now have a very small Objective-C program (<900 lines) that uses prepared statements in a tight loop and times only the mysql_stmt_execute() call. For inserts, the …
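The measurement idea translates to any language. A minimal pseudo-Python version of the same harness (our sketch, not RethinkDB's actual Objective-C tool, with a hypothetical workload table) starts the clock only around the execute call:

  import time
  import pymysql  # assumed driver; stand-in for the MySQL C API used in the real tool

  conn = pymysql.connect(host="localhost", user="root", database="bench")
  cur = conn.cursor()

  INSERT = "INSERT INTO t (k, v) VALUES (%s, %s)"   # hypothetical workload table
  elapsed = 0.0
  for i in range(100000):
      row = (i, "x" * 16)               # generate data outside the timed region
      t0 = time.perf_counter()
      cur.execute(INSERT, row)          # time only the execute, like mysql_stmt_execute()
      elapsed += time.perf_counter() - t0
  conn.commit()
  print("%.0f inserts/sec" % (100000 / elapsed))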

[Read more]