As previously mentioned, Darren Cassar has been working on a new automated installer for the DBbenchmark program. It’s now available for download. All you need to do is save it to the directory you want to install to, make it executable with “chmod 700 installer.sh”, and then run it with “./installer.sh”.
How long can it take MySQL with InnoDB tables to shut down? It can be quite a while.

In the default configuration, innodb_fast_shutdown=1, the main job InnoDB has to do to complete a shutdown is flushing dirty buffers. The number of dirty buffers in the buffer pool varies depending on innodb_max_dirty_pages_pct, as well as the workload and innodb_log_file_size, and can be anywhere from 10% to 90% in real-life workloads. The Innodb_buffer_pool_pages_dirty status variable will show you the actual number. The flush speed in turn depends on a number of factors. First is your storage configuration: you may be looking at less than 200 writes/sec for a single entry-level hard drive, up to tens of thousands of writes/sec for a high-end SSD card. Flushing can be done using multiple threads (in XtraDB and the InnoDB Plugin, at least), so it scales well across multiple hard drives. The second important variable is your …
So far the benchmarking script supports Linux, FreeBSD, and OSX. I’m installing virtual machines today to get ready for development on the next OS that the community wants to have supported. Vote today for your choice. Development will begin Friday 2010-09-03.
At yesterday’s Eigenbase Developer Meetup at SQLstream‘s offices in San Francisco, we arrived at a new logo for LucidDB. DynamoBI is thrilled to have supported and funded the design contest to arrive at our new mascot. Over the coming months you’ll see the logo make its way out to the existing luciddb.org sites, wiki sites, etc. I’m really happy to have a logo that matches the nature of our database - BAD ASS!
Often, the first step in evaluating and deploying a database is to load an existing dataset into the database. In the latest version, TokuDB makes use of multi-core parallelism to speed up loading (and new index creation). Using the loader, MySQL tables using TokuDB load 5x-8x faster than with previous versions of TokuDB.
Measuring Load Performance
We generated several different datasets to measure the performance of TokuDB when doing a LOAD DATA INFILE … command. To characterize performance, we vary
- rows to load
- keys per row
- row length (including keys)

All generated keys, including the primary key, are random 8-byte values. The remaining data, needed to pad the row out to the specified length, is text.
Two files are produced as part of data generation.
- data file, containing ‘|’ separated fields
- sql file, containing the …
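As a rough sketch of how such a run looks (the table definition and file name here are hypothetical, not from the benchmark itself):

    -- A TokuDB table with a primary key and two secondary keys
    CREATE TABLE bench (
      pk  BINARY(8) NOT NULL PRIMARY KEY,
      k1  BINARY(8) NOT NULL,
      k2  BINARY(8) NOT NULL,
      pad TEXT,
      KEY (k1),
      KEY (k2)
    ) ENGINE=TokuDB;

    -- Bulk-load the '|'-separated data file; this is the statement
    -- the parallel loader accelerates
    LOAD DATA INFILE '/tmp/bench.dat' INTO TABLE bench
      FIELDS TERMINATED BY '|';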
 
I get a number of questions about contention and “stuck in…” messages, so here is some explanation of:
- Contention
- Thread Stuck in
- What you can do about it

In 99% of cases, the contentions written to the out file of the data nodes (ndb_X_out.log) are nothing to pay attention to.
  
    sendbufferpool waiting for lock, contentions: 6000 spins: 489200
    sendbufferpool waiting for lock, contentions: 6200 spins: 494721
  
Each spin is a read from the L1 cache (4 cycles on a 3.2 GHz Nehalem, so about a nanosecond):
1 spin = 1.25E-09 seconds (1.25 ns)

In the above we have:
(494721 - 489200) / (6200 - 6000) ≈ 27 spins/contention
Time spent on a contention = 27 x 1.25E-09 = 3.375E-08 seconds (0.03375 us)

So we don’t have a problem.
  
  Another example (here is a lock guarding a job buffer (JBA =
  JobBuffer A, in …
Join us at this live event in Milan to better understand what’s new with MySQL. You will learn more about the current and future state of MySQL, now part of the Oracle family of products. We will also cover Oracle’s investment in MySQL, aimed at making MySQL even better.

In particular, the following topics will be discussed:
- Oracle’s MySQL Strategy
- What’s New for:
  - The MySQL Server
  - MySQL Cluster
  - MySQL Enterprise
  - MySQL Workbench

Stay tuned because we are organizing a similar event in Rome that will be announced soon. Attendance is free, but you’ll need to register in advance. Seats are limited, …
Open Query is now three years old! We initially started with consulting and training services, and extended these with our proactive subscriptions, which also offer system administration and monitoring.
So how is it going? Pretty well. We’ve been profitable from the start, without funding (beyond a few hundred $ startup costs paid by Arjen) or any credit – by choice. Our objective has never been to grow ridiculously in terms of revenue or number of customers; we simply charge reasonable prices for real service. Right now we have dozens of clients on an ongoing basis, a neat trickle of new clients, and Open Query sustains the livelihood and lifestyle of a number of people.
For me (Arjen), the three year mark is particularly interesting, since most startups do not make it past their first two years. With our different approach to doing business, we’ve seen our fair share of …
Seriously, it did. Sorta.
I use Workbench for my daily work, and it’s a great tool. If you haven’t tried the 5.2 release yet, you should. While performing some maintenance, I happened to issue a DELETE statement against a table which had no indexes (it was 10 rows), and Workbench complained:
    Error Code: 1175
    You are using safe update mode and you tried to update a table without a WHERE that uses a KEY column
It turns out this is a new feature in 5.2.26 (and is still there in 5.2.27) – Workbench now uses the equivalent of the --safe-updates mode of the mysql command-line client (also known as the --i-am-a-dummy option – seriously). This wasn’t exactly convenient for me, …
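If you do occasionally need an unkeyed DELETE, the check can be toggled per session rather than turned off globally (a minimal sketch; the table and value are made up):

    -- Rejected under safe update mode: no key column in the WHERE clause
    DELETE FROM notes WHERE title = 'scratch';   -- Error Code: 1175

    -- Disable the check for this connection only, then restore it
    SET SQL_SAFE_UPDATES = 0;
    DELETE FROM notes WHERE title = 'scratch';
    SET SQL_SAFE_UPDATES = 1;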
In database modeling, an m:n relationship is usually resolved by an additional table. But what if this relation is used only for archiving and the number of links in the resulting table is not too high? In that context, I got the idea to store all referring IDs as a CSV string directly in a TEXT column of one of the referring tables. I came to this idea because otherwise I would have to build complicated foreign keys, and this way I also save one additional table. Certainly, this only makes sense if the data is not frequently accessed as a foreign key. Nevertheless, I would like to tackle the problem, even if the implementation is very MySQL-oriented.
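As a sketch of the idea (the table and column names here are mine, purely for illustration), the CSV list can still be queried with MySQL’s FIND_IN_SET():

    -- One side of the m:n relation keeps its linked IDs as a CSV string
    CREATE TABLE document (
      id      INT UNSIGNED NOT NULL PRIMARY KEY,
      title   VARCHAR(100) NOT NULL,
      tag_ids TEXT NOT NULL   -- e.g. '3,17,42' instead of a link table
    ) ENGINE=InnoDB;

    -- Find every document that references tag 17
    SELECT id, title
    FROM document
    WHERE FIND_IN_SET('17', tag_ids) > 0;

Note that FIND_IN_SET() cannot use an index, which is exactly why this layout only pays off when the links are rarely queried.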