So, what impact does enabling the slow query log have on MySQL?
I decided to run some numbers. I’m using my laptop, because, as we all know, the most commonly deployed database servers have multiple cores, SSDs and many GB of RAM. For the curious: Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz
The benchmark is going to be:
mysqlslap -u root test -S var/tmp/mysqld.1.sock -q 'select 1;' --number-of-queries=1000000 --concurrency=64 --create-schema=test
Which is pretty much “run a whole bunch of nothing, excluding all the overhead of storage engines, optimizer… and focus on logging”.
My first run was going to be with the slow query log on. I’ll start the server with mysql-test-run.pl as it’s just easy:
Just for the pure insane fun of it, I accepted the challenge of “what can you do with the text format of the schedule?” for BarCampMel. I’m a database guy, so I wanted to load it into a database (which would be Drizzle), and I wanted it to be easy to keep it up to date (this is an unconference after all).
Why is there no index for CSV files? Indexes are very simple.
If the first column of your CSV file is in sorted order you can do a binary search to find your data. But what if you need to find data in the second or third column?
If you have a separate index file pointing to the first byte of each line, you could seek to that position in the CSV file and get your data. Given an index file containing only the needed column plus the byte offset and length of each line, you can binary-search the index to find the position of the matching line in the CSV file.
Here is a simple perl program to create just[Read more...]
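The full Perl program is behind the link above; as a rough sketch of the same idea in C++ rather than Perl (file names and the choice of column here are hypothetical), building and binary-searching such an index might look like:

```cpp
#include <algorithm>
#include <cstdint>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// One index entry: the value of the indexed column, plus the byte
// offset and length of the full line in the CSV file.
struct IndexEntry {
    std::string key;
    std::uint64_t offset;
    std::uint64_t length;
};

// Scan the CSV once, remembering where each line starts, and sort the
// entries by the chosen column so we can binary-search them later.
std::vector<IndexEntry> build_index(const std::string &path, std::size_t column) {
    std::vector<IndexEntry> index;
    std::ifstream in(path, std::ios::binary);
    std::string line;
    std::uint64_t offset = 0;
    while (std::getline(in, line)) {
        std::stringstream ss(line);
        std::string field;
        // Walk comma-separated fields until we reach the wanted column.
        for (std::size_t i = 0; i <= column && std::getline(ss, field, ','); ++i) {}
        index.push_back({field, offset, line.size()});
        offset += line.size() + 1;  // +1 for the newline
    }
    std::sort(index.begin(), index.end(),
              [](const IndexEntry &a, const IndexEntry &b) { return a.key < b.key; });
    return index;
}

// Binary-search the index, then seek straight to the matching line.
std::string lookup(const std::string &path, const std::vector<IndexEntry> &index,
                   const std::string &key) {
    auto it = std::lower_bound(
        index.begin(), index.end(), key,
        [](const IndexEntry &e, const std::string &k) { return e.key < k; });
    if (it == index.end() || it->key != key) return "";
    std::ifstream in(path, std::ios::binary);
    in.seekg(static_cast<std::streamoff>(it->offset));
    std::string line(it->length, '\0');
    in.read(&line[0], static_cast<std::streamsize>(it->length));
    return line;
}
```

The point of the sorted index is exactly the binary search: one seek into the CSV instead of a full scan, even for columns that aren’t sorted in the file itself.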
The update has already appeared in the Jenkins update centre, so you should already be able to upgrade to it.
From the desk of your new Bazaar plugin for Jenkins maintainer, I give you Version 1.18.
This release has two good bug fixes:
We’ve been running the same code as this release at Percona for about 2 months now (the second bugfix was one I wanted to test first before submitting upstream). This is the big fix that fixed all our problems with using bazaar with Jenkins in a large deployment.[Read more...]
For Drizzle and for all of the projects we work on at Percona we use the Bazaar revision control system (largely because it’s what we were using at MySQL and it’s what MySQL still uses). We also use Jenkins.
We have a lot of jobs in our Jenkins. A lot. We build upstream MySQL 5.1, 5.5 and 5.6, Percona Server 5.1, Percona Server 5.5, XtraBackup 1.6, 2.0 and 2.1. For each of these we also have the normal trunk builds as well as parameterised ones that allow a developer to test out a tree before they ask for it to be merged. We also have each of these products across seven operating systems and for each of those both x86 32bit and 64bit. If we weren’t already in the[Read more...]
In early 2006 Paul Hurley (ideeli’s CEO) and I (Mark Uhrmacher, CTO) were thinking about a new business. We had the idea to create a community based around great deals for Women’s fashion products where we saw a great deal of potential for great content and product sales. Now, over five years later, we’ve realized much of that vision. Our business success has been chronicled over the years in several places (see here and here). Though we’re very proud of our achievements there, that isn’t what this blog is about.
Insatiable Demand is about a mostly untold story. Over the past five-plus years we’ve built a phenomenal technology platform and team. From two people and three servers to a[Read more...]
“Although it is possible to create a view with a nonexistent DEFINER account, an error occurs when the view is referenced if the SQL SECURITY value is DEFINER but the definer account does not exist.”
How can this be possible?
At Percona, we’re now using sphinx for our documentation. We’re also using Jenkins for our continuous integration. We have compiler warnings from GCC being parsed by Jenkins using the built in filters, but there isn’t one for the sphinx warnings.
Luckily, in the configuration page for Jenkins, the Warnings plugin allows you to specify your own filters. I’ve added the following filter to process warnings from sphinx:[Read more...]
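Sphinx warnings come out looking like `path/to/file.rst:12: WARNING: message`. As an illustration of the kind of pattern such a filter has to match (shown here as a standalone C++ sketch with std::regex, not the plugin’s own filter syntax), parsing one of those lines into file, line number and message might look like:

```cpp
#include <regex>
#include <string>

struct SphinxWarning {
    std::string file;
    int line;
    std::string message;
};

// Parse one line of sphinx build output of the form
//   docs/source/index.rst:12: WARNING: undefined label: foo
// Returns false for lines that aren't sphinx warnings.
bool parse_sphinx_warning(const std::string &text, SphinxWarning &out) {
    static const std::regex re(R"(^(.+?):(\d+):\s*WARNING:\s*(.*)$)");
    std::smatch m;
    if (!std::regex_match(text, m, re)) return false;
    out.file = m[1].str();
    out.line = std::stoi(m[2].str());
    out.message = m[3].str();
    return true;
}
```

The same three capture groups (file, line, message) are what the Warnings plugin’s custom filter needs to map onto its fields.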
I spent my day doing updates to the automysqlbackup script. Here is some of what I’ve added over the last year.
The bug number fixes are from SourceForge. https://sourceforge.net/tracker/?atid=628964&group_id=101066&func=browse
# 2.5.5 MTG – (2011-07-21)
# – Bug – Typo Ureadable Unreadable config file line 424 – ID: 3316825
# – Bug – Change “#!/bin/bash” to “#!/usr/bin/env bash” – ID: 3292873
# – Bug – problem with excludes – ID: 3169562
# – Bug – Total disk space on symbolic links – ID: 3064547
# – Added DEBUG option to only
So… Baron blogged about wanting higher precision timers from the mysql binary and that running sed on the binary wasn’t cutting it. However… I am not one to give up that easily!
This is what LD_PRELOAD was made for! Evil nasty hacks to make your life easier!
By looking at the mysql.cc source code, I can easily work out how this works… I just have to override two calls: sysconf() (to fake how many ticks per second there are) and times() (to return a much higher precision number).
Combined with the sed hack on the binary to change the sprintf call to print out the higher precision number, we have:
mysql> select count(*) from t1;
+----------+
| count(*) |
+----------+
|   710720 |
[Read more...]
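A rough reconstruction of what such a preload shim could look like (this is my sketch of the technique, not the code from the post): fake the ticks-per-second that sysconf() reports and have times() hand back microsecond-resolution values, so the client’s elapsed-time arithmetic comes out with microsecond precision.

```cpp
// Build as a shared library and preload it, e.g.
//   g++ -shared -fPIC -o libfaketimer.so faketimer.cpp
//   LD_PRELOAD=./libfaketimer.so mysql ...
#include <sys/time.h>
#include <sys/times.h>
#include <unistd.h>

// noexcept matches glibc's __THROW on these declarations.
extern "C" long sysconf(int name) noexcept {
    // Pretend there are a million clock ticks per second, so elapsed
    // "ticks" divided by this value come out as seconds with
    // microsecond precision.  (This sketch ignores every other
    // sysconf query; a real shim would forward them via dlsym.)
    if (name == _SC_CLK_TCK) return 1000000L;
    return -1;
}

extern "C" clock_t times(struct tms *buf) noexcept {
    // Return wall-clock microseconds instead of real clock ticks.
    struct timeval tv;
    gettimeofday(&tv, nullptr);
    if (buf) {
        buf->tms_utime = buf->tms_stime = 0;
        buf->tms_cutime = buf->tms_cstime = 0;
    }
    return static_cast<clock_t>(tv.tv_sec * 1000000L + tv.tv_usec);
}
```

Because the client computes elapsed time as (times() delta) / sysconf(_SC_CLK_TCK), lying consistently in both functions keeps the arithmetic valid while raising the resolution.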
A long time ago, in a time that can only serve to make some feel old and others older, MySQL didn’t support transactions. Each statement was executed as it went, there was no ROLLBACK (or COMMIT or crash recovery etc). Then there were transactions. Other RDBMSs implement auto_commit functionality, but for MySQL users, we think of it as the magic compatibility mode that (mostly) makes applications written for MyISAM magically work on InnoDB (okay, and making “you should use transactions” a really easy consulting gig :)
I’m currently working on finishing up a patch that removes the implicit COMMIT from DDL operations in Drizzle. Instead, you get an error message saying that Transactional DDL is not currently supported. I see a future where we have one of two situations (possibly depending on[Read more...]
If your storage engine returns an error from rnd_init (or doStartTableScan as it’s named in Drizzle) and does not save this error and return it in any subsequent calls to rnd_next, your engine is buggy. Namely it is buggy in that a) an error may not be reported back to the user and b) everything may explode horribly when rnd_next is called after rnd_init returned an error.[Read more...]
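A minimal sketch of the non-buggy pattern being described (the class and error code here are made up for illustration; this is not Drizzle’s actual API): save the failure from the scan-init step and keep returning it from rnd_next.

```cpp
// Hypothetical error code standing in for whatever the engine hit.
static const int HA_ERR_GENERIC_DEMO = 4000;

// Minimal sketch of an engine cursor that remembers a scan-init
// failure instead of forgetting it.
class DemoCursor {
    int saved_error;       // error from doStartTableScan, if any
    bool init_failed_sim;  // test hook: force the init step to fail
public:
    explicit DemoCursor(bool fail_init)
        : saved_error(0), init_failed_sim(fail_init) {}

    int doStartTableScan() {
        int err = init_failed_sim ? HA_ERR_GENERIC_DEMO : 0;
        saved_error = err;  // remember it so later calls can report it
        return err;
    }

    int rnd_next() {
        // If the scan never started properly, report the original
        // error instead of walking off into uninitialised state.
        if (saved_error != 0) return saved_error;
        return 0;  // pretend we produced a row
    }
};
```

The key line is the one that stashes the error: without it, a caller that ignores the doStartTableScan return value gets undefined behaviour from rnd_next instead of the error.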
HandlerSocket is cool. But, it turns out there are a few issues.
Justin Swanhart points out that HandlerSocket currently lacks atomic operations. Since HandlerSocket uses different connections for reading and writing, you can’t increment/decrement a value without creating a race condition.
Still, the idea of skipping SQL interpretation and just reading the data you know you want is a great one. Writing data might be even better. But being able to use both SQL and NoSQL could be really wonderful. What if we could use complex queries to update complex tables and pluck values out as needed? For example, queries to analyze current weather conditions and produce forecasts that we could then retrieve via a location key? What about updating current condition data[Read more...]
This document was updated and tested for CentOS 6.0
In my last two posts I installed the HandlerSocket plugin into MariaDB and showed how to use it with Perl. That’s good, but if you are thinking of using HandlerSocket I’m guessing you have a very high traffic website and it’s written in PHP. In this post I’m going to connect HandlerSocket with PHP. In the next post I’ll discuss using HandlerSocket on a production system.
There are a couple of HandlerSocket PHP module projects. I tried both and found PHP-HandlerSocket to be the better of the two. Both are still rough, and neither has documentation beyond its source code. Maybe this will move things forward.
Here are the applications you need to have installed that were not installed in my last two posts. Run this to[Read more...]
While it’s great that MySQL 5.5 is GA with the 5.5.8 release (you can download it here), I’m rather disappointed that the bzr repositories on launchpad aren’t being kept up to date. At time of writing, it looked like this:
Yep – nothing for five weeks in the 5.5 repo – nothing since the 5.5.7 release :(
It’s not that there have been no changes either – the changelog has a decent number of fixes.
In case you haven’t heard yet, I’ve merged in the latest InnoDB from MySQL 5.5.7 into Drizzle. The innobase plugin is now based on InnoDB 1.1.3.
This gets a lot of bug fixes and improvements from 1.1.2 (and on 1.1.1). Enjoy!
I wonder if this comes under “Code Style” or not…
Anyway, Monty and I finished getting Drizzle ready for adding “-Wframe-larger-than=32768” as a standard compiler flag. This means that no function within the Drizzle source tree can use more than 32kb of stack – it’s a compiler warning, and with -Werror it becomes a build error.
GCC is not perfect at detecting stack usage, but it’s pretty good.
Why have we done this?
Well, there is a little bit of recursion in the server… and we can craft queries to blow a small stack (not so good). On MacOS X, the default thread stack size is only 512kb. That doesn’t leave many frames if 32kb of stack per frame is even remotely common.
I found[Read more...]
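As a standalone illustration (not code from Drizzle), this is the kind of function the flag catches: a 40kb local buffer blows the 32kb per-frame budget, and GCC warns when the file is compiled with -Wframe-larger-than=32768.

```cpp
#include <cstring>

// With g++ -Wframe-larger-than=32768 this function draws a warning
// along the lines of "the frame size of N bytes is larger than 32768
// bytes" (the exact number varies by compiler and platform).
int big_frame() {
    char buffer[40000];  // 40kb on the stack in a single frame
    std::memset(buffer, 'x', sizeof(buffer));
    return buffer[0];
}

// The usual fix is to move large buffers off the stack: heap
// allocation, or a smaller buffer that gets reused.
```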
Following on from my post yesterday on the various states of a Storage Engine, I said I’d have a go with the Cursor object too. A Cursor is used by the Drizzle kernel to get and set data in a table. There can be more than one cursor open at once, and more than one per thread. If your engine cannot cope with this, it is its responsibility to figure it out and return the appropriate errors.
Let’s look at a really simple operation, inserting a couple of rows and then reading them back via a full table scan.
Now, this graph is slightly incomplete as there is no doEndTableScan() call. But you can see[Read more...]
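The read side of that sequence can be sketched with a mock (the method names mirror the ones in the post; everything else here is invented for illustration): the kernel starts a scan, pulls rows until the cursor reports end-of-file, and then, in the complete picture, ends the scan.

```cpp
#include <string>
#include <vector>

// Hypothetical mock of a Cursor serving a full table scan.
class MockCursor {
    std::vector<std::string> rows{"row1", "row2"};
    std::size_t pos = 0;
public:
    int doStartTableScan() { pos = 0; return 0; }

    // Returns 0 and fills `out` while rows remain; non-zero at EOF
    // (the -1 stands in for a real end-of-file error code).
    int rnd_next(std::string &out) {
        if (pos >= rows.size()) return -1;
        out = rows[pos++];
        return 0;
    }

    int doEndTableScan() { return 0; }
};

// The kernel-side loop then looks like this:
std::vector<std::string> full_scan(MockCursor &c) {
    std::vector<std::string> result;
    c.doStartTableScan();
    std::string row;
    while (c.rnd_next(row) == 0) result.push_back(row);
    c.doEndTableScan();
    return result;
}
```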
Drizzle still has a number of quirks inherited from the MySQL Storage Engine API (e.g. BLOBs, row buffer, CREATE SELECT and lack of DDL transaction boundaries, key tuple format). One of the things we fixed a long time ago was to have proper methods for StorageEngines to be called for: startTransaction, startStatement, endStatement, commit and rollback.
If you’ve had to implement a transactional storage engine in MySQL you will be well aware of the pattern[Read more...]
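A toy sketch of what explicit boundary methods buy an engine (a recorder, not the actual Drizzle interface; the signatures are invented): the engine sees transaction and statement boundaries directly instead of having to infer them from the stream of row operations.

```cpp
#include <string>
#include <vector>

// Toy engine that just records which boundary hooks fire, in order.
// Method names follow the post; everything else is illustrative.
class RecordingEngine {
public:
    std::vector<std::string> calls;
    void startTransaction() { calls.push_back("startTransaction"); }
    void startStatement()   { calls.push_back("startStatement"); }
    void endStatement()     { calls.push_back("endStatement"); }
    void commit()           { calls.push_back("commit"); }
    void rollback()         { calls.push_back("rollback"); }
};

// Two statements inside one transaction produce an unambiguous
// sequence of boundary calls:
std::vector<std::string> run_two_statement_txn(RecordingEngine &e) {
    e.startTransaction();
    e.startStatement(); e.endStatement();
    e.startStatement(); e.endStatement();
    e.commit();
    return e.calls;
}
```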
It just hit trunk – if you have HailDB installed when you build Drizzle, you will now get the HailDB plugin built. You can even run Drizzle with it (remove innobase plugin, load HailDB plugin). Previously, we had problems building both due to symbol conflicts between innobase and HailDB. We’ve fixed this thanks to the linker.
So, enjoy HailDB… well, test it and report bugs that I can fix :)
Those of you following Drizzle fairly closely have probably noticed that we’ve lagged behind in InnoDB versions. I’m actively working on fixing that – both for the innobase plugin and for the HailDB library.
If building the HailDB plugin (which is planned to replace the innobase plugin), you’ll need the latest[Read more...]
Just in case you missed it, I’m rather thrilled that our latest tarball of Drizzle is named Beta. Specifically, we’re calling it Drizzle7. Seven is a very nice number, and it seems rather appropriate.
This release is for a stand alone database server. A lot of the infrastructure for replication is there (with testing), but the big thing we want to hammer on and get perfect here is Drizzle7 as a stand alone database server.
Can I trust it? If you trust InnoDB to store your data, then yes, you can trust Drizzle (it uses InnoDB too).
Yesterday, I reached a happy milestone in HailDB development. All compiler warnings left in the api/ directory (the public interface to the database engine) are now either probable/possible bugs (that we need to look at closely) or are warnings due to unfinished code (that we should finish).
There’s still a bunch of compiler warnings that we’ve inherited (HailDB compiles with lots of warnings enabled) that we have to get through, but a lot will wait until after we update the core to be based on InnoDB 1.1.