Showing entries 22876 to 22885 of 44120
BLOBS in the Drizzle/MySQL Storage Engine API

Another (AFAIK) undocumented part of the Storage Engine API:

We all know what a normal row looks like in the Drizzle/MySQL row format (a NULL bitmap followed by the column data):

Nothing that special. It’s a fixed-size buffer; Field objects reference into it, and you read values out of it and write them into your engine. However, when you get to BLOBs, we can’t use a fixed-size buffer, as BLOBs may be quite large. So with BLOBs, the in-row part starts with the length of the BLOB (1, 2, 3 or 4 bytes – in Drizzle it’s only 3 or 4 bytes now, and soon only 4 bytes once we fix a bug that isn’t interesting to discuss here). The second part of the in-row data is a pointer to the location in memory where the BLOB is stored. So a row that has a BLOB in it looks something like this:

[Read more]
Methods for searching errors in SQL application

Some time ago I wrote, in Russian, a guide to finding errors in SQL applications.

To be honest, I wrote it with the personal aim of having a text I could easily refer users to when they ask how to find a particular problem. But it makes less sense with no English version, so I have now started translating it to English and publishing it. The introduction and the first chapter are ready.

You can find it at http://sql-error.microbecal.com/en/index.html. Comments and corrections of mistakes are welcome here.

FlashCache: tpcc workload

This is my last post in the series on FlashCache testing, with the cache placed on an Intel SSD card.

This time I am using a tpcc-like workload with 1000 warehouses (which gives 100GB of data) on a Dell PowerEdge R900 with 32GB of RAM, 22GB of it allocated to the buffer pool, and I put 70GB on the FlashCache partition (simply to test the case where the data does not fit into the cache partition).

Please note that in this configuration the benchmark is very write-intensive, and it is not going to be easy for FlashCache: in the background it has to write blocks through to the RAID anyway, so the final write rate is limited by the RAID. All performance benefits will therefore come from read hits.

The full report and results are available on the benchmark Wiki
http://www.percona.com/docs/wiki/benchmark:flashcache:tpcc:start.

Short version of …

[Read more]
Dirty pages, fast shutdown, and write combining

One of the things that makes a traditional transactional database hard to make highly available is a relatively slow shutdown and start-up time. Applications typically delegate most or all writes to the database, which tends to run with a lot of “dirty” data in its (often large) memory. At shutdown time, the dirty memory needs to be written to disk, so the recovery routine doesn’t have to run at startup. And even upon a clean startup, the database probably has to warm up, which can also take a very long time.
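
One standard InnoDB technique (not spelled out in this post, but directly relevant to shrinking that shutdown-time flush) is to push dirty-page flushing forward before a planned shutdown, so the final flush has little left to do:

```sql
-- Encourage InnoDB to flush dirty pages ahead of a planned shutdown,
-- while the server is still serving traffic.
SET GLOBAL innodb_max_dirty_pages_pct = 0;

-- Watch the dirty-page count fall before issuing the actual shutdown.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';
```

Once `Innodb_buffer_pool_pages_dirty` stops falling, the shutdown itself has very little writing left to do.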

Some databases let the operating system handle most of their memory management needs. This has its own challenges, especially if the operating system’s design doesn’t align exactly with the database’s goals. Other databases take matters into their own hands. InnoDB (the de facto transactional MySQL storage engine) falls …

[Read more]
Reacting to small variations in response time

I wrote recently about early detection for MySQL performance problems. If your server is having micro-fluctuations in performance, it’s important to know, because very soon they will turn much worse. What can you do about this? The most important thing is not to guess at what’s happening, but to measure instead. I have seen these problems from DNS, the binary log, failing hardware, the query cache, the table cache, the thread cache, and a variety of InnoDB edge cases.
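
Measuring rather than guessing can start with something as simple as logging response times aggressively; for instance, MySQL 5.1 and later accept a fractional long_query_time for the slow query log:

```sql
SET GLOBAL slow_query_log = ON;     -- dynamic in 5.1+; older versions need a restart
SET GLOBAL long_query_time = 0.05;  -- log anything slower than 50ms
```

With sub-second queries in the log, micro-fluctuations show up as data rather than hunches.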

fast paging in the real world

Some time ago I attended the "Optimisation by Design" course from Open Query¹. In it, Arjen teaches how writing better queries and schemas can make your database access much faster (and more reliable). One such way of optimising things is by adding appropriate query hints or flags. These hints are magic strings that control how a server executes a query or how it returns results.

An example of such a hint is SQL_CALC_FOUND_ROWS. You use it in a select query with a LIMIT clause. It instructs the server to select a limited number of rows, but also to calculate the total number of rows that would have been returned without the LIMIT clause in place. That total number of rows is stored in a session variable, which can be retrieved via SELECT FOUND_ROWS(). That simply reads the variable and clears it on the server; it doesn't actually have to look at any table or index data, so it's very fast.
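
The two-statement pattern described above looks like this (the table and column names are hypothetical):

```sql
-- Fetch page 3 (rows 21-30) and count the full result set in one pass.
SELECT SQL_CALC_FOUND_ROWS id, title
FROM articles
ORDER BY created_at DESC
LIMIT 20, 10;

-- Reads (and clears) the session variable set by the previous query;
-- no table or index data is touched.
SELECT FOUND_ROWS();
```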

[Read more]
MySQL, MyISAM, fallocate, and seekwatcher.

I’ve been meaning to post about this for a while now, but I finally have a good tool to help visualize the problem (seekwatcher).

MyISAM continues to append to the .MYD file as you write to it, which seems pretty easy to manage from a performance standpoint: if you’re writing one file on one disk, it will be 100% contiguous.

But what happens if you’re writing 100 files? Or 1000? Each file becomes fragmented on disk (even on a fresh, otherwise unfragmented disk) because each new write is stacked on top of the previous file’s write.

What needs to happen is for MyISAM to fallocate 5-10MB at a time. That way, for at least the next 5MB, you have a large chunk of contiguous disk to use.

This isn’t just theoretical. Check out the following video. This is on an 11-disk RAID …

[Read more]
Down the rabbit hole

Generally I avoid going down rabbit holes, but today I decided to see how deep a particular testing rabbit hole went. This post is the third in what seems to be a continuing series of programming anecdotes. It’s not particularly MySQL-related, so you can stop reading here unless you grok code stuff.

Before beginning work on issue 720, I ran the mk-table-checksum test suite to make sure it was in working order. No sense writing new tests and code when the old tests and code aren’t reliable. I actually made one seemingly innocuous change to the test suite in preparation for the issue: I changed the --replicate checksum table from MyISAM to InnoDB.

Surprisingly, the test suite proved unstable. Random tests would fail at random times. Some instability was due to new tests for …

[Read more]