I have been trying to analyse a number of new patches we've developed for MySQL to see how they scale. However, I have gotten very strange results which didn't at all match my old results, and most of the changes had a negative impact :( Not so nice.
As part of debugging the issues with sysbench I decided to go back to the original version I used previously (sysbench 0.4.8). Interestingly, even then I saw a difference at 16 and 32 threads, whereas at 1-8 threads and 64+ threads the results were the same as usual.
So I checked my configuration, and it turned out that I had changed the log file size to 200M from 1300M and was also using 8 read and write threads instead of 4. A quick check showed that the parameter affecting the sysbench results was the log file size. So, increasing the log file size from 200M back to 1300M …
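For reference, the change described above boils down to a couple of my.cnf settings. This is a sketch under the assumptions in the text (the split read/write IO thread options assume a server or patch set that supports them; note also that older InnoDB versions require a clean shutdown and removal of the old log files before a new log file size takes effect):

```ini
[mysqld]
# Larger redo logs reduce checkpoint pressure during write-heavy runs;
# 200M was the accidentally shrunk value, 1300M restored the usual results.
innodb_log_file_size    = 1300M
# The text mentions moving from 4 to 8 read and write threads:
innodb_read_io_threads  = 8
innodb_write_io_threads = 8
```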
I wrote a small DTrace script to understand InnoDB IO statistics. The script shows statistics about the different kinds of InnoDB IO requests and how many of them result in actual IO. Sample output is shown below:
# ./inniostat -h
Usage: inniostat [-h] [-d] [-p pid] [interval]
-h : Print this message
-p : MySQL PID
-d : Dump dtrace script being used
# ./inniostat
__physical__ ___Innodb___ ____read____ ______write______
r/s w/s r/s w/s data pre log dblbuf dflush Time
24 121 24 50 24 0 50 0 0 16:00:57
26 130 26 51 26 0 51 0 0 16:00:58
18 134 18 54 18 0 54 0 0 16:00:59
25 129 25 51 25 0 51 0 0 16:01:00
29 116 46 47 17 29 47 0 0 16:01:01
10 140 10 132 10 0 52 0 80 …
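To make the columns concrete, here is a small Python sketch (my own illustration, not part of inniostat) that parses one sample line into named fields, assuming the fixed column order of the header above:

```python
# Field names follow the inniostat header: physical r/s and w/s, InnoDB r/s
# and w/s, then the read breakdown (data, prefetch) and write breakdown
# (log, doublewrite buffer, dirty-page flush), and finally the timestamp.
FIELDS = ["phys_r", "phys_w", "innodb_r", "innodb_w",
          "data", "pre", "log", "dblbuf", "dflush", "time"]

def parse_line(line):
    row = dict(zip(FIELDS, line.split()))
    # Every column except the timestamp is a per-second count.
    for key in FIELDS[:-1]:
        row[key] = int(row[key])
    return row

row = parse_line("29 116 46 47 17 29 47 0 0 16:01:01")
# InnoDB reads are data reads plus read-ahead (prefetch) requests:
assert row["innodb_r"] == row["data"] + row["pre"]
print(row["phys_r"], row["innodb_r"], row["time"])  # prints: 29 46 16:01:01
```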
When I was doing data loading tests, I realized that checksum calculation, despite using only a small percentage of CPU, is usually the actual blocking factor. See, when background writers do the flushing, it gets parallelized, but if an active query is forcing a checkpoint, it all happens in the ‘foreground’ thread, checksum computation included. This is where more Sun-ish wisdom (these people tune the kernel with a debugger all the time) comes in:
gdb -p $(pidof mysqld) -ex "set srv_use_checksums=0" --batch
Puff. Everything becomes much faster. Of course, one could restart the server with --skip-innodb-checksums, but that would interrupt the whole process, etc. Of course, proper people would implement a tunable parameter (5 lines of code, or so), but anyone with Solaris experience knows how to tune stuff with debuggers, hahaha.
Odd though, I …
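The debugger trick above does not survive a restart; the durable equivalent mentioned in the text is the --skip-innodb-checksums option, which as a config fragment would look like this (a sketch, and only sensible for bulk loads on trusted hardware, since the checksums exist to catch page corruption):

```ini
[mysqld]
# Durable equivalent of the gdb hack: disable InnoDB page checksums.
skip-innodb-checksums
```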
One of the cool things about talking about MySQL performance with ZFS is that there is not much tuning to be done. Tuning ZFS is considered evil, but a necessity at times. In this blog I will describe some of the tunings that you can apply to get better performance with ZFS, as well as point out performance bugs which, when fixed, will nullify the need for some of these tunings.
For the impatient, here is the summary. See below for the reasoning behind these recommendations and some gotchas.
- Match the ZFS recordsize with the InnoDB page size (16KB for InnoDB datafiles, and 128KB for InnoDB log files).
- If you have a write-heavy workload, use a separate ZFS Intent Log.
- If your database working set size does not fit in memory, …
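As a sketch, the first two recommendations map onto zfs/zpool commands like these. The pool, dataset, and device names are made up for illustration, and recordsize must be set before the datafiles are created for it to fully apply:

```shell
# Hypothetical pool/dataset names; adjust to your layout.
# 16K recordsize to match the InnoDB page size for datafiles:
zfs create -o recordsize=16K tank/mysql-data
# 128K recordsize for the sequentially written InnoDB log files:
zfs create -o recordsize=128K tank/mysql-log
# Separate ZFS Intent Log on a dedicated, ideally low-latency, device:
zpool add tank log c4t0d0
```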
People keep loving and endorsing --innodb-file-per-table. Then poor new users read about that, get confused, start using --innodb-file-per-table, and tell others to. Others read that, get confused even more, and start using --innodb-file-per-table, then write about it. Then…
Oh well. Here, some endorsements and FUD against one-tablespace-to-unite-them-all:
This same nice property also translates to a not so nice one: data can be greatly fragmented across the tablespace.
Of course, having file-per-table means that only one table will be in each file, so, kind of, it will not be ‘mixed’… inside the file. Now, when data grows organically (not when you restore a few-hundred-gigabyte dump sequentially), all those files grow and start getting fragmented (at ratios depending on how smart the filesystem is, and… how many …
Today I created a patch that builds on the Google v3 patch, to which I added some ideas of my own and some ideas from the Percona patches. The patch is here. Here is a reference to the patch derived from the Google v3 patch. Here is a reference to my original patch (this is likely to contain a bug somewhere, so using it for anything other than benchmarking isn't recommended).
So it will be interesting to see a comparison of all those variants directly against each other on a number of benchmarks.