Covering Indexes: Orders-of-Magnitude Improvements

The talk I gave at the Percona Performance Conference at the MySQL
Users Conference in April 2009 can be found
at http://tokutek.com/images/blog/mysqluc09/kuszmaul-mysqluc-percona-09-slides.pdf.

This talk provides some examples where covering indexes help, and
then describes a performance model that can be used to understand and
predict query performance.  It covers clustering indexes (which are a
kind of “universal” covering index), and describes the asymptotic
performance of Fractal Tree indexing (though, sorry, it doesn't yet
explain how Fractal Tree indexes work). We're working on writing a
white paper to explain how they work, but we’ve simply been too
busy.  The talk concludes with the graph (shown above) that
illustrates iiBench …
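
To make the idea concrete, here is a minimal sketch of a covering index in MySQL (my own illustration, not an example from the slides): the secondary index carries every column the query touches, so the query is answered from the index alone and EXPLAIN reports "Using index".

  CREATE TABLE customers (
    id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    last_name VARCHAR(64) NOT NULL,
    state     CHAR(2) NOT NULL,
    zip       CHAR(5) NOT NULL,
    notes     TEXT,
    KEY idx_state_zip_name (state, zip, last_name)  -- covers the query below
  ) ENGINE=InnoDB;

  -- state, zip, and last_name all live in the secondary index, so InnoDB
  -- never has to fetch the wide table rows to answer this:
  SELECT last_name, zip FROM customers WHERE state = 'CA';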

[Read more]
x-25e, 25% reduction in random writes…

In my previous post I showed some benchmarks with a large drop-off in performance when you fill the x-25e. I wanted to follow up and say this: even if you do everything correctly (i.e., leave 50%+ space free, disable the controller cache, etc.), you may still see a drop in performance if your workload is heavily write-skewed. To show this I ran a 100% random write sysbench fileio test over a 12GB dataset (37.5% full); the tests were run back-to-back over several hours, and here is what we see:

*Note: the scale is a little skewed here (I start at 2,500 reqs).

Each data point represents 2 million IOs, so somewhere after about 6 million IOs we start to drop. At the end it looks like we stabilize around 2,900-3,000 requests per second, an overall drop of about 25%.
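
For reference, a run along these lines can be reproduced with sysbench's fileio test; the sketch below is my guess at the shape of the invocation (thread count and flags are illustrative, not the exact command used):

  # lay down a 12GB set of test files, then hammer them with random writes;
  # --max-requests=2000000 corresponds to one data point in the graph
  sysbench --test=fileio --file-total-size=12G prepare
  sysbench --test=fileio --file-total-size=12G --file-test-mode=rndwr \
           --num-threads=4 --max-requests=2000000 run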

Partitioning Presentation from MySQL Conference

It appears that my presentation isn’t available at the conference site. I’ve added it below.   Sorry for the delay. 

Without some of the context I talked about at the conference, the presentation may not make sense.

Partitioning creates a grain in the table.  Select queries that go with that grain can be quicker, at times much faster, but select queries that go against that grain can be slower, at times much slower. 

An example is a table that is partitioned by date and has an orderId primary key. Queries that select data by date will range from almost as fast as the non-partitioned table (partitioning can add a slight overhead) to much faster. But queries that don't filter by date will have to query all the partitions. A table partitioned 12 times, one partition for each month, results in 12 index partitions. As the indexes are partitioned by date, a query by the …
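
A minimal sketch of that grain (my own illustration, not from the presentation; note that MySQL requires the partitioning column in every unique key, so the primary key here is (orderId, orderDate)):

  CREATE TABLE orders (
    orderId   INT UNSIGNED NOT NULL,
    orderDate DATE NOT NULL,
    amount    DECIMAL(10,2),
    PRIMARY KEY (orderId, orderDate)
  ) ENGINE=InnoDB
  PARTITION BY RANGE (TO_DAYS(orderDate)) (
    PARTITION p200901 VALUES LESS THAN (TO_DAYS('2009-02-01')),
    PARTITION p200902 VALUES LESS THAN (TO_DAYS('2009-03-01')),
    PARTITION p200903 VALUES LESS THAN (TO_DAYS('2009-04-01')),
    PARTITION pmax    VALUES LESS THAN MAXVALUE
  );

  -- With the grain: pruning touches only p200902 (see EXPLAIN PARTITIONS).
  SELECT SUM(amount) FROM orders
   WHERE orderDate >= '2009-02-01' AND orderDate < '2009-03-01';

  -- Against the grain: every partition's index must be probed.
  SELECT * FROM orders WHERE orderId = 42;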

[Read more]
Comment Search


You guys are generating an amazing amount of feedback on your blogs. Matt mentioned in the April Wrap-Up that there were 8.6 million comments! Comments are flying in every second of the day.

And have you ever had one of those blog posts that was good, but the real action was in the comments? The blog post is only half the story; it's the feedback from everyone else that fills in the rest. To make it easier to find the second half of these stories, we've added comment search to WordPress.com search.

Select the comments option on the WordPress.com search page and we'll hunt through the millions of comments that have been added to WordPress.com blogs to find what you are looking for. To …

[Read more]
MySQL 5.1.34 and XtraDB 1.0.3-5

For a couple of weeks now, we've had a MySQL server at work running MySQL 5.1.34 and the Percona XtraDB 1.0.3-5 plug-in. I'm testing an upgrade path for our current MySQL 5.0.xx-based servers.

Aside from some confusion about the initial setup (getting the built-in InnoDB to stay out of the way), things have gone very well. All of our largest and most active tables have been converted to the new Barracuda file format, and I tested compression on the two largest. The first didn't fare so well, but it's a fairly over-indexed table with small rows. The second, however, contains a decent-sized TEXT column (classified posting bodies) and compresses quite nicely. Any change in CPU utilization has been insignificant.
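
For anyone following along, the conversion is roughly as follows (a sketch with a hypothetical table name, assuming the InnoDB plugin / XtraDB settings shown):

  -- Barracuda formats need the plugin's per-table tablespaces:
  SET GLOBAL innodb_file_per_table = 1;
  SET GLOBAL innodb_file_format = 'Barracuda';

  -- Convert a table to the Barracuda DYNAMIC row format:
  ALTER TABLE posting_bodies ROW_FORMAT=DYNAMIC;

  -- Or turn on page compression (here, 8KB compressed pages):
  ALTER TABLE posting_bodies ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;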

I hope to soon …

[Read more]
Customizing db_STRESS

One of our colleagues, Dimitri, at the Paris Sun Solution Center has developed a really neat and useful tool called dim_STAT. In short, it's a tool for both high-level and detailed monitoring and performance analysis of Solaris and Linux systems.

Data is collected and saved in a MySQL database, and it provides a very functional web-based user interface. It allows real-time or offline monitoring, multi-host collection, etc.

What is really interesting about dim_STAT is that, when I'm benchmarking or trying to find a performance bottleneck, I can collect all the data I need and come back later for analysis.

Recently, Dimitri added a new tool, db_STRESS, which puts load on a database system and reports a high-level metric (TPS: transactions per second), and therefore lets us compare how different systems perform.
The point of this post is …

[Read more]
AWS Experience Part 3: Trying Another Instance

Hi all,

Hmmmm...

I switched on my machine this morning thinking a new day would bring new results. Nope. Same old results. As I mentioned in an earlier blog entry, I created a server instance using Fedora with LAMP, complete with MySQL. Hmmm... sound good? Well, it comes with MySQL 4.1. Normally an update would be a straightforward process. Since Fedora 8 doesn't come with an apt-get command, I chose to go the yum route. I tried yum update mysql-server. No dice. Problems. I spent another 30 minutes or so trying to correct the problem, but to no avail. So I decided to create a whole new server instance with Fedora 8 and no MySQL. I manually installed MySQL on the machine by doing the following:

  • I downloaded the MySQL server, client, and headers and libraries from MySQL.com.
        wget …
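
Roughly, the manual route looks like this (a sketch only; the RPM file names, versions, and URLs below are illustrative placeholders, not the exact files used):

  # fetch the 5.1 community RPMs (server, client, headers/libraries)
  wget http://dev.mysql.com/get/Downloads/MySQL-5.1/MySQL-server-community-5.1.34-0.rhel5.i386.rpm
  wget http://dev.mysql.com/get/Downloads/MySQL-5.1/MySQL-client-community-5.1.34-0.rhel5.i386.rpm
  wget http://dev.mysql.com/get/Downloads/MySQL-5.1/MySQL-devel-community-5.1.34-0.rhel5.i386.rpm

  # install them together so rpm can resolve the inter-package dependencies
  rpm -ivh MySQL-server-community-5.1.34-0.rhel5.i386.rpm \
           MySQL-client-community-5.1.34-0.rhel5.i386.rpm \
           MySQL-devel-community-5.1.34-0.rhel5.i386.rpm

  # start the server and lock down the default accounts
  service mysql start
  mysql_secure_installation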
[Read more]
Why MySQL’s binlog-do-db option is dangerous

I see a lot of people filtering replication with binlog-do-db, binlog-ignore-db, replicate-do-db, and replicate-ignore-db. Although there are uses for these, they are dangerous and, in my opinion, overused. For many cases there's a safer alternative.

The danger is simple: they don't work the way you think they do. Consider the following scenario: you set binlog-ignore-db to "garbage" so data in the garbage database (which doesn't exist on the slave) isn't replicated. (I'll come back to this in a second, so if you already see the problem, don't rush to the comment form.)

Now you do the following:

  $ mysql
  mysql> delete from garbage.junk;
  mysql> use garbage;
  mysql> update production.users set disabled = 1 where user …
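
Why that last statement gets lost (my own gloss, not part of the quoted post): with statement-based replication, binlog-ignore-db is matched against the default database set by USE, not against the database actually named in the statement, so the UPDATE against production.users is filtered out along with the garbage traffic and never reaches the slave. A table-pattern filter on the slave side is one safer alternative, for example:

  # my.cnf on the slave -- a sketch; adjust the pattern to your schema
  replicate-wild-ignore-table = garbage.%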
[Read more]