Displaying posts with tag: myisam
Why you should ignore MySQL’s key cache hit ratio

I have not caused a fist fight in a while, so it's time to take off the gloves. I claim that somewhere around 99% of the advice about tuning MySQL's key cache hit ratio is wrong, even when you hear it from experts. There are two major problems with the key buffer hit ratio, and a host of smaller ones. If you make some assumptions that are very hard to prove, there actually is a very limited use for the statistics from which the ratio is derived (but not the ratio itself, which you should ignore). Read on for the details.

In this article, I will use key buffer, key_buffer, and key cache interchangeably. However, I will be careful about the difference between "rate" and "ratio". In this article, the key cache miss rate is defined as the number of misses per unit of time, with the units of operations per second. The key cache miss ratio is the ratio between reads from the disk and reads from the cache; it is a dimensionless …
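
As a concrete illustration of the counters involved (a minimal sketch; the excerpt above is truncated), both the rate and the ratio are derived from two status variables:

  SHOW GLOBAL STATUS LIKE 'Key_read%';
  -- Key_reads         : key blocks physically read from disk (cache misses)
  -- Key_read_requests : logical requests to read a key block
  -- Sampling Key_reads twice, N seconds apart, gives the miss rate:
  --   (Key_reads_later - Key_reads_earlier) / N   misses per second
  -- The miss ratio would be Key_reads / Key_read_requests, the dimensionless
  -- figure the post argues you should ignore.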

[Read more]
How To Fix Intermittent MySQL Errcode 13 Errors On Windows

The Problem

I've had MySQL on my Windows 7 laptop for a bit (as part of wampserver), mostly for local offline WordPress development.

However, even though MySQL is relatively stable, I've been observing a vast quantity of intermittent MySQL errors, as reported by WordPress in the PHP error log (C:\wamp\logs\php_error.log). Here are some examples:

[05-Jan-2010 09:47:51] WordPress database error Error on delete of
'C:\Windows\TEMP\#sql17e0_1a2_6.MYD' (Errcode: 13) for query SELECT t.*, tt.* 
FROM wp_terms AS t INNER JOIN wp_term_taxonomy AS tt ON tt.term_id = t.term_id 
INNER JOIN wp_term_relationships AS tr ON tr.term_taxonomy_id = 
tt.term_taxonomy_id WHERE tt.taxonomy IN ('category') AND tr.object_id IN (3) 
ORDER BY t.name ASC made by require, require_once, include, get_footer, 
locate_template, load_template, require_once, dynamic_sidebar, 
call_user_func_array, …
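
Errcode 13 is the operating system's "permission denied" error (perror 13 confirms this), and the files involved are MySQL's implicit on-disk temporary tables. A minimal diagnostic sketch, assuming nothing beyond the log above (the my.ini change and directory name are illustrative, not taken from the post):

  -- Where is MySQL writing its temporary tables?
  SHOW GLOBAL VARIABLES LIKE 'tmpdir';
  -- If another process (antivirus, indexing) intermittently locks files in
  -- that directory, one hypothetical workaround is a dedicated directory
  -- excluded from scanning, e.g. in my.ini under [mysqld]:  tmpdir=C:/mysqltmp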
[Read more]
Drizzle FRM replacement: the table proto

Drizzle originally inherited the FRM file from MySQL (which inherited it from UNIREG). The FRM file stores metadata about a table: what columns it has, what type those columns are, what indexes, any default values, comments etc. are all stored in the FRM. In the days of MyISAM, this worked relatively well. The row data was stored in table.MYD, the indexes on top of it in table.MYI, and information about the format of the row in table.FRM. Since MyISAM itself wasn't crash safe, it didn't really matter that creating/deleting the FRM file along with the table wasn't crash safe either.
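
For instance (a minimal illustration, not from the post), creating a MyISAM table named t in database test leaves exactly those three files in the data directory - test/t.frm with the metadata, test/t.MYD with the row data, and test/t.MYI with the indexes:

  CREATE TABLE t (
    id INT NOT NULL,
    PRIMARY KEY (id)
  ) ENGINE=MyISAM;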

As more sophisticated engines were introduced (e.g. InnoDB) that had their own data dictionary, there started to be more of a problem. There were now two places storing information about a table: the FRM file and the data dictionary specific to the engine. Even if the data dictionary of the storage engine was crash safe, the FRM file was not plugged into that, so you …

[Read more]
How many partitions can you have?

I had an interesting case recently. A customer with a large MySQL data warehouse had a table into which data was merged with INSERT ON DUPLICATE KEY UPDATE statements. The performance was extremely slow. It turned out this was caused by the hundreds of daily partitions created for this table. What is most interesting (and surprising) is that not every statement is affected equally, as you can see from the benchmarks below.

I created the following test table:

PLAIN TEXT SQL:

  CREATE TABLE `p10` (
      `id` int(10) UNSIGNED NOT NULL,
      `c`  int(10) UNSIGNED NOT NULL,
      PRIMARY KEY (`id`),
      KEY(c)
  ) ENGINE=InnoDB
  PARTITION BY RANGE(id) (
  …
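
For context, a hypothetical example (not shown in the truncated excerpt) of the kind of merge statement that was slow against such a partitioned table:

  INSERT INTO p10 (id, c) VALUES (1, 10)
    ON DUPLICATE KEY UPDATE c = VALUES(c);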
[Read more]
More on table_cache

In my previous post I looked into how a large table_cache can actually decrease performance. The "miss" path gets more expensive very quickly as the table cache grows, so if you're going to have a high miss ratio anyway, you're better off with a small table cache.

What I have not checked, though, is how the table_cache (or table_open_cache in newer versions) size affects the hit path.

I started with the same test as last time - I created 100000 tables and read all of them once to make sure the table cache entries were populated. Then I tried repeatedly reading 1000 empty tables with a table_cache of 20000 and of 2000. With a table cache of 2000 I got about 16400 selects/sec; with a table cache of 20000, about 13500 selects/sec. So there is some slowdown in the hit path as well, though it is not as large as with the miss …
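
A minimal sketch of the counters involved (standard MySQL variable names; the benchmark setup itself is the post's and is not reproduced here):

  SHOW GLOBAL VARIABLES LIKE 'table%cache';
  -- Opened_tables grows every time a table must be opened because it was not
  -- in the cache (the "miss" path); Open_tables is how many are cached now.
  SHOW GLOBAL STATUS WHERE Variable_name IN ('Open_tables', 'Opened_tables');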

[Read more]
Redis, Memcached, Tokyo Tyrant and MySQL comparison (rectification: skip-name-resolve)

My previous post Redis, Memcache, Tokyo Tyrant, MySQL comparison had a flaw, as pointed out by this comment: MySQL was spending a huge amount of time doing reverse DNS lookups.

I turned on the skip-name-resolve parameter in my.cnf and MySQL's throughput grew considerably, more than doubling.
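
A minimal sketch of the change (in recent MySQL versions the setting is also visible as a server variable; note that with reverse lookups disabled, grants must use IP addresses or localhost rather than host names):

  -- my.cnf, under [mysqld]:
  --   skip-name-resolve
  SHOW GLOBAL VARIABLES LIKE 'skip_name_resolve';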

[Read more]

MySQL > YourSQL

Since I started doing the occasional consulting job for Open Query, I've seen a lot of MySQL servers that have been installed once and then forgotten about. This gave me the idea to do a short presentation about some basic MySQL server configuration. The first go was at DrupalCampMelbourne and I recently tried (and failed) to cram it into a three minute lightning talk slot at the LUV September meeting.

The title of the talk is (now) MySQL > YourSQL. I chose this not because I think that MySQL is better than the $other_database you use or because I may or may not run a newer version of MySQL on better hardware, but because I use InnoDB and …

[Read more]
Quick comparison of MyISAM, Infobright, and MonetDB

Recently I was doing a little work for a client who has MyISAM tables with many columns (the same one Peter wrote about recently). The client's performance is suffering in part because of the number of columns, which is over 200. The queries are generally pretty simple (sums of columns), but they're ad-hoc (can access any columns) and it seems tailor-made for a column-oriented database.
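
A hypothetical example (table and column names assumed, not from the post) of the ad-hoc aggregate style described - simple sums over an arbitrary subset of the 200+ columns - which is exactly the access pattern column stores are built for:

  SELECT SUM(col17), SUM(col42), SUM(col113)
  FROM wide_table
  WHERE col3 = 1;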

I decided it was time to actually give Infobright a try. They have an open-source community edition, which is crippled but not enough to matter for this test. The "Knowledge Grid" architecture seems ideal for the types of queries the client runs. But hey, why not also try MonetDB, another open-source column-oriented database I've been …

[Read more]
Backup MySQL in a Second with ZFS

MySQL backup soon becomes an important matter when the database is used in production. The pain point comes from the fact that while it is being backed up, the database is not available to respond to client requests. With mysqldump - the standard tool for performing MySQL backups - and a large database, the operation can take many tens of minutes, if not hours. If I am running my business online, this is simply not acceptable.

The classical approach to work around this problem is to take advantage of MySQL replication. I set up a master/slave configuration where the slave acts as a copy of the master. Then, when needed, I run mysqldump on the slave without any service interruption on the master.

But ZFS snapshots bring a new, straightforward approach that avoids the pain and the complexity of master/slave replication.

Snapshots are a key feature of ZFS that allows me to save a copy of …
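
A minimal sketch of the snapshot approach, assuming the MySQL data directory lives on a ZFS filesystem named tank/mysql (the filesystem name and snapshot label are assumptions, and the zfs command runs from a shell, not from MySQL):

  -- Briefly quiesce writes so the snapshot is consistent:
  FLUSH TABLES WITH READ LOCK;
  -- From a shell, while the lock is held:
  --   zfs snapshot tank/mysql@backup-20100105
  UNLOCK TABLES;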

[Read more]
How does the number of columns affect performance?

It is pretty well understood that tables with long rows tend to be slower than tables with short rows. I was interested to check whether row length is the only thing that matters, or whether the number of columns we have to work with also plays an important role. I was interested in peak row processing speed, so I looked at a full table scan in the case where the data fits completely in the OS cache. I created three tables - the first containing a single tinyint column, which is almost the shortest type possible (CHAR(0) could take less space); a table with one tinyint column and a char(99) column; and a table with 100 tinyint columns. The latter two tables have the same row length but differ in the number of columns by a factor of 50. Finally, I created a fourth table which also has 100 columns, but one of them is a VARCHAR, which causes the row format to be dynamic.

More specifically:

PLAIN TEXT SQL:

  CREATE TABLE `t1` (
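
Hypothetical sketches (column names assumed; only t1 appears in the truncated excerpt) of the other tables described above - t2 and t3 share roughly the same row length but differ 50x in column count:

  CREATE TABLE `t2` (
    `c1` tinyint NOT NULL,
    `c2` char(99) NOT NULL
  ) ENGINE=MyISAM;
  -- t3 would repeat the tinyint column 100 times: `c1` ... `c100`
  CREATE TABLE `t3` (
    `c1` tinyint NOT NULL /* ..., `c100` tinyint NOT NULL */
  ) ENGINE=MyISAM;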
[Read more]