If you are going to load a lot of records into Cluster, don't
forget to set max_rows!
My colleague Yves at BigDBAhead has already blogged
about this, but I ran into the same problem myself recently.
I tried to populate 100M records on a four-node cluster, and the
data nodes went down with the following error message in the
error logs:
"2304 Array index out of range"
So the error message is crap; in my opinion, a proper error
message should be propagated up to the mysql server. There is a
bug report on this.
Simplified, what the error message means is that you have run out
of "index slots" in the Hash Table storing the hashes of the
Primary Keys. This is because each table is divided into a number
of partitions, and each partition …
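As a hedged sketch of the fix (the table and column names here are hypothetical), setting MAX_ROWS in the table definition gives NDB a hint to size its hash index and partitioning for the expected row count:

```sql
-- Hypothetical table; MAX_ROWS is a sizing hint for NDB, not a hard cap.
CREATE TABLE lots_of_rows (
  id      BIGINT UNSIGNED NOT NULL PRIMARY KEY,
  payload VARCHAR(255)
) ENGINE=NDBCLUSTER
  MAX_ROWS=200000000;  -- plan for ~200M rows, comfortably above a 100M load

-- An existing table can be given the same hint:
ALTER TABLE lots_of_rows MAX_ROWS=200000000;
```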
In MySQL 5.1.33 there is a fix for an apparently innocuous
bug.
Bug #36540 CREATE EVENT and ALTER EVENT
statements fail with large server_id.
This is a usability bug that makes a DBA's life unnecessarily
hard. The reason for having a large server_id is that a DBA
might want to use the IP address as the server ID, to make sure
that the IDs are unique and to have an easy way of identifying
the server through its IP.
All is well until you mix the server_id assignment with event
creation:
select version();
+-----------+
| version() |
+-----------+
| 5.1.32    |
+-----------+
1 row in set (0.00 sec)
set global server_id =inet_aton('192.168.2.55');
Query OK, 0 rows affected (0.00 sec)
select @@server_id;
+-------------+
| @@server_id |
+-------------+
| …
[Read more]
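For reference, the failure can be reproduced along these lines on a pre-5.1.33 server (the event and table here are hypothetical):

```sql
SET GLOBAL server_id = INET_ATON('192.168.2.55');  -- 3232236087, above 2^31

-- On affected versions this statement fails because of Bug #36540;
-- on 5.1.33 and later it succeeds:
CREATE EVENT purge_old_rows
  ON SCHEDULE EVERY 1 DAY
  DO DELETE FROM t WHERE created < NOW() - INTERVAL 30 DAY;
```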
It’s a fairly simple rule, and something that should be obeyed for your health and sanity.
There are a couple of bugs you could run into when quoting large numbers. First of all, Bug #34384, which concerns quoting large INTs in the WHERE condition of an UPDATE or DELETE. It seems that this causes a table scan, which is going to be slooooow on big tables.
Similarly, there is the more recently discovered Bug #43319. You can run into this if you quote large INTs in the IN clause of a SELECT … WHERE. For example:
mysql> EXPLAIN SELECT * FROM a WHERE a IN('9999999999999999999999')\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: NULL
         type: NULL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: …
[Read more]
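For comparison, one would expect the unquoted form of the same query to be handled as a numeric literal and not fall into the bug (a sketch, assuming the same table `a` with an indexed column `a`):

```sql
-- Quoted: the string is compared against the INT column and the
-- optimizer falls back to a full scan (Bug #43319):
EXPLAIN SELECT * FROM a WHERE a IN('9999999999999999999999');

-- Unquoted: the literal is treated as a number, so the optimizer can
-- use the index (or detect an impossible range) as expected:
EXPLAIN SELECT * FROM a WHERE a IN(9999999999999999999999);
```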
At a customer yesterday, I confirmed what Jonas
suspected, and what is probably related to bug 42474. Scroll
down and look for the output of SHOW PROCESSLIST.
It seems that TRIGGERs cause table locks to be taken when used
with NDB Cluster tables!! I have created a bug report.
If we do an update on a table that has an update trigger, the
trigger will, upon execution, lock the entire table that is
affected by the trigger.
This was verified by having several threads update random
records in a table. When one update gets to execute, the trigger
blocks the other updates from happening. This was shown using
SHOW FULL PROCESSLIST, which shows a bunch of
statements being in …
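A minimal reproduction along the lines of what we ran might look like this (table, trigger, and column names are hypothetical):

```sql
-- Two NDB tables: one being updated, one written to by the trigger.
CREATE TABLE t1 (
  id  INT NOT NULL PRIMARY KEY,
  val INT
) ENGINE=NDBCLUSTER;

CREATE TABLE t1_log (
  id    INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  t1_id INT
) ENGINE=NDBCLUSTER;

CREATE TRIGGER t1_after_update AFTER UPDATE ON t1
FOR EACH ROW INSERT INTO t1_log (t1_id) VALUES (NEW.id);

-- Run this from several sessions at once; with the bug present,
-- SHOW FULL PROCESSLIST shows all but one session waiting on a
-- table lock on t1:
UPDATE t1 SET val = val + 1 WHERE id = FLOOR(1 + RAND() * 1000);
```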
Recently my attention was brought to this bug, which is a nightmare bug for any consultant.
Working with production systems, we assume reads are reads, and that if we're just reading we can't break anything. OK, maybe we can crash the server with some SELECT query that runs into a bug, but we can't cause data loss.
This case teaches us that things can be different: reads can in fact cause certain writes (updates) internally, which adds risk, as exposed by this bug.
This is why transparency is important: to understand how safe something is, it is not enough to know what it does logically; you also need to know what really happens inside, and thus what can go wrong.
Entry posted by peter | 19 comments
[Read more]
Enough of one-sided stories. Let's see a different angle of MySQL 5.1. First, let me thank my colleague Chris Powers for taking a stand in defense of the management. But saying "everyone does so" is not a good explanation. The truth is much more complex and requires some narrative.
MySQL 5.1 didn't start on the right foot. The effort to produce its features was underestimated, mostly because, at the time when it was designed, the company was still unearthing the architectural bugs that were haunting MySQL 5.0.
MySQL 5.0 was GA in October 2005. One month later, MySQL 5.1 started its alpha stage, while a rain of bugs fell …
[Read more]
With all due respect to Monty (and I mean that: much respect is due), I have some serious issues with his portrayal of the 5.1 release. I hate to make my first entry on Planet MySQL about a controversy, but he encouraged people to blog about their experience with 5.1, so that’s what I’ll do here.
Overall Quality
As a long time user, I am very confident that the quality of 5.1 GA far exceeds that of the initial 5.0 GA release (5.0.15). In fact, I would go further and suggest that the MySQL organization has if anything been too conservative about declaring 5.1 GA.
It’s obviously true that there are still many bugs open. However, no software is bug-free, especially not software with a codebase as large as MySQL’s. So the question is not whether it is bug-free, but are the …
[Read more]
When I filed Bug#39197 replication breaks with large load with InnoDB, flush logs, and slave stop/start, I genuinely thought that it was a serious problem. I was a bit puzzled, to tell the truth, because the scenario that I was using seemed common enough for this bug to be found already.
Anyway, it was verified independently, but there was a catch. The
script on the master was using SET storage_engine=InnoDB
to create the tables necessary for the test. That looked good
enough to me. The script was indeed creating InnoDB tables on the
master. The trouble was that the "SET" command is not replicated.
Thus the …
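The straightforward way to avoid this trap is to not rely on the session variable at all; the engine clause travels with the statement and therefore replicates (a sketch with a hypothetical table):

```sql
-- Fragile: SET is session-local and is not written to the binary log,
-- so the slave creates the table with its own default engine:
SET storage_engine = InnoDB;
CREATE TABLE t (id INT PRIMARY KEY);

-- Robust: the engine is part of the replicated statement itself:
CREATE TABLE t2 (id INT PRIMARY KEY) ENGINE=InnoDB;
```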
What bug makes you recommend upgrading most frequently? For me it is this bug, which makes it quite painful to automate various replication tasks.
It is not the most critical bug by far, but that makes it worse: critical bugs would usually have caused upgrades already, or have been worked around, while things like "sometimes my slave clone script does not work" may hang on for years.
Entry posted by peter | One comment
[Read more]
How would you expect AUTO_INCREMENT to work with MERGE tables? Assuming INSERT_METHOD=LAST is used, I would expect it to work the same as if the insertion happened into the last table... which does not seem to be the case. Alternatively, I would expect AUTO_INCREMENT to be based off the maximum value across all tables, respecting the AUTO_INCREMENT set for the MERGE table itself. Neither of these expectations is actually true:
mysql> CREATE TABLE a1(i int UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY);
Query OK, 0 rows affected (0.01 sec)
mysql> CREATE TABLE a2 LIKE a1;
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO a1 VALUES(2);
Query OK, 1 row affected (0.00 …
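For context, the MERGE table in a test like this would be created along these lines (a sketch; it assumes a1 and a2 are MyISAM, which the MERGE engine requires):

```sql
-- A MERGE table over a1 and a2; INSERT_METHOD=LAST routes inserts to a2.
CREATE TABLE m (i INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY)
  ENGINE=MERGE UNION=(a1, a2) INSERT_METHOD=LAST;

-- The question is which AUTO_INCREMENT value this row receives:
INSERT INTO m VALUES (NULL);
SELECT * FROM a2;
```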