Showing entries 33356 to 33365 of 44917
FULLTEXT redux

In my post from yesterday, I ended up resorting to multiple-column fulltext indexes so that the query would actually use the indexes on the columns it specified. Well, that worked. But it also resulted in HUGE indexes!

Before change:

-rw-rw---- 1 mysql mysql 1177773692 2008-06-05 12:37 items_text.MYD
-rw-rw---- 1 mysql mysql 1136713728 2008-06-05 12:37 items_text.MYI



After adding more indexes:

-rw-rw---- 1 mysql mysql 1156516200 2008-06-04 17:14 items_text.MYD
-rw-rw---- 1 mysql mysql 1978787840 2008-06-04 17:14 items_text.MYI



Furthermore, this made the table much harder to update. Replication kept lagging last night (nagios was complaining loudly).

I've since reverted to the way I had it, not using the index, which is the least bad of my …

[Read more]
response to comment with questions

1) Which operations can I perform during a table reorg?
Everything except DDL and node restarts.
NDB currently allows only one DDL operation at a time, and the reorg is a DDL operation.
NDB currently prevents node restarts while a DDL operation is ongoing.

2) What happens to an ongoing table reorg during
2a) node failure
The reorg will be completed or aborted depending on how far it has progressed
(i.e., whether the commit has started).
2b) cluster failure, and recovery?
The reorg will be completed or aborted depending on how far it has progressed
(i.e., whether the commit has been written).

The reorg is committed after rows have been copied, but before rows have been
deleted/cleaned up.

3) How do my a) SQL b) NDBAPI applications have to be changed to cope with table reorg?

Not at all, but:
- your application can "hint" incorrectly if it does not check the table state …

[Read more]
Statement-based replication is disabled for Falcon

Contrary to what I said earlier, Falcon has decided to deliberately disable statement-based replication using the same capabilities mechanism that InnoDB uses.

The reason is that isolation between concurrent transactions cannot be guaranteed, meaning that two concurrent transactions are not guaranteed to be serializable (the result of a concurrent transaction that has committed can "leak" into an ongoing transaction). Since they are not serializable, they cannot be written to the binary log in an order that produces the same result on the slave as on the master.

However, when using row-based replication they are serializable, because whatever values are written to the tables are also written to the binary log, so if data "leaks" into an ongoing transaction, this is what is written to the binary log as …

[Read more]
in_array is quite slow

So, we had a cron job hanging for hours. No idea why. So, I started debugging. It all came down to a call to in_array(). See, this job is importing data from a huge XML file into MySQL. After it is done, we want to compare the data we just added/updated to the data in the table so we can deactivate any data we did not update. We were using a mod_time field in MySQL in the past. But that proved to be an issue when we wanted to start skipping rows from the XML that were present but unchanged. Doing that saved a lot of MySQL writes and sped up the process.

So, anyhow, we have this huge array of IDs accumulated during the import. An IN clause with 2 million parts would suck, so we pull back all the IDs that exist in the database and stick them into an array. We then compared the two arrays by looping over one array and using in_array() to check if the value was in the …
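The usual fix for this pattern is to replace the linear scan with a hash lookup (in PHP, array_flip() plus isset()). The same idea sketched in Python, with illustrative example data rather than anything from the original script: a set makes each membership test O(1), so comparing the two ID collections is linear instead of quadratic.

```python
# Compare imported IDs against existing IDs using a set (hash lookups)
# instead of in_array()-style linear scans, which go quadratic
# over millions of rows.

imported_ids = [1, 2, 3, 5, 8]            # IDs seen during the import (example data)
existing_ids = [1, 2, 3, 4, 5, 6, 7, 8]   # IDs currently in the table (example data)

# Build the hash set once (O(n))...
imported = set(imported_ids)

# ...then each membership test is O(1) instead of scanning the whole list.
to_deactivate = [i for i in existing_ids if i not in imported]

print(to_deactivate)  # -> [4, 6, 7]: rows in the table we did not touch
```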

[Read more]
Perspective: Addons and Community site

Along with evolving and documenting the interface for writing Workbench plugins, we are thinking about a community site that will make sharing and using plugins within the community extremely easy. Conceptually it will be close to addons.mozilla.org. Users will be able to browse a directory of published addons, read descriptions and comments on them, and quickly install and rate whatever they find useful. To install a selected addon, the user will just have to drag and drop the corresponding link onto Workbench, and the rest will be done by Workbench. Every addon will be supplied with a manifest file describing its version, dependencies on other addons (if any), files to be installed, menu items to be added, etc. "Addon" is defined broadly: it could be a plugin (dynamic library), a set of demo models, documentation, SQL scripts, whatever. Things are currently about midway through development. If you have ideas on how to improve the process of …

[Read more]
Grouping .test files into suites

Please, enlighten me on what being in a separate suite means! I see there is a suite directory in mysql-test. Will the suites be run by default when I use 'mtr' to run tests? Or do I have to add them manually?
=========================================

The number of tests we have for the MySQL Server is constantly growing. There is a need to group them in different ways so we can select what to run and where to run it. We do this by using suites: either the default suite, which we call "main", in mysql-test/t, or one of the subdirectories of mysql-test/suite.

As tests become more advanced, it's also necessary to use different configurations for a particular test or suite. For example, all the replication tests in suite/rpl need to be run with the server started in three different ways (three different configurations) to get full coverage. To avoid that the individual developer or the "one" running tests have to remember different …

[Read more]
How to concatenate strings in a mysqltest testcase

Hi Magnus, is there a way to concatenate strings in a mysqltest?
=================================================================


Yes, you can "easily" create more dynamic strings using let and a while loop. For example like this:


let $c= 254;
let $str= t255;

while ($c)
{
  let $str= t$c,$str;
  dec $c;
}
echo $str;



This will print out t1,t2,t3,...,t255 (a single comma-separated string).


You can then use the $str variable in an eval statement,
e.g. to CREATE a TABLE or VIEW, or in a SELECT:


eval CREATE TABLE t0 (a INT) ENGINE=MERGE UNION($str);
Memo: Binary Logging of MySQL from the viewpoint of storage engine
  • two formats: statement-based and row-based
    • can be mixed
    • 5.1 supports both
  • statement-based logs record UPDATE, INSERT, DELETE queries
  • row-based logs store internal buffers passed to `handler' class
  • storage engines may declare HA_HAS_OWN_BINLOGGING and write to binlog directly
    • however, it becomes impossible to log multitable updates
    • what happens if the storage engine supports transactions?
  • handling of auto_increment
    • when using statement-based logs, lock for auto_increment value should be held until a query completes
    • when using row-based logs, an auto_increment column can be updated and written to the log row by row by directly updating ``uchar record[]''

For myself, since …

[Read more]
replicants

I found myself with some spare time the other day and decided that my current mysql backup strategy is not the best in the world. The mysql server is a virtual machine in a Brisbane datacenter and it's backed up via a script that calls mysqldump on each installed database and dumps the content to (compressed) files. These files then get sucked down via rdiff-backup.

This is fine in principle, but does mean it's possible for me to lose 24 hours worth of data due to an accidental '--; DROP table students.
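That accidental '--; DROP TABLE risk is the classic argument for parameterized queries, where the driver treats user input strictly as data, never as SQL. A minimal sketch using Python's bundled sqlite3 module (MySQL client libraries use the same placeholder idea, typically with %s instead of ?):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

# Hostile input in the style of the classic "DROP TABLE students" joke.
evil = "Robert'); DROP TABLE students;--"

# Parameterized: the value is bound as data, so the DROP never executes.
conn.execute("INSERT INTO students (name) VALUES (?)", (evil,))

# The table still exists and contains the literal string.
row = conn.execute("SELECT name FROM students").fetchone()
print(row[0])
```

Parameterization protects against injection at insert time; it does not, of course, protect against a legitimate-but-wrong statement replicating to the slave, which is why the point-in-time backups still matter.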

A better approach would be for the remote SQL server to replicate to a local one, on which I could run mysqldump more often without affecting web site performance. (Replication would replicate the DROP TABLE statement too.. :-)

With a bit of a confluence of attending three days of OpenQuery mysql training and needing to regenerate all ssl keys, I thought I should …

[Read more]