distributed pushed down joins - more features

update on latest accomplishments:

  1. Added support for filters (compare engine_condition_pushdown) on non-root operations. This comes in two flavors:
    • constant/immediate filters, which are provided after the NdbQuery has been defined; these must not contain filter conditions referencing other NdbQueryOperations,
    • parameterized/linked filters; these filter programs must be provided when building the NdbQuery object.

    The constant filters should be "easy" to integrate with ndbapi/mysqld, but the parameterized/linked ones are harder, as we need to add new features to NdbScanFilter.

    Once this is integrated into mysqld, it will provide better performance for already pushable queries; see page 43 of my UC presentation (ref: filter). A hypothetical example of such a query is sketched after this list.

  2. Added support for NdbQueryOperations referencing non-direct parent …
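
To illustrate at the SQL level what a filter on a non-root operation means, here is a minimal hypothetical pushable join (table and column names are made up, not from the actual work): t1 is the root of the pushed query, t2 is a non-root child operation, and the constant condition on t2 is the kind of filter described in item 1.

  -- hypothetical pushable join; t2.c = 10 is a constant filter on the
  -- non-root operation t2, while a condition like t2.d > t1.e (referencing
  -- another NdbQueryOperation) would need a parameterized/linked filter
  SELECT t1.a, t2.b
  FROM t1
  JOIN t2 ON t2.pk = t1.ref
  WHERE t2.c = 10;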
[Read more]
Debugging problems with row-based replication

MySQL 5.1 introduces row-based binary logging. In fact, the default binary logging format in GA versions of MySQL 5.1 is STATEMENT, not MIXED. The binlog_format variable can still be changed per session, which means it is possible that some of your binary log entries will be written in a row-based fashion instead of as the actual statement which changed the data, even when the global setting on the master is to write binary logs in statement mode. The row-based format does offer advantages, particularly if triggers or stored procedures are used, or if non-deterministic functions like RAND() are used in DML statements.
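
A minimal sketch of checking and changing the format per session (changing it at the session level requires the SUPER privilege):

  -- check the global and session binary logging format
  SHOW GLOBAL VARIABLES LIKE 'binlog_format';
  SHOW SESSION VARIABLES LIKE 'binlog_format';

  -- switch just this session to row-based logging
  SET SESSION binlog_format = 'ROW';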

A statement-based replication slave can get out of sync with the master fairly easily, especially if data is changed on the slave. It is possible for a statement to execute successfully on a slave even if the data is not 100% in sync, so MySQL doesn't know anything is wrong. This isn't the case …

[Read more]
Wishing I could be at ODTUG

Ronald asked me if I could present at ODTUG’s Kaleidoscope conference, which is only a couple hours from me, but I’ll be at the Netways OSDC that week. Matt Yonkovit will be there representing Percona. I wish I could go: I would really like to mingle with more Oracle users and developers, and I think that participation is the key to building relationships between MySQL and Oracle users – two groups of people who are going to be overlapping more in the future.

Using Aspersa to capture diagnostic data

I frequently encounter MySQL servers with intermittent problems that don’t happen when I’m watching the server. Gathering good diagnostic data when the problem happens is a must. Aspersa includes two utilities to make this easier. The first is called ‘stalk’. It would be called ‘watch’, but that’s already the name of a standard Unix utility. It simply watches for a condition to occur and fires off the second utility, which does most of the work.
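
As a purely hypothetical example of the sort of condition stalk could watch for, one might poll a status counter such as Threads_running and trigger the collection utility once it crosses a chosen threshold:

  -- hypothetical trigger condition: fire the collector when this
  -- counter exceeds a threshold you choose (say 50)
  SHOW GLOBAL STATUS LIKE 'Threads_running';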

Chrome Checker

I’ve been back on Chrome pretty much full-time, especially since I figured out some proxy stuff, so the new After the Deadline checker for Google Chrome is a lifesaver. See also: Download Squad.

Open Source Bridge Database Sessions

Open Source Bridge, the “conference for open source citizens,” is right around the corner! The sessions were just announced and it’s going to be packed with quite a variety of really interesting talks. From open cloud computing topics to hardware hacking to language hacks (like HipHop from Facebook), I’m really looking forward to being there (I’m helping organize the event, but hopefully I’ll have time to attend sessions as well).

I wanted to point out a few of the great database talks:

[Read more]
O’Reilly speaks to Kurt von Finck

At the recent O’Reilly MySQL Conference & Expo 2010, the nice folks at O’Reilly Media got to speak to Kurt von Finck, Chief Community and Communications Officer for Monty Program Ab, about how we handle releases, why we are a superset of MySQL and more. Watch the 7-minute video, and do give us some feedback here in the comments.


Book Review : Pentaho 3.2 Data Integration

Dear Kettle fans,

A few weeks ago, when I was stuck in the US after the MySQL User Conference, a new book was published by Packt Publishing.

That all by itself is not too remarkable. However, this time it’s a book about my brainchild, Kettle, and that makes this book very special to me. The full title is Pentaho 3.2 Data Integration: Beginner’s Guide (Amazon, Packt). The title alone explains the purpose of this book: to give the reader a quick start with Pentaho Data Integration (Kettle).

The author María Carina Roldán ( …

[Read more]
How much memory can the InnoDB dictionary take?

The amount of memory InnoDB requires for its data dictionary depends on the number of tables you have, as well as the number of fields and indexes. InnoDB allocates this memory once a table is accessed and keeps it until the server is shut down. In XtraDB we have an option to restrict this memory usage.
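
A minimal sketch of using that option, assuming the XtraDB variable is innodb_dict_size_limit (a size in bytes, 0 meaning unlimited) and that it can be set dynamically in your build; otherwise it would go in my.cnf:

  -- assumed XtraDB option: cap data dictionary memory at roughly 256MB
  SET GLOBAL innodb_dict_size_limit = 268435456;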

So how much memory can it really take? Here are some production stats from a real system:

  mysql> SELECT count(*) FROM INNODB_SYS_TABLES;
  +----------+
  | count(*) |
  +----------+
  |    48246 |
  +----------+
  1 row in set (8.04 sec)

  mysql> SELECT count(*) FROM INNODB_SYS_INDEXES;
  +----------+
  | count(*) |
[Read more]
InfiniDB in the Cloud at Amazon Web Services (EC2)

Let's take a quick look at installing and running InfiniDB on EC2. The short list of commands below creates an m1.xlarge instance, installs InfiniDB, creates a 4-disk RAID set, creates an InfiniDB instance, and connects to the database. A more detailed description follows, showing a bulk load example, joins, and new subqueries.


 ec2-run-instances ami-86db39ef -k gsg-keypair -g calpont2 -t m1....
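
As a rough, hypothetical illustration of the kind of join-plus-subquery statements the detailed description walks through (the table names here are made up, not taken from the actual example):

  -- hypothetical join with a subquery of the sort the walkthrough demonstrates
  SELECT o.order_id, c.name
  FROM orders o
  JOIN customers c ON c.customer_id = o.customer_id
  WHERE o.total > (SELECT AVG(total) FROM orders);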
