Yesterday morning, I had an initial face-to-face meeting with a
prospective client. We met at the Victrola Cafe. He's putting
together a pretty neat Web2.0 custom app. My first task for him
will be writing a little process that monitors a particular
public database and writes changes into S3, to start capturing a
historical timeline. There is a great deal of additional AWS, DB,
and web client programming work I am qualified to do for
him.
Today, I pulled together a software development contractor
contract, rewrote a bit of it to be aware of open source
licensing "stuff", and then emailed it off to him, along with my
rate and time estimate.
The Google
Gears release is just plain nifty.
Jump just a little bit forward in time to Google making both
GMail and Google Calendar available offline. Suddenly you have offline
usage for two of their main products (and frankly this is what
might make me finally consider using their Calendar application,
which would be great for my friends since they could then finally
know when I am in town or not).
For the database world there are very practical applications in
synchronizing data sets to local storage, so that users can
either do data entry locally and store it back later, or run
business intelligence queries against the local copy.
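As a rough illustration, a Gears-style app might keep a local
SQLite table like the following for offline data entry; the
schema and names here are entirely hypothetical:

-- Hypothetical local queue for offline data entry; rows collect here
-- while disconnected and are pushed to the server once back online.
CREATE TABLE pending_orders (
  local_id   INTEGER PRIMARY KEY AUTOINCREMENT,
  customer   TEXT NOT NULL,
  amount     REAL NOT NULL,
  created_at TEXT DEFAULT (datetime('now')),
  synced     INTEGER DEFAULT 0  -- set to 1 after a successful upload
);

-- The sync step reads the backlog and marks rows as uploaded.
SELECT local_id, customer, amount FROM pending_orders WHERE synced = 0;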
We have been limited by the maximum cookie size for a bit too
long; this really changes that.
One of my fears? …
A few weeks ago I noticed that the new version of Yahoo! Widgets/Konfabulator now supports an embedded client database, SQLite. This got the gears in my brain whirring -- what could you do with a desktop widget that sports an embedded database engine? Converting the Approver.com desktop widget into something more functional (maybe with the ability to replicate files from client to server) comes to mind, but there are tons of other things you could do.
The interesting new Google Gears product also rocks SQLite (as a way to facilitate the creation of offline web applications). Not to be outdone, Mike Chambers of Adobe blogged last night that the Apollo …
People are always surprised to find out just how distributed Alfresco is as an organization. Aside from a small hive in London, it's hard to find more than two Alfrescans in the same city. The same is largely true of MySQL and a number of new open source companies (MuleSource comes to mind). At Alfresco in the US, we have people in Austin, Boston, San Francisco, Denver, Salt Lake City, Atlanta, and New York City. Even where we have people in the same cities (there are now four of us in Salt Lake City), we don't have offices and only...
Erik Hoekstra, from Daisycon, has pointed out this problem
related to replication in general, with a specific example on
MySQL Cluster and replication in 5.1.
In the manual for 5.1 there's an entry about scripting the
failover for MySQL Cluster Replication.
In this part of the manual they speak about fetching the needed
variables, like the filename and position, and placing them into
the CHANGE MASTER TO statement.
The example is here:
CHANGE MASTER TO
MASTER_LOG_FILE='@file',
MASTER_LOG_POS=@pos;
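For context, the variable-fetching step the manual describes
looks roughly like the sketch below; the table and column names
are recalled from the 5.1 Cluster replication section and should
be checked against the manual rather than taken as exact:

-- Find the last epoch applied from the failed master, then look up
-- the matching binlog file name and position on the replacement master.
SELECT @latest := MAX(epoch) FROM mysql.ndb_apply_status;

SELECT @file := SUBSTRING_INDEX(File, '/', -1),
       @pos  := Position
  FROM mysql.ndb_binlog_index
 WHERE epoch > @latest
 ORDER BY epoch ASC
 LIMIT 1;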
I'm now trying to do the following:
On a slave I've created two FEDERATED tables, one pointing to the
current master, and one to the stand-in master, should the
current master fail.
Federated table 1, let's say F1, is …
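A minimal sketch of how such a FEDERATED table could be defined
is below; the host name, credentials, and the remote status table
are all placeholders, not the actual setup:

-- F1 points at a (hypothetical) status table on the current master.
-- F2 would use the same definition with the stand-in master's host
-- in the CONNECTION string.
CREATE TABLE F1 (
  File     VARCHAR(255) NOT NULL,
  Position BIGINT UNSIGNED NOT NULL
) ENGINE=FEDERATED
  CONNECTION='mysql://repl_user:secret@current-master:3306/repl/binlog_status';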
Today I published the DGCov tool on the MySQL Forge.
DGCov is a neat tool that I implemented last year for use
internally at MySQL, an old idea of Monty's.
The idea is to help developers to check that a new patch for the
server code has received adequate testing before pushing it to
the main tree. The GCC compiler has the gcov tool, which can
check which lines of the source code were never executed even
once. But suppose you change a few thousand lines across a big
source base like MySQL. Then the gcov output is not all that
useful, since it will report tons of lines as not executed, and
it is difficult to check manually which of those lines were
touched by your patch.
The DGCov tool takes the gcov output and filters it so that it
only shows those lines that were touched by the patch being
checked. This output is immediately applicable to the work done …
Roland Bouman just gave me a call. He’s setting up a MySQL UDF repository, which sounds worthwhile joining. So I’ve decided to move the libmyxql project there. The lib will be renamed to lib_mysqludf_xql to follow the naming convention. You can find the repository at http://www.xcdsql.org/MySQL/UDF/index.html.
He also gave me a few good tips, including a way to get the name
from the column or alias to be used as the tag name. This should
make the lib a bit more like SQL/XML. The downside is that the
API will change again, but in the end it will be much better,
so…
He also got his UDFs working for Windows, which I will implement
as well. So good news for all you Billy lovers out there.
More about this soon….
Looks like one-time hot VOIP company Vonage has got its work cut out for it. While the company was once considered disruptive, now it's got one foot in the grave due to mismanagement, excessive spending and a potentially fatal patent run-in with Verizon. The company has been losing money steadily and its stock (NYSE:VG) cratered some months back, wiping out $2 billion in market cap. The stock is trading in the single digits, now well below its IPO price of $17. In fact the market cap seems to be hovering just over cash-on-hand. Easy come, easy go.
The company recently sacked the CEO and is now looking to …
As of MySQL 5.1 you can have the MySQL slow query log written to
the mysql.slow_log table instead of the file as in previous
versions.
We would rarely use this feature ourselves, as it is incompatible
with our slow query analysis patch and tools.
Fixing this is not trivial while staying 100% compatible with the
standard format, as the TIME type used to store query execution
time and lock time does not store fractions of a second.
Today I got some time to play with the table-based slow query log in production while tuning one of the systems. It is pretty nice to be able to work with the data in SQL, as it is easy to filter all queries which happened within a certain time interval (i.e. after I've made some changes) or filter out queries which you have already looked at using a LIKE clause.
As the default table format for slow_log is CSV with no indexes …
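For illustration, the kind of filtering described above might
look like this; the time window and the LIKE pattern are made up
for the example:

-- Queries logged since a (hypothetical) configuration change, slowest
-- first, skipping a statement pattern that has already been reviewed.
SELECT start_time, query_time, lock_time, rows_examined, sql_text
  FROM mysql.slow_log
 WHERE start_time > '2007-06-07 12:00:00'
   AND sql_text NOT LIKE '%FROM orders%'
 ORDER BY query_time DESC;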
We’ve been playing with partitioning in MySQL 5.1.18 from the MySQL AB community builds and noticed that the daemon will dump core every 5 minutes or so when under load.
Recompiling from source fixes the problem. Anyone else notice stability problems?