Some of you might know that I run a little website in my few
spare hours (actually, I have several sites, but one of them
takes up some 95% of all the time I spend on them). The site is
called PapaBlues, and if you pop by some time and have seen it
before, you will notice that there has been a very major
restructuring. The old site is all gone, and the thing is now
built on the Joomla CMS as the framework, whereas the old site
was a bit of a mess of homebuilt PHP, HTML and SQL, like so
many other sites.
I took the new site live just before Christmas, and as MySQL 5.1
had just been declared GA, I decided to use it for the site. I
have to say that I am very happy with it so far: it is stable
and performant, and it has some useful new features that I will
write more about in a later blog post.
…
A colleague at Sun asked me for tips on how to tune MySQL to get fast bulk loads of CSV files. (The use case: inserting data into a data warehouse during a nightly load window.) Considering that I spend most of my time working with MySQL Cluster, I was amazed at how many tips I could come up with for both MyISAM and InnoDB. So I thought it might be interesting to share them, and also to ask: do you have any more tips to add?
[A Sun partner] has requested a PoC to test MySQL's bulk
loading capabilities from CSV files. They have about 20GB of
compressed CSV files, and they want to see how long it takes to
load them.
They haven't specified which storage engine they intend to use
yet.
Good start: a well-defined PoC. The storage engine question is actually significant here.
- MyISAM can typically be up to twice as fast for bulk loads as InnoDB. (But there may be some tuning that makes …
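To make the tips concrete, here is a minimal sketch of a typical MyISAM load-window session; the table name, file path, and buffer size are hypothetical, and the right values depend on the hardware and the data:

  SET SESSION bulk_insert_buffer_size = 256*1024*1024;  -- hypothetical size
  ALTER TABLE facts DISABLE KEYS;   -- MyISAM: defer non-unique index builds
  LOAD DATA INFILE '/data/facts.csv' INTO TABLE facts
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n';
  ALTER TABLE facts ENABLE KEYS;    -- rebuild the deferred indexes in one pass

For InnoDB the usual equivalents are wrapping the load in a single transaction (SET autocommit = 0; … COMMIT;) and temporarily setting unique_checks and foreign_key_checks to 0 for the session.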
umm…
int Field_timestamp::store(double nr)
{
  int error= 0;
  if (nr < 0 || nr > 99991231235959.0)
  {
    set_datetime_warning(DRIZZLE_ERROR::WARN_LEVEL_WARN,
                         ER_WARN_DATA_OUT_OF_RANGE,
                         nr, DRIZZLE_TIMESTAMP_DATETIME);
    nr= 0;                                  // Avoid overflow on buff
    error= 1;
  }
  error|= Field_timestamp::store((int64_t) rint(nr), false);
  return error;
}
(Likely the same in MySQL as well… I haven't checked, though.) These date and time things scare me.
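For what it's worth, you can poke at that range check from the SQL side; a quick sketch with a hypothetical table name, assuming a non-strict sql_mode so the store produces a warning rather than an error:

  CREATE TABLE ts_demo (ts TIMESTAMP);
  INSERT INTO ts_demo VALUES (99999999999999.9);  -- above the 99991231235959.0 bound
  SHOW WARNINGS;  -- expect an out-of-range warning; the stored value is zeroed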
With a new streamlined interface designed to help improve workflow, this update to FileMaker Pro is a must-have for all Mac users who work extensively with databases.
When you write good SQL that uses indexes properly, there is
one more obstacle that can slow down your app: the MySQL
optimizer. From versions 3.23 to 5.1, the optimizer has been a
problem for me. For MySQL 6.0, Sun/MySQL is putting resources
into improving it.
I wrote a post here detailing how to pick indexes to get the
most out of MySQL.
Here is a post about the MySQL optimizer and
what you can do to speed up your SQL SELECT statements.
What I would like to share with you today is that UPDATE and
DELETE statements can also use the optimizer tricks that SELECT
uses. It's not documented on mysql.com, but it is possible to
do something like
UPDATE [YOUR TABLE] USE …
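For illustration, a sketch of the undocumented trick the post describes, with hypothetical table and index names:

  UPDATE orders USE INDEX (idx_status)
    SET archived = 1
    WHERE status = 'closed';

The index hint goes between the table name and the SET clause, just as it would go after the table name in a SELECT.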
Mondrian is generally very smart in how it chooses
to implement queries. Over the last month or so, I have learned
some lessons about how hard it can be to make Mondrian
smarter.
Mondrian is a ROLAP engine (I prefer to call it 'ROLAP with
caching'), and its evaluation strategy has always been a blend
of in-memory processing, caching, and native SQL execution.
Naturally there is always SQL involved, because Mondrian doesn't
store any of its own data, but the question is how much of the
processing Mondrian pushes down to the DBMS and how much it does
itself, based on data in its cache.
The trend is towards native SQL execution. Data volumes are
growing across the board, and Mondrian is being deployed to
larger enterprises with large data sets (in some cases
displacing more established, and expensive, engines). …
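As a rough illustration of what 'pushing down' means here, a hedged sketch against a hypothetical star schema: instead of fetching fact rows and aggregating them in its own cache, Mondrian can ask the DBMS to do the aggregation in a single query along the lines of:

  SELECT d.state, SUM(f.store_sales) AS sales
  FROM sales_fact f
  JOIN customer d ON d.customer_id = f.customer_id
  GROUP BY d.state;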
This is actually old news, but I never thought to file a bug report (until now) or say anything to anyone about it. If you use mysqldump to dump and restore a MySQL table that has INSERT triggers, you can get different data in your restored database than you had when you dumped. The problem? The tool dumps the triggers before the data, so they get added back to the table before the rows are inserted.
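To make the failure mode concrete, a minimal sketch with a hypothetical table and trigger: if the restore recreates the trigger before it re-inserts the rows, the trigger fires again on every restored row.

  CREATE TABLE t (id INT, created DATETIME);
  CREATE TRIGGER t_bi BEFORE INSERT ON t
    FOR EACH ROW SET NEW.created = NOW();
  -- After dump and restore, every row's created value is the time of the
  -- restore, not the value that was in the table when it was dumped.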
Arun has a nice set of simple steps for
getting MySQL 6.x working with NetBeans 6.5