This week's review arrives as the nation comes to grips with the expanding scope of its worst environmental disaster in living memory, as the extent of the oil spill in the Gulf of Mexico becomes clearer. Despite the dire circumstances, the fact that I was able to stream President Barack Obama's first address to the nation from the Oval Office using the White House app on my iPhone as I walked home was a reminder of new ways government can use technology to share information. When I arrived home, I was able to stream the rest of the speech from WhiteHouse.gov/live, coupled with real-time press reaction on Twitter. And after the speech, I watched a real-time YouTube question and answer session with Press Secretary Robert Gibbs and White House new media director Macon …
[Read more]
The MySQL Cluster daemons (ndbd and ndb_mgmd) don't yet run
as Windows services by themselves (apparently ndb_mgmd can,
but I haven't seen it documented anywhere how to do that).
But there are ways to fix this, using some simple Windows
tools and some registry hacking.
What you need to find is the Windows Resource Kit from some
version of Windows that includes instsrv.exe and srvany.exe.
The tools don't seem too picky about the actual version of
Windows you run: I used the Windows NT 32-bit versions of them
on a 64-bit Windows 7 box, and they work just fine.
These two programs are simple and easy to use:
- instsrv lets you install a service; it's really simple, just run the program and it will show the (few) options.
- srvany lets you run any odd program, one not intended to run as a service, as a service anyway.
Now, Google a …
[Read more]
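To make that concrete, here is roughly how the instsrv/srvany dance goes. All paths here are made-up examples, and the registry layout should be double-checked against the srvany documentation before you rely on it:

```shell
REM Register a service named "ndb_mgmd" whose binary is srvany.exe
REM (paths are illustrative; adjust to where you unpacked the tools).
instsrv ndb_mgmd "C:\Tools\srvany.exe"

REM Tell srvany which real program to launch when the service starts.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\ndb_mgmd\Parameters" ^
    /v Application /t REG_SZ ^
    /d "C:\mysql\bin\ndb_mgmd.exe --config-file=C:\mysql\config.ini --nodaemon"

REM Start it.
net start ndb_mgmd
```

The same pattern should work for ndbd; only the service name and the Application value change.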
As MySQL Cluster is now available, and GA, on
Windows, maybe it's time for some NDB API coding on that
platform, right? The reason might be, as it is in my case,
that Windows is a pretty good GUI desktop platform, and MySQL
Cluster / NDB really needs something like this. Those of you who
have followed and used Cluster for a while, might remember my
ndbtop tool that I created way back, and which is
a MySQL Cluster monitor for Linux using ncurses. This is still
useful I guess, but as far as a nice GUI presentation goes,
ncurses leaves a lot to be desired, to say the least.
So where do we start on Windows then? Well, to be honest, MySQL
Cluster on Windows doesn't currently come with an installer; it's
just a .zip file to unpack. But we are only using NDBAPI and the
NDBMGMAPI, so that it no …
I am writing this blog post with Vim, my favorite editor,
instead of using the online editor offered by Blogger. And I
am uploading this post to my Blogger account using GoogleCL,
a tool that lets you use Google services from the
command line. I am a command line geek, and as soon as I saw the announcement, I installed it on my laptop. The mere fact that you are reading this blog post shows that it works.
GoogleCL is an apparently simple application. If you install it
on a Mac using MacPorts, you realize how many dependencies it has
and how much complexity it hides under the …
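For the curious, posting from the command line with GoogleCL looks roughly like this. The post title and file path are made up, so check the GoogleCL documentation for the exact flags:

```shell
# Post a local file to Blogger as a new entry (title and path are examples).
google blogger post --title "My post title" ~/posts/draft.html

# List existing posts, to confirm it worked.
google blogger list
```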
The Maatkit toolkit for MySQL has a lot of functionality that’s common across the tools. It’s not a good idea to document this in each tool’s man page, of course. So there is an overall maatkit man page. It explains concepts such as configuration file syntax. This and all the other Maatkit man pages are online.
Related posts:
- How PostgreSQL protects against partial page writes and data corruption
- Writing a book about Maatkit
- …
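As a concrete illustration of that configuration-file syntax: if I recall the maatkit man page correctly, options go one per line, written as they would on the command line but without the leading dashes. A hypothetical ~/.maatkit.conf might look like this (names and values are invented; consult the man page for the exact rules):

```
# Hypothetical example only -- see the maatkit man page for the real syntax.
user=maatkit
password=secret
host=db1.example.com
```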
Users of the latest HeidiSQL build will find a new option
when right-clicking a data grid: "Copy selected rows as LaTeX
table". The same applies to "Export grid data ...", which is
capable of storing rows in LaTeX format to a file.
Thanks to brampton for the patch!
Now there are five different text formats supported in grid
exports: CSV, HTML, XML, SQL and LaTeX. Perhaps you know of
some more reasonable file formats to support?
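For readers who haven't met LaTeX tables before, the exported output is presumably something along these lines. The exact markup HeidiSQL emits may differ; this is just what a typical LaTeX table body looks like, with invented column names and data:

```latex
\begin{tabular}{|l|l|}
\hline
id & name \\
\hline
1 & Alice \\
2 & Bob \\
\hline
\end{tabular}
```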
Continuing the theme from previous posts, I’d like to examine
another case where we can eliminate all disk seeks from a MySQL
operation and thereby get a two-orders-of-magnitude speedup. The
general outline of these posts is:
- B-trees do insertion disk seeks. While they’re at it, they piggyback some other work on those seeks. This piggyback work requires disk seeks regardless.
- TokuDB’s Fractal Tree indexes don’t do insertion disk seeks. If we also get rid of the piggyback work, we end up with no disk seeks, and a two-orders-of-magnitude improvement.
So it’s all about finding out which piggyback work is important (important enough to pay a huge performance penalty for), and which isn’t.
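The insertion-seek argument above can be caricatured in a few lines of Python. This is emphatically not TokuDB code, just a toy model of the claim: a B-tree insert must first read (seek to) the leaf it modifies, while a write-optimized index buffers inserts and defers leaf I/O, so per-insert seeks drop to essentially zero:

```python
# Toy model of insertion disk seeks (illustrative only, not TokuDB code).

def btree_insert_seeks(n_inserts: int, cache_hit_rate: float = 0.0) -> int:
    """Each B-tree insert seeks to its target leaf unless the leaf is cached."""
    return round(n_inserts * (1.0 - cache_hit_rate))

def buffered_insert_seeks(n_inserts: int) -> int:
    """Inserts land in in-memory buffers that are flushed with sequential I/O,
    so no per-insert random seek is needed."""
    return 0

# On a table far larger than RAM (cold cache), the gap is the whole story:
print(btree_insert_seeks(1_000_000))      # 1000000
print(buffered_insert_seeks(1_000_000))   # 0
```

The remaining question, as the post says, is which of the piggybacked work (uniqueness checks and the like) you are willing to keep paying seeks for.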
This blog post is about one of the most …
[Read more]
I found I never published this post; it had been sitting in my drafts for a few months. It was written on 13th February, 2010. I’m publishing it without any changes.
I learn therefore I am!
I’ve just written a few bits about learning a new technology, and after skimming through my Google Reader I noticed a great post by Chen Shapira, Deliberate Practice. That reminded me of another aspect of learning that I didn’t mention: learning is a continuous process.
There are two aspects…
- No matter how good I am and how much I know, my knowledge and expertise become outdated relatively quickly these days unless I keep up with the new stuff. Unfortunately, there is so much new …
Something that is great about PHP is that you can write code
that generates more PHP code to be used later. Now, I am not
saying this is a best practice. I am sure it violates some rule
in some book somewhere. But sometimes you need to be a rule
breaker.
A simple example is taking a database of configuration
information and dumping it to an array. We do this for each
publication we operate. We have a publication table. It contains
the name, base URL and other stuff that is specific to that
publication. But, why query the database for something that only
changes once in a blue moon? We could cache it, but that would
still require an on-demand database hit. The easy solution is to
just dump the data to a PHP array and put it on disk.
<?php
$sql = "select * from publications";
$res = $mysqli->query($sql);
while ($row = $res->fetch_assoc()) {
    $pubs[$row["publication_id"]] = …
[Read more]
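The excerpt cuts off mid-loop, so here is my own sketch of how the technique could finish, with a made-up cache path. Once the $pubs array is built, PHP's var_export() turns it into PHP source that later requests can simply include(), with no database hit at all:

```php
<?php
// Sketch only: persist the $pubs array as generated PHP code.
// The cache path is invented for illustration.
$code = "<?php\n\$pubs = " . var_export($pubs, true) . ";\n";
file_put_contents("/var/cache/app/pubs.php", $code);

// Later requests skip the database entirely:
include "/var/cache/app/pubs.php";   // defines $pubs again
```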