Welcome to the 72nd edition of Log Buffer, the weekly review of database
blogs.
Oracle OpenWorld (OOW) is over, and Lucas Jellema of the AMIS Technology
blog notes the OOW
Content Catalog has been updated with most of the presentations
available for download.
On his way home from OOW, Chris Muir of the appropriately titled
One Size Doesn’t Fit All blog notes how OOW compares with the
Australian Oracle User Group Conference: the AUSOUG Perth conference
draws roughly 99% fewer attendees, from 45k down to 350. …
Cesar Cerrudo of Argeniss Information Security has put out a new whitepaper (.pdf format), Data0: Next generation malware for stealing databases, describing how malware could be crafted to steal information out of databases. For the most part it stays at a high level, though Cesar does give a few example queries (for SQL Server), the appropriate API calls to perform certain operations, and so on, which delve a bit further into the technical side; even these are fairly straightforward. To demonstrate what he talks about in the whitepaper, he built a simple proof of concept (PoC), and based on what's in the whitepaper (and what is generally accepted as possible), nothing seemed outlandish or hard to do. Just for those worried about that PoC being …
Marten Mickos once (mildly) complained about the Open Source Business Conference, suggesting that it was good for vendors but needed more customers. I heard the same thing from Red Hat and other would-be sponsors. Back then, of course, the market wasn't buying as much open source as it was selling.
My, what a difference three years makes. Last year we had attendees from MIT, Christian Science Monitor, AllianceBernstein, E*Trade, H&R Block, Sony, Boise Cascade, and many others. This year that IT contingent keeps swelling.
I'm starting to get really excited about the upcoming OSBC. The website doesn't yet show it, but we're quickly pulling together the best assemblage of open-source firepower on the planet. It turns out that there are a lot of people qualified to speak on the topic of "Putting Open Source to Work," OSBC 2008's theme.
Here are a few of the …
Recently I had a case with a web server farm where a random node went down every few minutes. I don't mean that any of them rebooted, except once or twice; rather, they were slowing down so much that they practically stopped serving any requests and were being pulled out of the LVS cluster. The traffic was no different than usual, all the other elements of the system worked perfectly fine (e.g. databases, storage), and no one had started a backup in the middle of the day, as sometimes happens... so what was happening?
First I am going to describe the setup a little bit. As I already mentioned, it was about web servers. Each of them was running Lighttpd, which handled the requests coming in from the internet. However, it was configured to serve only static content, such as images. Requests for PHP files were passed down via the proxy module to Apache listening on another TCP port.
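To make that setup concrete, a minimal sketch of such a static/dynamic split in lighttpd might look like the following (the port number and URL pattern are assumptions for illustration, not the author's actual configuration):

# Hypothetical lighttpd.conf excerpt: serve static files directly,
# hand anything ending in .php to an Apache instance on another port.
server.modules += ( "mod_proxy" )
$HTTP["url"] =~ "\.php$" {
    proxy.server = ( "" => ( ( "host" => "127.0.0.1", "port" => 8080 ) ) )
}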
And so I started investigating the problem. As it turned out …
Just a short blog entry about a funny error message I got while trying to activate a physical standby database:

SQL> alter database recover managed standby database finish skip standby logfile;
alter database recover managed standby database finish skip standby logfile
*
ERROR at line 1:
ORA-00283: recovery session canceled due to errors
ORA-01110: data file 1: '/oradata/stage/datafile/system_01.dbf'
ORA-01122: database file 1 [...]
Once again I have a well-neglected (though documented) feature of MySQL. As we all often need and use locks in MySQL, we tend to forget (or not bother about) MySQL internals and how they can cause trouble. For example, try something like LOCK TABLE ... WRITE on an InnoDB table in a transaction and watch the same transaction get timed out while waiting for a lock on one of the rows (ref: [Bug 5998]). All these problems occur when there is a difference in the semantics of a statement at the MySQL and engine levels. But recently we figured out a good technique for keeping the logic with ourselves and not relying on MySQL too much, though even this technique is not foolproof in all cases.
The secret is: use MySQL's GET_LOCK function. GET_LOCK(str, timeout) tries to get an exclusive lock with the name str, waiting up to timeout seconds. The return values are tri-state, …
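Filling in the tri-state he is describing: GET_LOCK returns 1 if the lock was obtained, 0 if the attempt timed out, and NULL if an error occurred; RELEASE_LOCK hands it back. A minimal sketch of the pattern (the lock name is made up for illustration):

-- Acquire an exclusive advisory lock named 'my_critical_section',
-- waiting up to 10 seconds for it.
SELECT GET_LOCK('my_critical_section', 10);
-- 1 = acquired, 0 = timed out, NULL = an error occurred

-- ... run the statements that must not execute concurrently ...

-- Release the lock when done (it is also released automatically
-- if the connection terminates).
SELECT RELEASE_LOCK('my_critical_section');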
I had many problems with my WordPress hosting; I must admit that most of them come from the fact that I don’t want to pay… Anyway, Ronald Bradford, a colleague at MySQL, offered to host my blog on a server he manages, so I switched. Now I can edit my style sheet and upload files, which should help a lot.
Thanks a lot Ronald!
Yves
My RSS reader just spit out an article on Google creating its own
switches:
http://www.nyquistcapital.com/2007/11/16/googles-secret-10gbe-switch/
I've been hearing rumors about this for a while, and I am left
completely unsurprised by it. From hacking on the Linksys
WRT54G for the last couple of years, I have realized that I
wanted to do something similar. While I have a nice HP switch
that I am using, it pales feature-wise in comparison to the
DD-WRT units that I use. I would like to replace the HP switch
with a new switch running DD-WRT, but I do not see that
happening anytime soon. Trying to get 48 ports into a single
piece of commodity hardware just does not seem viable. No
one builds the hardware, and I am not interested in doing this
with multiple computers. To make it really interesting for me, I …
Just in time for the holidays, Solid has released solidDB for MySQL 5.1 Beta. This release implements the solidDB storage engine as a dynamically loadable plugin. Check it out at http://dev.soliddb.com/download
Happy Holiday Hacking!
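As background, MySQL 5.1's plugin interface is what makes this possible: a storage engine can be loaded into a running server without recompiling. A generic sketch of what that looks like (the plugin and library names below are invented for illustration and are not taken from Solid's release):

-- Hypothetical example of loading a storage engine plugin in MySQL 5.1;
-- the real solidDB plugin name and shared-library file may differ.
INSTALL PLUGIN soliddb SONAME 'libsoliddb_engine.so';
SHOW ENGINES;  -- the new engine should now appear in the list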
I started this as a response to Keith Murphy’s post at http://www.paragon-cs.com/wordpress/?p=54, but it got long, so it deserves its own post. The basic context is figuring out how not to cause duplicate information if a large INSERT statement fails before finishing.
Firstly, the surefire way to make sure there are no duplicates if you have a unique (or primary) key is to use INSERT IGNORE INTO.
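As a quick illustration of that first point, a minimal sketch (table and column names invented for the example): re-running this statement after a partial failure simply skips the rows that already made it in, because duplicate-key errors are downgraded to warnings:

-- Assumes target_table has a UNIQUE or PRIMARY KEY that the
-- incoming rows can collide on; duplicates are silently skipped.
INSERT IGNORE INTO target_table (id, val)
SELECT id, val FROM staging_table;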
Secondly, I just experimented with adding an index to an InnoDB table that had 1 million rows, and here’s what I got (please note, this is one experience only, the plural of “anecdote” is *not* “data”; also I did this in this particular order, so there may have been caching taking place):
Way #1:
- ALTER the table to add the new index (sketched after this list).
This was the slowest method, taking over 13 minutes.
Way #2:
- CREATE a new table with …
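For reference, Way #1 is just a plain ALTER TABLE; a minimal sketch, with table and column names invented for illustration:

-- In InnoDB of this era, adding an index rebuilds the table,
-- which is likely why it took over 13 minutes on 1 million rows.
ALTER TABLE big_table ADD INDEX idx_new_col (new_col);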