The huge power outage at 365, affecting such sites as craigslist.org and
Yelp, brings to
mind some important thoughts about redundancy and infrastructure.
Of the many sites at 365, including both new, interesting
startups and more mature sites, how many survived the power
outage well? More importantly, did they lose power on their
databases, and then did they lose any data?
It's easy to believe in your provider when they assure you of
uptime, redundant power, excellent cooling, and whatever else
they promise to get your business. But you really shouldn't, and
this is an example of why. You must have multiple sites,
preferably geographically diverse ones (nothing hurts like an
earthquake or hurricane taking out your main data center and your
redundant data center at the same time).
MySQL, sadly, is not a durable database when …
Mike Olson pointed me to this excellent Wired article on the disappearance and search for noted database researcher Jim Gray. Jim is apparently the sort of developer that every company on the planet wanted to hire. At this point, no one wants him more than his family. Yet he's ...
One of the most popular keynotes of the MySQL Conference & Expo 2007 was called "The Clash of the DB Egos". It was a fight among seven database luminaries, all playing an important role either within MySQL AB or as providers of storage engines that work closely with MySQL. This article attempts to give a picture of what the fight was about by introducing the egos and the questions the referee posed to them.
After several months I have again put a little work into the MySQL Index Analyzer, which I first published back in August of 2006.
I added a feature that finds duplicate columns inside an index, caused by InnoDB internally appending the primary key columns to each secondary index.
To get the code and read more about the new feature, including an example, go to the MySQL Index Analyzer Blog.
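The check itself is simple in concept: since InnoDB silently appends the primary-key columns to every secondary index, declaring a PK column in a secondary index explicitly is redundant. A minimal sketch of that idea (this is illustrative only, not the analyzer's actual code; the function and data structures are hypothetical):

```python
def find_duplicate_columns(primary_key, secondary_indexes):
    """Return {index_name: [PK columns declared explicitly]} for each
    secondary index that redundantly lists a primary-key column.

    InnoDB appends the PK columns to every secondary index anyway, so an
    explicit occurrence of a PK column in the index definition is redundant.
    """
    duplicates = {}
    for name, columns in secondary_indexes.items():
        redundant = [c for c in columns if c in primary_key]
        if redundant:
            duplicates[name] = redundant
    return duplicates


# Illustrative table: PRIMARY KEY (id), two secondary indexes.
pk = ["id"]
indexes = {
    "idx_name": ["name"],           # fine: the implicit PK suffix covers id
    "idx_name_id": ["name", "id"],  # redundant: id would be appended anyway
}
print(find_duplicate_columns(pk, indexes))  # {'idx_name_id': ['id']}
```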
I came up with the following trick in response to a question in
the #mysql channel on Freenode. A user needed to create a unique
identifier for multiple otherwise duplicate entries. Yes, that was
bad schema design to begin with, but it was a fun challenge to
see whether it could be resolved without scripting. And it can; it's
based on a known trick for numbering output rows. What's new is
restarting the counter for each group (name).
CREATE TABLE number (name CHAR(10), val INT DEFAULT 0);
INSERT INTO number (name)
VALUES ('foo'),('bar'),('foo'),('foo'),('bar');
SET @lastval=0, @lastuser='';
UPDATE number
SET val=(@lastval:=IF(name=@lastuser,@lastval+1,1)),
name=(@lastuser:=name)
ORDER BY name;
SELECT * FROM number ORDER BY name,val;
+------+------+
| name | val |
+------+------+
| bar | 1 |
| bar | 2 |
| foo | 1 |
| foo | 2 |
| foo | 3 |
+------+------+
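On engines with window functions, the same per-group numbering can be expressed directly with ROW_NUMBER() rather than user variables. A self-contained demo using SQLite through Python's stdlib (assumes a SQLite build with window-function support, 3.25 or later; the idea carries over to other engines that support window functions):

```python
import sqlite3

# Recreate the example table from the post in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE number (name TEXT, val INT DEFAULT 0)")
conn.executemany("INSERT INTO number (name) VALUES (?)",
                 [("foo",), ("bar",), ("foo",), ("foo",), ("bar",)])

# ROW_NUMBER() restarts at 1 for each partition, i.e. each name group.
rows = conn.execute("""
    SELECT name,
           ROW_NUMBER() OVER (PARTITION BY name) AS val
    FROM number
    ORDER BY name, val
""").fetchall()

for name, val in rows:
    print(name, val)
# bar 1
# bar 2
# foo 1
# foo 2
# foo 3
```

Unlike the user-variable trick, this numbers rows in the result set without modifying the table, and it doesn't depend on evaluation-order behavior of assignments inside an UPDATE.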
SugarCRM to adopt the GPLv3. SiCortex secures $10m in venture debt. BMC opens up on OSS plans. (and more)
SugarCRM Open Source Project Announces Adoption of GPL v3 Free & Open Source Software (FOSS) License, SugarCRM (Press Release)
SiCortex Ramps up with $10 Million in Venture Debt, SiCortex (Press Release)
ITema Releases the First Enterprise Service Bus for PHP Developers, ITema (Press Release)
Entrust Contributes Essential PKI Technology Component to Open-Source Community, Entrust (Press Release)
…
I have a small website that I've built for my wife, who is a realtor. It's a simple site that tracks properties, showings, etc., and it was built using ASP.Net and SQL Server. Recently I decided to move it to Connector/Net and MySQL 5.1, so I needed a way to migrate the data. Enter the MySQL Migration Toolkit.
This is a terrific tool that can migrate data from various databases into MySQL. My SQL Server instance was set up to support mixed mode authentication, and I had enabled the SQL Server Browser service and the TCP/IP protocol, so I didn't anticipate any problems. I was almost right.
The one thing that tripped me up was that I was using a named instance of SQL Server. My instance was named SQLExpress, so the hostname of the instance was .\SQLExpress. The problem is that the JDBC driver the toolkit uses doesn't accept that as a hostname. To get it to work you have to enter a …
Listening to Josh Berkus's presentation at OSCON today, I decided to take a closer look at the SPECjAppServer benchmark results which were published by PostgreSQL recently and which, as Josh puts it, "show that a properly tuned PostgreSQL is not only as fast or faster than MySQL, but almost as fast as Oracle (since the hardware platforms are different, it's hard to compare directly)."
If you look at the Benchmark Results List, you will see MySQL scores 720.56 JOPS and PostgreSQL scores 778.14 JOPS on 12 cores. At first glance, this seems to show PostgreSQL is roughly 8% faster.
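The headline gap is easy to check from the two quoted scores (JOPS is the SPECjAppServer throughput metric; higher is better):

```python
# Scores quoted above from the published SPECjAppServer results.
mysql_jops = 720.56
postgresql_jops = 778.14

# Relative advantage of the PostgreSQL result over the MySQL result.
advantage = (postgresql_jops / mysql_jops - 1) * 100
print(f"PostgreSQL leads by {advantage:.1f}%")  # PostgreSQL leads by 8.0%
```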
If you take a closer look, however, you will notice the hardware is different - the MySQL benchmark uses a Sun Fire X4100, available in Nov 2005 …
While MySQL AB CEO Mickos is in no hurry to adopt the GPLv3 (see blog post), SugarCRM just announced (press release) at OSBC 2007 that it would be adopting the GPLv3 as a replacement for its Sugar Public License, a variant of the Mozilla Public License with an attribution clause. This change applies to the company's release of Sugar Community Edition 5.0 and future releases. Does this mean that SugarCRM is abandoning the attribution clause within the new license? Nope. The GPLv3 allows for the inclusion of an attribution clause in the appendices.
Why would SugarCRM pursue this move? It may see it as an opportunity to align itself more closely with the Open Source Initiative (OSI). SugarCRM has been taking a …