- Brown Cloud Marketing -- an advertorial "interviewing" the GM of a company offering "DNS in the cloud". This might be a worthwhile service, but the way he markets it (by dismissing open source as "freeware" and the market leader as "legacy") reveals a rich vein of bozo. In his telling, freeware legacy DNS is the internet's dirty little secret (actually, it's the reason we have a functioning DNS), Nominum software was written 100 percent from the ground up, and software whose source code is not open for everybody to look at is inherently more secure (security through obscurity: being naked and assuming everyone watching is blind). The Internet kindly did the poor man's homework: a screenshot of a cross-site scripting …
CodePlex, patents and Linux code. An interesting few days for Microsoft open source.
Follow 451 CAOS Links live @caostheory on Twitter and Identi.ca
“Tracking the open source news wires, so you don’t have to.”
CodePlex, CodePlex, CodePlex!
Microsoft launched the CodePlex Foundation to facilitate open source contributions, and confirmed the departure of Sam Ramji.
Patents, Patents, Patents!
The OIN confirmed the acquisition of 22 patents formerly owned by …
Cloud computing is disrupting many aspects of computing. One need only witness the manner in which online applications like Google Docs and Salesforce.com are disrupting entrenched competitors. Soon, cloud computing will significantly disrupt the database market, for the reasons explained below.
One of the most powerful forces in technology is the price/performance ratio. A significant decline in price or a significant increase in performance can cause disruption; when you get both at once, the disruption is profound. This is exactly what is coming to the database market.
The Past
Moore’s Law enabled the CPU to process data faster than the hard disk drive could get the data to the CPU. Because getting data to the CPU was the bottleneck, the database that solved that bottleneck would have a performance advantage.
The shared-disk database had two glaring …
[Read more]
Intalio acquires Jetty. Red Hat updates JBoss platform. $12m funding for Medsphere. And more.
# Intalio acquired Webtide, developer of Jetty application server.
# Red Hat delivered JBoss Enterprise Application Platform 5.0, as well as JBoss Operations Network (ON) 2.3 and launched Catalyst partner program.
# Medsphere raised $12m to support ongoing development and expansion in open source health IT.
…[Read more]
Joyent and Sun have announced a highly tuned MySQL Accelerator that claims 2x-4x better performance than EC2 (but see comments). Joyent focuses on "Enterprise-Class Cloud Computing", with offerings on Public Cloud and the Private …
The Joyent Accelerator for MySQL is apparently 2-4 times faster than an EC2 instance, but there's no mention of configuration, database size, or even what the queries were in their record-breaking performance tests. I assume that Joyent has tuned their MySQL install, so it hardly seems fair to leave the default configuration in place on the EC2 test.
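A fair comparison would tune the EC2 side as well; a typical InnoDB-oriented my.cnf fragment for such a test might look like the sketch below (all values are illustrative assumptions on my part, not Joyent's actual settings):

```ini
[mysqld]
# Assumption: the instance has several GB of RAM; give most of it to InnoDB.
innodb_buffer_pool_size        = 4G
# Larger redo logs reduce checkpoint pressure under write-heavy load.
innodb_log_file_size           = 256M
# Flush the log once per second instead of per commit (trades durability for throughput).
innodb_flush_log_at_trx_commit = 2
# Bypass the OS page cache for data files to avoid double buffering.
innodb_flush_method            = O_DIRECT
```

Buffer pool sizing alone can swing a read-mostly benchmark by an order of magnitude, which is exactly why publishing the configuration matters.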
If you look at Vadim's EBS benchmarks (particularly random read/write), it looks like they may have a very good product, but as it stands we're left with the impression that they have something to hide.
1,245 transactions per second isn't very much these days if they …
So during preparation of the XtraDB template for EC2, I wanted to understand what I/O characteristics we can expect from an EBS volume (I am speaking about a single volume, not RAID as in my previous post). Yasufumi did some benchmarks and pointed me to an interesting behavior: there seem to be several levels of caching on an EBS volume.
Let me show you. I ran a sysbench random-read I/O benchmark on files with sizes from 256MB to 5GB, in 256MB steps. And, as Morgan pointed out to me, I first wrote to the whole volume, to avoid the first-write penalty:
dd if=/dev/zero of=/dev/sdk bs=1M
For reference, the script is:
#!/bin/sh
set -u
set -x …
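The script is truncated above; purely as an illustration, a loop over those file sizes might look like the dry-run sketch below (the sysbench fileio options are my assumptions, not the original script's, and the commands are only echoed rather than executed):

```shell
#!/bin/sh
# Dry-run sketch: iterate file sizes from 256M to 5G in 256M steps and
# print the sysbench random-read commands that would run at each step.
# The sysbench options here are assumptions -- the original script is truncated.
set -u

for size in $(seq 256 256 5120); do
    echo "sysbench --test=fileio --file-total-size=${size}M --file-test-mode=rndrd prepare"
    echo "sysbench --test=fileio --file-total-size=${size}M --file-test-mode=rndrd --max-time=60 run"
    echo "sysbench --test=fileio --file-total-size=${size}M cleanup"
done
```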
During preparation of the Percona-XtraDB template to run in the RightScale environment, I noticed that I/O performance on an EBS volume in the EC2 cloud is not quite perfect. So I have spent some time benchmarking volumes. An interesting aspect of EBS volumes is that each one appears as a device in your OS, so you can easily make a software RAID from several volumes.
So I created 4 volumes (I used an m1.large instance), and made:
RAID0 on 2 volumes as:
mdadm -C /dev/md0 --chunk=256 -n 2 -l 0 /dev/sdj /dev/sdk
RAID0 on 4 volumes as:
mdadm -C /dev/md0 --chunk=256 -n 4 -l 0 /dev/sdj /dev/sdk /dev/sdl /dev/sdm
RAID5 on 3 volumes as:
mdadm -C /dev/md0 --chunk=256 -n 3 -l 5 /dev/sdj /dev/sdk /dev/sdl
RAID10 on 4 volumes in two steps:
mdadm -v --create /dev/md0 --chunk=256 --level=raid1 --raid-devices=2 …
Of course it’s not quite that simple. I’ve just decommissioned an old Red Hat 7.1 box (a hosted dedicated server) that had been in service since 2002, so about 7 years. Specs? Celeron 1.3GHz, 512MB RAM, 60GB HD. Not too bad in the RAM and disk realm. It did a good job, but goodness am I glad to be rid of it!
Not having that box online is safer for the planet, although (perhaps amazingly, considering the age of some of the externally facing software components) it was never compromised. I consider that mostly luck, by the way; I’m not naive about that. But it’s not easy to move off old servers; it’s generally (and has been in this case, too) a lot of work.
Of course hosting has moved on since 2002; places like Linode offer more for less money per month. Of course they virtualise (Xen-based in this case) and that’s not been my favourite (particularly for DB servers but depending …
[Read more]