I’ve recently become supremely disappointed in the availability of Nagios checks for RAID cards. Too often, I see administrators rely on chance (or their hosting provider) to discover failed drives, a dying BBU, or degraded capacity on their RAID cards. So I began work on check_raid (part of check_mysql_all) to provide a suite of [...]
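To give a sense of how a check like this gets wired into monitoring (my illustration, not from the post; the plugin path and host name are hypothetical), the usual pattern is a Nagios command plus a service definition:

define command {
    command_name  check_raid
    command_line  /usr/lib/nagios/plugins/check_raid
}

define service {
    use                  generic-service
    host_name            db01
    service_description  RAID health
    check_command        check_raid
}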
The linux.conf.au organisers have given all miniconfs an additional few weeks to spruik for more proposal submissions, huzzah!
So if you didn’t submit a proposal because you weren’t sure whether you’d be able to attend LCA2010, you now have until October 23 to convince your boss to send you and get your proposal in.
While preparing the Percona-XtraDB template to run in the RightScale environment, I noticed that IO performance on EBS volumes in the EC2 cloud is not quite perfect. So I spent some time benchmarking volumes. The interesting part with EBS volumes is that each one appears as a device in your OS, so you can easily make a software RAID from several volumes.
So I created 4 volumes (I used an m1.large instance), and made:
RAID0 on 2 volumes as:
mdadm -C /dev/md0 --chunk=256 -n 2 -l 0 /dev/sdj /dev/sdk
RAID0 on 4 volumes as:
mdadm -C /dev/md0 --chunk=256 -n 4 -l 0 /dev/sdj /dev/sdk /dev/sdl /dev/sdm
RAID5 on 3 volumes as:
mdadm -C /dev/md0 --chunk=256 -n 3 -l 5 /dev/sdj /dev/sdk /dev/sdl
RAID10 on 4 volumes in two steps:
mdadm -v --create /dev/md0 --chunk=256 --level=raid1 --raid-devices=2 /dev/sdj /dev/sdk
mdadm -v --create /dev/md1 --chunk=256 --level=raid1 --raid-devices=2 /dev/sdl /dev/sdm
mdadm -v --create /dev/md2 --chunk=256 --level=raid0 --raid-devices=2 /dev/md0 /dev/md1
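Once an array is assembled, it is worth confirming its state and putting a filesystem on it before benchmarking (a minimal sketch; the XFS choice and mount point are my assumptions, and /dev/md2 is the RAID10 device built above, while the single-step arrays are /dev/md0):

cat /proc/mdstat          # confirm the array is active and any resync has finished
mkfs.xfs /dev/md2         # filesystem on top of the md device
mkdir -p /mnt/ebsraid
mount /dev/md2 /mnt/ebsraid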
Only a little over a week before the User Conference and I am still burning the midnight oil to get as much information for my presentations as possible. I thought I would tease you a bit here. What do you get when you put 4 Intel X-25E (Sun-branded) SSDs running RAID10 in a Sun Fire X4450 and run the sysbench fileio test on it?
[Chart: sysbench fileio results for the four cache configurations (controller cache and drive cache each off/on): NO CTL / NO DRIVE, NO CTL / W DRIVE, W CTL / NO DRIVE, W CTL / W DRIVE]
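For the drive-cache half of those configurations, the usual Linux knob is hdparm (my illustration, not from the post; the device name is hypothetical, and the controller cache is toggled through the controller's own BIOS or CLI instead):

hdparm -W0 /dev/sda   # disable the drive's write cache
hdparm -W1 /dev/sda   # re-enable it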
This is part 3 in my RamSan series.
While I am confident the read-only test was a reasonably good test (I just needed to push more), my mixed-load test was marred by issues. It was really a quick attempt to get a heavy read/write workload. I ran into issues with how I wrote this, so I will spare you the details. Some flash devices are notoriously poor performers on writes, so it's important to at least briefly look at this. What I will share are the IOPS & latency numbers from this test. The mixed workload does updates & selects at this point; these are a mix of PK updates, secondary index updates, etc. These are typically built to run faster and lighter than the read-only, IO-bound workload.
By the 11th interval the RamSan was pretty much complete. The peaks are what's interesting… let's look at this in a slightly different way.
So in the admittedly flawed mixed …[Read more]
This is part 2 of my RamSan series.
In my normal suite of benchmarks I typically run dbt2 & sysbench oltp benchmarks next… and I did run them, but to be honest they just weren't that interesting. They showed an improvement over the Intel SSD results I ran on frankenmatt, but it was difficult to provide an apples-to-apples comparison. The server hardware was way different (CPU, memory, controller, etc.). Plus I typically run a test without flash, then a test with flash, and run tests with varying amounts of memory… the test box had 2GB of memory and sparse internal disk, so my normal test cycles were already in jeopardy. For what I ran, I was pushing CPU limits long before I was hitting the IOPS I saw above. In fact, in a 100-warehouse test I ended up peaking @ 1200 iops while the CPU was @ 100%.
The challenge is building an effective solution that will easily maximize …[Read more]
Well, it's about time I posted this :) This is part 1 of 3 in my RamSan series.
Those who have paid attention to my blog know I love talking IO! I also love performance. Absolutely love it. Love disk, disk capacity, IO performance, solid state… So as I march towards my UC session on MySQL Performance on Solid State Disk, my goal is to try and test as many high-end solid state disk systems as possible. All the vendors have been great, giving me access to some really expensive and impressive toys. I finished up testing Texas Memory Systems' flash appliance, the RamSan-500, this week and wanted to post some numbers and some thoughts. TMS makes RamSan appliances that merge disk and RAM into really fast SANs. I go a ways back with TMS; I deployed an Oracle RAC installation on one of their …[Read more]
Seeing my recent love affair with solid state drives, I thought I would test drive one of the latest greatest drives out there: the 80GB Intel X-25-M. Like a child on Christmas morning, I felt true excitement as the generic UPS envelope arrived on my porch today.
While it did not show up until late in the day, I can't just let it sit there without starting to test it, can I?
Benchmarks are running as I write this, and I will provide a full breakdown of the drive's performance as I finish up the tests.
But to whet your appetite, check this out:
50-50 read/write sysbench test: 1899 IO requests per second!!! That's huge!!!
That's compared to the 284 IOPS I got on the Memoright GT: a performance improvement of 6.6x, with a higher capacity (80GB -vs- 32GB) and a lower cost per GB ($773 -vs- $680 per drive)… outstanding!!!
Here are the first unverified sysbench test runs:
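For context, a 50/50 read/write sysbench fileio run of the sort quoted above would look roughly like this (my sketch, assuming the sysbench 0.4 syntax of that era; the file size and runtime are made up):

sysbench --test=fileio --file-total-size=16G prepare
sysbench --test=fileio --file-total-size=16G --file-test-mode=rndrw \
         --file-rw-ratio=1 --max-time=300 --max-requests=0 run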
Pimp My Drive by Richard and Barb
There’s remarkably little information online about using MySQL on ZFS, successfully or not, so I did what any enterprising geek would do: Built a box, threw some data on it, and tossed it into production to see if it would sink or swim.[Read more]
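For anyone repeating the experiment, the common ZFS-for-MySQL tuning of the time came down to a couple of dataset properties (my sketch, not from the post; the pool and dataset names are hypothetical):

zfs create tank/mysql
zfs set recordsize=16k tank/mysql   # match InnoDB's 16KB page size
zfs set atime=off tank/mysql        # skip access-time writes on datafile reads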
In a lot of environments.
Peter gives a nice overview of why you don't always need to invest in big fat redundant hardware.
We already tackled the topic last year ..
Now I often get weird looks when I dare to mention that RAID is obsolete .. people fail to hear the "in a lot of environments" part.
Obviously the catch is in the second part: you won't be doing this for the small shop around the corner with just one machine. You'll only be doing this in an environment where you can work with a redundant array of inexpensive servers, not with a single server that has to sit in a remote and isolated location.
Next to that, there are situations where you will still be using RAID, not for redundancy but for disk throughput (RAID0 striping, for example).