
Displaying posts with tag: raid

Some LSI 9211-8i issues on Windows and Linux
tl;dr:
Make sure you flash an LSI 9211 to IT firmware rev #14 to get it to work
with Linux and SSD TRIM. You may have to downgrade from newer firmware
to older firmware to get the card to work.
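For reference, flashing is typically done with LSI's sas2flash utility; a minimal sketch (the firmware file name below is the usual one for the 9211-8i IT image, but treat it as an assumption and use the file from your own download):

sas2flash -listall            # identify the adapter
sas2flash -o -f 2118it.bin    # flash the Phase 14 IT firmware image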


Finding a SATA III controller with more than one PCI-e lane
After a recent hardware issue I decided to upgrade my computer to use new Intel 520 120GB SSD drives in RAID for improved performance.  The motherboard I use (an ASUS Rampage III Extreme) has a Marvell SATA III controller with two ports, but I discovered that it is connected via only a single PCI-e lane (each lane can do at most 400MB/sec*).  This means that it can't effectively support even a single Intel 520, because one device can saturate the SATA III bus (an Intel 520 is rated at up to 550MB/sec sequential write).

So I went on a quest for a new SATA 3 controller. To

  [Read more...]
How slow can SSD be or why is testing a new server performance important?

Recently we helped a customer migrate their entire application stack from one data center to another. Before we were brought on board, the customer had already placed an order for a new set of servers with the new hosting provider. All of them were supposed to be high-end systems – many CPU cores, plenty of RAM and a RAID array built on top of SSD drives. As the new machines became available to us, we began setting up the new environment. At some point it turned out that the new machines were actually slower than the several-year-old systems, and their load was much higher under comparable traffic.

We examined several of the new servers, and each time the conclusion was that the problems were related to poor I/O performance. In the benchmarks a RAID 10 array

  [Read more...]
On SSDs – Lifespans, Health Measurement and RAID

Solid State Drives (SSDs) have made it big and have made their way not only into desktop computing but also into mission-critical servers. SSDs have proved to be a breakthrough in IO performance and leave HDDs far, far behind in terms of random IO performance. Random IO is what most database administrators are concerned about, as it makes up 90% of the IO pattern visible on database servers like MySQL. I have found the Intel 520-series and Intel 910-series to be quite popular, and they do give very good numbers in terms of random IOPS. However, it's not just performance that you should be concerned about; failure prediction and health gauges are also very important, as loss of data is a big no-no. There is a great deal of misconception about the endurance level of SSDs, as it's mostly

  [Read more...]
Should RAID 5 be used in a MySQL server?

Usually the answer should be “no!”. RAID level 5 is hardly ever a good choice for any database storage. It comes with very high overhead, as each write turns into a sequence of four physical I/O operations: two reads and two writes, in order not only to update a data block, but also to re-calculate and update the corresponding parity block. The resulting penalty is not just slower writes. The extra operations mean the storage I/O capacity is reduced too.
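To make the cost concrete, here is a back-of-the-envelope sketch (the disk count and per-disk IOPS below are illustrative assumptions, not figures from this post):

# RAID 5 turns each random write into 2 reads + 2 writes.
disks=6; iops_per_disk=150; penalty=4
echo "raw capacity:    $(( disks * iops_per_disk )) IOPS"            # 900
echo "effective write: $(( disks * iops_per_disk / penalty )) IOPS"  # 225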

Another disadvantage of RAID 5 is its very poor performance when it works in degraded mode. In such a configuration a disk failure means some data was actually lost, but RAID 5 can rebuild the missing pieces on the fly as requests arrive. Reconstructing blocks, however, is nowhere near as efficient as just reading them from disk.

In most cases using alternative

  [Read more...]
Setting up XFS on Hardware RAID — the simple edition

There are about a gazillion FAQs and HOWTOs out there that talk about XFS configuration, RAID IO alignment, and mount point options.  I wanted to try to put some of that information together in a condensed and simplified format that will work for the majority of use cases.  This is not meant to cover every single tuning option, but rather to cover the important bases in a simple and easy to understand way.

Let’s say you have a server with standard hardware RAID setup running conventional HDDs.

RAID setup

For the sake of simplicity you create one single RAID logical volume that covers all your available drives.  This is the easiest setup to configure and maintain, and it is the best choice for operability in the majority of normal configurations.  Are there ways to squeeze more performance out of a server by dividing up the logical volumes: perhaps,

  [Read more...]
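As a companion to the excerpt above, here is a minimal sketch of the kind of aligned XFS setup it leads into (the chunk size, disk count and device name are my assumptions; nobarrier is only sensible with a battery-backed write cache):

# Assuming a hardware RAID 10 of 8 disks with a 256KB chunk,
# i.e. 4 data-bearing stripe members (su = stripe unit, sw = stripe width):
mkfs.xfs -d su=256k,sw=4 /dev/sdb
mount -o noatime,nobarrier /dev/sdb /var/lib/mysql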
Green HDs and RAID Arrays
Some so-called “Green” harddisks don’t like being in a RAID array. These are primarily SATA drives, and they gain their green credentials by being able to reduce their RPM when not in use, as well as other aggressive power management trickery. That’s all cool and in a way desirable – we want our hardware to use less power whenever possible! – but the time it takes some drives to “wake up” again is longer than a RAID setup is willing to tolerate.

First of all, you may wonder why I bother with SATA disks at all for RAID. I’ve written about this before, but they simply deliver plenty for much less money. Higher RPM doesn’t necessarily help you for a db-related (random access) workload, and for tasks like backups speed may not be a primary concern. SATA disks have a shorter command queue than SAS, so that means they might need  [Read more...]
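One common mitigation for the spin-down behaviour described above, on drives that honour it, is to tune power management with hdparm; this is my suggestion rather than something from the post:

hdparm -B 255 /dev/sda    # disable Advanced Power Management
hdparm -S 0 /dev/sda      # disable the standby (spin-down) timer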
HDlatency – now with quick option
I’ve done a minor update to the hdlatency tool (get it from Launchpad); it now has a --quick option to have it only do its tests with 16KB blocks rather than a whole range of sizes. This is much quicker, and 16KB is the InnoDB page size, so it’s the most relevant for MySQL/MariaDB deployments. However, I didn’t just remove the other tests, because they can be very helpful in tracking down problems and putting misconceptions to rest. On SANs (and local RAID of course) you have things like block sizes and stripe sizes, and opinions on what might be faster. Interestingly, the real world doesn’t always agree with the opinions.

As Mark Callaghan correctly pointed out when I first published it, hdlatency does not provide anything new in terms of functionality; the db IO tests of sysbench cover it all. A key advantage of hdlatency is  [Read more...]
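A hypothetical quick run (the Launchpad branch name and invocation are my assumptions; check the tool's own usage output):

bzr branch lp:hdlatency    # the post says the tool lives on Launchpad
cd hdlatency
./hdlatency --quick        # 16KB-block tests only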
Aspersa’s summary tool supports Adaptec and MegaRAID controllers

I spent a little time yesterday doing some things with the “summary” tool from Aspersa. I added support for summarizing status and configuration of Adaptec and LSI MegaRAID controllers. I also figured out how to write a test suite for Bash scripts, so most major parts of the tool are fully tested now. I learned a lot more sed and awk this weekend.

There is really only one way to get the status of Adaptec controllers (/usr/StorMan/arcconf), but LSI controllers can be queried through multiple tools. I added support for MegaCli64, as long as it’s located in the usual place, /opt/MegaRAID/MegaCli/MegaCli64. I am looking for feedback and/or help on supporting other methods of getting status from LSI controllers, such as megarc and omreport. If you can contribute
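For context, these are the kinds of queries such a summary tool wraps (both CLIs exist at the paths the post gives; which exact subcommands the tool runs is my assumption):

/usr/StorMan/arcconf getconfig 1                      # Adaptec: config/status of controller 1
/opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aALL     # LSI: adapter summary
/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL   # LSI: logical drive status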

  [Read more...]
Using ext4 for MySQL

This week with a client I saw ext4 used for the first time on a production MySQL system, which was running Ubuntu 9.10 (Karmic Koala). I observed today, while installing 9.10 Server locally, that ext4 is now the default option. The ext4 filesystem is described as offering better performance, reliability and features, and there is also information about improvements in journaling.

At OSCON 2009 I attended a presentation on Linux Filesystem Performance for Databases by Selena Deckelmann in which ext4 was included. While it provided some improvements in sequential reading and writing, there were issues with random I/O, which is the key access pattern for RDBMS products.

Is the RAID configuration (e.g. RAID 5, RAID 10), stripe size,

  [Read more...]
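On the stripe-size question the excerpt above ends with: ext4 can be told about RAID geometry at mkfs time. A minimal sketch (chunk size, disk count and device are my assumptions):

# stride = chunk / block size; stripe-width = stride * data-bearing disks.
# Assuming a 256KB chunk, 4KB blocks and a 4-data-disk RAID 10:
mkfs.ext4 -b 4096 -E stride=64,stripe-width=256 /dev/sdb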
Knowing your PERC 6/i BBU
I’ve recently become supremely disappointed in the availability of Nagios checks for RAID cards. Too often I see administrators rely on chance (or their hosting provider) to discover failed drives, a dying BBU, or degrading capacity on their RAID cards. So I began work on check_raid (part of check_mysql_all) to provide a suite of [...]
Storage Miniconf Deadline Extended!

The linux.conf.au organisers have given all miniconfs an additional few weeks to spruik for more proposal submissions, huzzah!

So if you didn’t submit a proposal because you weren’t sure whether you’d be able to attend LCA2010, you now have until October 23 to convince your boss to send you and get your proposal in.

EC2/EBS single and RAID volumes IO benchmark

During preparation of the Percona-XtraDB template to run in the RightScale environment, I noticed that IO performance on EBS volumes in the EC2 cloud is not quite perfect. So I have spent some time benchmarking volumes. The interesting part with EBS volumes is that you see each one as a device in your OS, so you can easily make a software RAID from several volumes.

So I created 4 volumes (I used an m1.large instance), and made:

RAID0 on 2 volumes as:
mdadm -C /dev/md0 --chunk=256 -n 2 -l 0 /dev/sdj /dev/sdk

RAID0 on 4 volumes as:
mdadm -C /dev/md0 --chunk=256 -n 4 -l 0 /dev/sdj /dev/sdk /dev/sdl /dev/sdm

RAID5 on 3 volumes as:
mdadm -C /dev/md0 --chunk=256 -n 3 -l 5 /dev/sdj /dev/sdk /dev/sdl

RAID10 on 4 volumes in two steps:

mdadm
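(The excerpt cuts off here; the following is my reconstruction of the usual two-step approach, following the pattern above, not the post's original commands.)

mdadm -C /dev/md1 -n 2 -l 1 /dev/sdj /dev/sdk                # mirror pair 1
mdadm -C /dev/md2 -n 2 -l 1 /dev/sdl /dev/sdm                # mirror pair 2
mdadm -C /dev/md0 --chunk=256 -n 2 -l 0 /dev/md1 /dev/md2    # stripe over the mirrors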

  [Read more...]
Tease me, SUN SSD Benchmarks

Only a little over a week before the User Conference, and I am still burning the midnight oil to get as much information for my presentations as possible. I thought I would tease you a bit here. What do you get when you put 4 Intel X25-E (Sun-branded) SSDs running RAID10 in a Sun 4450 and run the sysbench fileio test on it?
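For reference, a fileio run of that era looks roughly like this (sysbench 0.4 syntax; the file size, thread count and read/write ratio are my assumptions, not the post's):

sysbench --test=fileio --file-total-size=16G --num-threads=16 prepare
sysbench --test=fileio --file-total-size=16G --num-threads=16 \
         --file-test-mode=rndrw --file-rw-ratio=1 --max-time=300 run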

              NO CTL,     NO CTL,     W CTL,      W CTL,      NO CTL,
              NO DRIVE    W DRIVE     NO DRIVE    W DRIVE     NO DRIVE
              (Hardware)  (Hardware)  (Hardware)  (Hardware)  (Software)
50% Reads     3449.25     7744.36     2585.44     8656.63     3714.53
67% Reads     4460.67     8696.23     4169.18     9325.29     4646.03
75% Reads     5538.94     10016.72    5233.61     9942.23     5930.73
80% Reads     6886.81     12385.5     6194.27     10260.07    7378.55
83% Reads     7067.62     12895.92    6958.61     10247.36    7911.37
100% Reads    16608.69    16438.89    10814.62    11064.44    18050.14
100% Writes   1983.76     6469.18     1904.77     4328.93     1939.74

*NO CTL or W CTL : No
  [Read more...]
Testing Performance on a Texas Memory System RAMSAN-500 pt3

This is part 3 in my RAMSan Series.

While I am confident the read-only test was a reasonably good test (I just needed to push more), my mixed-load test was marred by issues.  It was really a quick attempt to get a heavy read/write workload.  I ran into issues with how I wrote this, so I will spare you the details.  Some flash devices are notoriously poor performers on writes, so it's important to at least briefly look at this.  What I will share are the IOPS & latency numbers from this test.  The mixed workload does updates & selects at this point; these are a mix of PK updates, secondary index updates, etc.  These are typically built to run faster and smaller than the read-only IO-bound workload.

By the 11th interval the RamSan was pretty much done.  The peaks are what's interesting…  let's look

  [Read more...]
Testing Performance on a Texas Memory System RAMSAN-500 pt2

This is part 2 of My RAMSan Series.

In my normal suite of benchmarks I typically run dbt2 & sysbench oltp benchmarks next…  and I did run them, but to be honest they just weren't that interesting.  They showed an improvement over the Intel SSD results I ran on frankenmatt, but it was difficult to provide an apples-to-apples comparison.  The server hardware was way different (CPU, memory, controller, etc.).  Plus I typically run a non-flash test -vs- a flash test, and run tests with varying amounts of memory… the test box had 2GB of memory and sparse internal disk, so my normal test cycles were already in jeopardy.  For what I ran, I was pushing CPU limits long before I was hitting the IOPS I saw above.  In fact, in a 100W test I ended up peaking @ 1200 iops while the CPU was @ 100%.

The challenge is building an effective solution that will easily maximize

  [Read more...]
Testing Performance on a Texas Memory System RAMSAN-500

Well, it's about time I posted this :)  This is part 1 of 3 in my RamSan series.

Those who have paid attention to my blog know I love talking IO!  I also love performance.  Absolutely love it.  Love disk, disk capacity, IO performance, solid state…  So as I march towards my UC session on MySQL Performance on Solid State Disk, my goal is to try and test as many high-end solid state disk systems as possible.  All the vendors have been great, giving me access to some really expensive and impressive toys.  I finished up testing Texas Memory Systems' flash appliance, the RamSan 500, this week and wanted to post some numbers and some thoughts.  TMS makes RamSan appliances that merge disk and RAM into really fast SANs.  First, I go a ways back with TMS: I deployed an Oracle RAC installation on one of their RamSan 300s several

  [Read more...]
Intel X-25M 80GB in the house… woot!

Seeing my recent love affair with solid state drives, I thought I would test drive one of the latest and greatest drives out there, the 80GB Intel X-25M.  Like a child on Christmas morning, I felt true excitement as the generic UPS envelope arrived on my porch today.

While it did not show up until late in the day, I can't just let it sit there without starting to test it, can I?

Benchmarks are running as I write this, and I will provide the full breakdown of the drive's performance as I finish up the tests.

But to whet your appetite, check this out:

50-50 read/write sysbench test:  1899 IO requests per second!!!  That's huge!!!

That's compared to the 284 IOPS I got on the Memoright GT: a performance improvement of 6.6x, with a higher capacity (80GB -vs- 32GB) and a lower cost per GB ($773 -vs- $680)…  outstanding!!!

Here are the first unverified sysbench test runs:

  [Read more...]
Success with OpenSolaris + ZFS + MySQL in production!

There’s remarkably little information online about using MySQL on ZFS, successfully or not, so I did what any enterprising geek would do: built a box, threw some data on it, and tossed it into production to see if it would sink or swim.

I’m a Linux geek, have been since 1993 (Slackware!). All of SmugMug’s datacenters (and

  [Read more...]
Raid is obsolete

In a lot of environments.

Peter gives a nice overview of why you don't always need to invest in big fat redundant hardware.

We tackled the topic last year already…

Now I often get weird looks when I dare to mention that RAID is obsolete… people fail to hear the "in a lot of environments" part.

Obviously the catch is in the second part: you won't be doing this for your small shop around the corner with just one machine. You'll only be doing this in an environment where you can work with a redundant array of inexpensive servers. Not with a server that has to sit in a remote and isolated location.

Next to that, there are situations where you will be using RAID not for redundancy, but for disk throughput.

Innodb RAID performance on 5.1

I've been doing some benchmarking recently to satisfy the curiosity about 5.1's performance compared with 4.1.  The major question this time revolves around how much additional performance an external RAID array can provide (for us it's typically beyond the 6 drives a Dell 2950 can hold).
These tests were done using an MSA-30 drive enclosure with 15k-SCSI drives.  The testing framework is sysbench oltp.  The test names are hopefully fairly obvious: selects = single selects, reads = range tests, xacts = transaction tests, etc.  Transaction tests count individual queries, not transactions.  The "Rdm" tests use a uniform distribution, whereas in the non-"Rdm" tests 75% of queries use 10% of the rows.


  [Read more...]
