RAID 5 Is A Cruel Mistress

I’ve long been a fan of RAID 5. Since you only lose one disk’s worth of space to parity, it has been the best way to maximize local disk space. Sure, the performance isn’t the greatest, but I haven’t had applications that taxed the local drives, and the combination of disk space and generally decent performance has been a good trade-off.

In the last six months, though, I’ve had three machines die from a double drive fault. This is the Achilles’ heel of RAID 5: a single drive failure is as much as it can tolerate. In two of those cases the array had a hot spare, and the second drive faulted while the array was rebuilding onto that spare.

This makes me wonder why I’ve gone ten years without any problems, just to be blindsided now.

One answer comes to mind: increased capacities of disks, leading to long array rebuild times.

Think of what happens when a drive fails in a RAID 5 array, especially on an older array that isn’t very busy. If there is a hot spare, the controller starts rebuilding the array, which causes a lot of I/O. If a second drive in the array is already questionable, that load might push it over the edge. Before the rebuild finishes you get a second drive error. Game over.
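A back-of-the-envelope calculation shows how the exposure window grows with disk capacity. This is just a sketch; the 50 MB/s sustained rebuild rate is an assumed figure, and real rates vary with controller load and foreground I/O:

```python
def rebuild_hours(disk_gb, rate_mb_s=50):
    """Rough time to rewrite one member disk during a rebuild.

    rate_mb_s is a hypothetical sustained rebuild rate; actual
    throughput depends on the controller and competing workload.
    """
    return disk_gb * 1024 / rate_mb_s / 3600

for size_gb in (73, 300, 750):
    print(f"{size_gb} GB disk: ~{rebuild_hours(size_gb):.1f} hours at risk")
```

The point is simply that a 750 GB member keeps the array degraded roughly ten times longer than a 73 GB one, so the window for a second fault grows in step with capacity.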

So what do I do about this? Change RAID levels? RAID 0 is out. RAID 1 can’t survive both of its drives failing. RAID 1+0 (10) can survive a double drive failure if the two failed drives land in different mirrored pairs. Stick to smaller drives? With less capacity the rebuilds finish faster, which shrinks the exposure window. Use faster drives? Maybe switching from 10,000 RPM disks to 15,000 RPM disks would help; they rebuild faster, which also minimizes the exposure to a double disk fault. However, 15K RPM disks seem to be more sensitive to cooling issues, making them less reliable and more prone to faulting if the environment isn’t perfect.

Maybe I can make disks irrelevant… no, I can’t. I can push my applications toward enterprise storage arrays, but this is a big issue there, too. Similarly, the industry movement to embed hypervisors in hardware just shifts the problem to the central disk arrays. I don’t want to shuffle the problem around; I want to solve it. The closest I can get is keeping backup copies of my data in as many places as possible, sharing as little as possible between them: copies in separate data centers, on separate servers and disk arrays, preferably even on separate types of media, like tape.

All of that is expensive, though. Money is the ultimate trade-off with this sort of discussion.

For now I think RAID 1+0 plus a spare, on 73 GB 15K RPM disks might be my new direction.

Comments on this entry are closed.

  • Some configurations support RAID 6, which adds a second parity block and can survive two drive failures. However, most of the double-drive failures I’ve seen involve failed communications between the RAID controller and the disks, as opposed to two authentic drive failures occurring within a short window.

    One thing I always recommend is a two-drive RAID 1 array dedicated to the OS and possibly applications. Data should be housed on a separate array and everything should be backed up to another location.

    On the 10k vs 15k issue, I use a mix of both in both the 2.5″ and 3.5″ form factors and haven’t really seen a higher failure rate with 15k disks.

  • That’s a great point, and one that I didn’t really consider. The problem is that once the controller marks a drive as dead it’s pretty much “game over.” A communications error also seems to be an argument for reducing the number of disks. The more there are the more chances of one of them causing a problem.

    RAID 6 isn’t an option on the Dell PERC5/i, which is the most common. Maybe the follow-on controllers in upcoming revisions of Dell servers will support RAID 6.

  • Distributed data is definitely the way to go. Hospitals I know of do this, for instance, in case one building burns down. In my last job with $ex_employer, we always sold two identical servers, configured as an HA cluster.

    Of course, you need Gigabit Ethernet or even Fibre Channel if you deal with lots of data.

    cheers,
    wjl

  • Another factor to consider is that the drives in these arrays are often deployed at the same time and come from the same manufacturing batch. An imperfection due to a manufacturing anomaly would likely affect the entire batch and cause the drives to fail after similar usage.

  • Excellent point. But how do you deal with that? Order another server at the same time, which will have drives from the same batch? Wait for two months until you get another batch?

    To me it sounds like a mess. Maybe the way out of this is to keep as little data as possible… (pipe dream, I know). :-)

  • Looks like Dell doesn’t offer RAID 6 on either the PERC 5/i or the PERC 5/E. This is a reason to consider HP. I’ve never been a big fan of Dell’s storage solutions. They sometimes change OEMs (and as a result management tools and array configuration data) between generations, which kills consistency and increases data-loss risk on older arrays as older-generation support dries up.

    We’ve deployed hundreds of servers (Dell and HP) and have found that multi-disk communications errors are much more likely to occur on external storage units than the built-in hot-plug storage bays of the tier 1 servers.

  • We mitigate the risk on our critical systems using RAID 1 for the OS volume and RAID 5 for the other volumes. Each volume has a hot spare. Our SQL stuff is clustered with the databases/logs on a SAN which adds yet another layer of protection. The less critical stuff just uses RAID 1 or RAID 5 and we take our chances. In practice, it has worked well for us. We have survived two multi-drive failures this year alone. Of course, we are a smaller IT shop so we limit this strategy to systems that we define as “critical”. That is, systems that are directly responsible for taking money in and getting product out the door.

    Another thing: HP’s servers will generate a predictive drive-failure warning, and HP will do a warranty replacement before the drive fails. This has saved my butt many, many times.

  • The new Dell PERC 6 controller can handle RAID 6. The PERC 6 is shipping with the 3rd-generation 29** series.

  • I just ordered a Dell 2950 III with 5x 250 GB SATA in RAID 6. Has anyone else done the same?

    Any thoughts re: the performance degradation of RAID 6 vs. RAID 5?

    Ken.
