Dell PowerEdge R610 & PERC 6/i Disk Comparison

I’ve recently done some very basic disk performance testing of a Dell PowerEdge R610 with 24 GB of RAM (1333 MHz), dual Intel Xeon X5550 CPUs, a PERC 6/i RAID controller, and a bunch of 146 GB 15K RPM 2.5″ disks, as well as four of the Dell 50 GB enterprise SSDs (which are Samsung drives). I tested various combinations of RAID 0, 1, 5, 6, 10, and 50 with 1, 2, 3, 4, and 6 disks.

While the RAID controller configurations varied, all of them had the element size set to 64 KB, the read policy set to Adaptive Read Ahead, and the write policy set to Write Back. The PERC 6/i firmware was 6.2.0-013. The operating system was Red Hat Enterprise Linux 5 Update 4, 64-bit, updated to the latest patches as of 3/10/2010. The filesystems were all LVM-based ext3 filesystems, formatted with “mke2fs -j -m 0 -O dir_index.” For benchmarking I used bonnie++, invoked as “bonnie++ -r 32768” to tell it the machine had 32 GB of RAM. I actually had 24 GB, but overstating the RAM makes bonnie++ use test files larger than the cache, so caching has a negligible effect on the results. I ran each test three times and averaged the results.
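Condensed, the per-configuration procedure looked something like this (a sketch rather than my exact command history; the volume group, logical volume, and mount point names are placeholders):

    # Create and mount an ext3 filesystem on the logical volume for this RAID config
    lvcreate -n bench -L 100G vg0
    mke2fs -j -m 0 -O dir_index /dev/vg0/bench
    mkdir -p /mnt/bench
    mount /dev/vg0/bench /mnt/bench

    # Claim 32 GB of RAM so bonnie++ uses test files larger than the 24 GB
    # actually installed; run three times and average the results
    for i in 1 2 3; do
        bonnie++ -d /mnt/bench -u root -r 32768
    done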

There is one big surprise in this data: the sequential block read performance for 6-disk RAID5. Is that an anomaly in my configuration, or is it really that fast? I will need to revisit it when I set the test environment up again. I would have expected results more consistent with the RAID6 read performance, but perhaps the RAID6 algorithms aren’t as mature as RAID5’s.
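As a back-of-the-envelope sanity check: on large sequential reads a RAID5 array skips the parity blocks, so throughput should scale with roughly N-1 data spindles, versus N-2 for RAID6 and N for RAID0. A quick estimate (the 100 MB/s per-disk rate below is a placeholder, not a measured number):

    # Hypothetical per-disk sequential read rate in MB/s; substitute a measured value
    DISK=100
    echo "6-disk RAID0 ~ $((6 * DISK)) MB/s   # N data spindles"
    echo "6-disk RAID5 ~ $((5 * DISK)) MB/s   # N-1 data spindles"
    echo "6-disk RAID6 ~ $((4 * DISK)) MB/s   # N-2 data spindles"

By this estimate a 6-disk RAID5 should read only about 25% faster than a 6-disk RAID6, so a modest gap is expected; anything far beyond that would point at a configuration or caching effect.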

This isn’t as complete as it could be, and other disk benchmarks, like iozone, do a better job of characterizing disk performance with random workloads, where the SSD would likely do much better. There are also newer disk controllers out there, namely the Dell H700 with 6 Gbps SAS links, that may improve on these scores. But it’s what I needed for something I’m doing, and if it helps someone else I’m glad I posted it.
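For that random-I/O follow-up, an iozone run along these lines would exercise random reads and writes with a file larger than RAM (a sketch; the file size, record size, and path are illustrative):

    # -i 0 writes the test file, -i 2 runs random read/write on it;
    # 64 GB file, 64 KB records, -I uses O_DIRECT to bypass the page cache
    iozone -i 0 -i 2 -s 64g -r 64k -I -f /mnt/bench/iozone.tmp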

Click on the graphic for a larger version that’s more readable.

10 thoughts on “Dell PowerEdge R610 & PERC 6/i Disk Comparison”

    • Well, SSDs are really good for random I/O. These tests were more about sequential disk operations.

      What I really want to do is two different tests in the future: number of spindles vs. performance in RAID5 & 6, and random disk I/O on different media.

  1. My understanding of RAID5 on reads is that it is just a striped read (no parity check), so a 6-disk RAID5 should, in essence, read at the speed of a 5-disk RAID0, with a slight loss from the overhead of skipping the parity blocks.

  2. Doug –

    I am benchmarking a very similar system with the same Samsung SSDs, and I am finding a huge bottleneck in I/O performance. The PERC 6/i is not spreading interrupts across the cores, leading to an IOPS bottleneck at around 24k IOPS. With this bottleneck, my 4 SSDs go about as fast as half of a single SSD. This can be validated by breaking the RAID array and testing the drives individually, then simultaneously (see the sketch after this comment for a quick way to check the interrupt distribution).

    Although my benchmark is highly random, this interrupt bottleneck is likely happening in your system too, and may explain why you’re not getting the full performance of your SSDs.

    I have updated to the 2.6.33.3 kernel in hopes of a driver improvement, but no dice.

    Any further ideas? Feel free to contact me.
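    A quick way to check the interrupt behavior described above (a sketch; the "megasas" label and the IRQ number vary by driver version):

        # See which CPUs are servicing the RAID controller's interrupts
        grep -i megasas /proc/interrupts

        # Move the IRQ to a different set of CPUs (replace NN with the IRQ
        # number from /proc/interrupts; the mask is hex, f = CPUs 0-3).
        # Note this only relocates the single IRQ; it cannot spread one IRQ
        # across cores, which is consistent with the bottleneck above.
        echo f > /proc/irq/NN/smp_affinity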

  3. Brian — I can confirm what you’re seeing; there is very little I/O interrupt distribution. I think the PERC 6/i is just an old design. I’m going to order/steal/borrow an H700, try that, and compare the results.

  4. The H700 firmware will only let you use Dell drives, so I doubt you can run the Samsung SSDs on it. The community outrage over Dell’s decision to make the controller work only with Dell drives has been huge, so they are changing the firmware, but I don’t think the new firmware will be out for a few more months. I am using 2 PCIe Fusion-io 10GB SLC cards and getting good results on my Dell R610. It only has 2 10K SAS drives hooked to the PERC 6/i at the moment for the O/S. However, based on the article, I am thinking of adding 4 15K SAS drives to create a separate RAID-0 for the SQL transaction log, since that is mostly sequential output. Even though the Fusion-io is faster at sequential I/O as well, this would free it up for more random work. This is for a high-performance simulation tester with about a 100 GB database.

  5. Please set the read policy to “no read ahead”; that’s the setting LSI recommends for SSDs on these cards. In fact, I tested it, and read speeds improved drastically with “no read ahead” compared to “adaptive read ahead”.
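     For reference, on LSI-based controllers like the PERC 6/i the read policy can be changed from a running system with the MegaCLI utility (a sketch; the logical-drive and adapter selectors may need adjusting for a given setup):

         # Set all logical drives on all adapters to No Read Ahead;
         # ADRA would switch back to Adaptive Read Ahead
         MegaCli -LDSetProp NORA -LAll -aAll

         # Confirm the current read/write cache policies
         MegaCli -LDInfo -LAll -aAll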
