Disk Performance of a 16-drive Dell PowerEdge R910

You can order Dell PowerEdge R910s with as many as 16 disks attached to the H700 controller. We did. Since it’s hard to find benchmarks for what you can expect out of local storage performance, I ran a few tests on it. I didn’t have a whole lot of time to run a comprehensive set of tests, as the box wasn’t mine and needed to be deployed, but I was able to gather some basic performance statistics.

These tests were conducted with the integrated H700 controller, running firmware 12.3.0-0032 A02. That controller has 1 GB of NVRAM cache, and in each test case the container was set to the default stripe element size of 64 KB, read policy of Adaptive Read Ahead, and write policy of Write Back. The drives were sixteen Seagate Savvio 15K.2 6 Gb/s 146 GB disks (model ST9146852SS), as shipped from Dell. Each container was allowed to finish its background scrubbing/initialization before the tests were run.
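For reference, the firmware revision, cache size, and virtual disk policies above can all be confirmed from the OS with Dell OpenManage Server Administrator’s CLI. This is a minimal sketch, assuming OMSA is installed and the H700 shows up as controller 0:

    # Controllers, firmware revision, and cache size
    omreport storage controller

    # Virtual disk properties: stripe element size, read/write policy,
    # and background initialization state
    omreport storage vdisk controller=0

    # Physical disks behind the controller (model, IDs, negotiated speed)
    omreport storage pdisk controller=0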

The host itself has four six-core Intel Xeon E7540 CPUs at 2.00 GHz and 128 GB of RAM. I used Red Hat Enterprise Linux 5 Update 5 for testing, kernel 2.6.18-194.3.1.el5, booted with the mem=2G parameter to limit the RAM available for filesystem cache (to save time). The test filesystem was an 800 GB logical volume managed by the Linux Logical Volume Manager, formatted as ext3 with “mke2fs -j -m 0 -O dir_index.” I used both iozone and bonnie++ to measure performance, running bonnie++ three times for each test and averaging the results. Bonnie++ was invoked with “bonnie++ -r 4096 -n 256 -u 0:0” and iozone with “iozone -Ra -g 4G -i 0 -i 1.”
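If you want to reproduce the setup, it looked roughly like the sketch below. The volume group name, logical volume name, mount point, and output file names are placeholders I’ve made up for illustration, and mem=2G went on the kernel boot line rather than being run as a command:

    # mem=2G was appended to the kernel line in grub.conf to cap the RAM
    # available for filesystem cache.

    # Create the 800 GB test volume and filesystem (names are placeholders)
    lvcreate -L 800G -n bench vg_test
    mke2fs -j -m 0 -O dir_index /dev/vg_test/bench
    mkdir -p /mnt/bench
    mount /dev/vg_test/bench /mnt/bench
    cd /mnt/bench

    # Three bonnie++ passes per configuration, averaged afterwards
    for run in 1 2 3; do
        bonnie++ -r 4096 -n 256 -u 0:0 >> /root/bonnie-results.txt
    done

    # iozone sweep of file sizes up to 4 GB, write (-i 0) and read (-i 1) tests
    iozone -Ra -g 4G -i 0 -i 1 > /root/iozone-results.txt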

I tested RAID10 in two different configurations, and RAID6, across all 16 disks. The first RAID10 configuration was as shipped from Dell, where the mirror pairs were SCSI IDs 00 & 01, 02 & 03, etc.; I call this the “vertical” setup. The second RAID10 configuration paired SCSI IDs 00 & 08, 01 & 09, etc.; I refer to that as the “horizontal” setup.
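If you want to build the “horizontal” layout yourself, one route is OMSA’s omconfig, recreating the virtual disk with the physical disks listed in the pair order you want. The sketch below is hypothetical: the pdisk IDs (written here as 0:0:0 through 0:0:15) and the assumption that the listed order determines the mirror pairs both depend on your backplane and firmware, so check the omconfig documentation for your release before trying it.

    # Hypothetical: recreate the RAID10 virtual disk with "horizontal" mirror
    # pairs (00 & 08, 01 & 09, ...). Verify pdisk IDs and pairing behavior
    # against your controller before running anything like this.
    omconfig storage controller controller=0 action=createvdisk raid=r10 \
        size=max stripesize=64kb readpolicy=ara writepolicy=wb \
        pdisk=0:0:0,0:0:8,0:0:1,0:0:9,0:0:2,0:0:10,0:0:3,0:0:11,0:0:4,0:0:12,0:0:5,0:0:13,0:0:6,0:0:14,0:0:7,0:0:15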

bonnie++ Results (in KB/s):

[Chart: Dell PowerEdge R910 16 Disk Test - bonnie++ results]

RAID6 is the clear winner in read performance, topping out around 955 MB/s, while the non-default RAID10 setup wins on writes at around 497 MB/s. The relative similarity of the write numbers across configurations makes me wonder if some other bottleneck is present. What is also interesting to me is the performance gain when I redid the default RAID10 setup as the “horizontal” 00 & 08, 01 & 09, etc. layout. It seems that by sticking with what Dell ships you are robbing yourself of 225 MB/s of read performance.

iozone Results:

[Chart: Dell PowerEdge R910 16 Disk Test - iozone - RAID6 - Read]

[Chart: Dell PowerEdge R910 16 Disk Test - iozone - RAID6 - Write]

Due to lack of time, I was only able to run iozone against my “horizontal” RAID10 setup:

[Chart: Dell PowerEdge R910 16 Disk Test - iozone - RAID10 - Read]

[Chart: Dell PowerEdge R910 16 Disk Test - iozone - RAID10 - Write]

In both cases you can clearly see the effects of the cache on performance. As the file sizes grew past what the 2 GB of filesystem cache could handle, both read and write performance dropped off considerably. It is interesting to see the marked read performance difference between the RAID6 and RAID10 tests, where the RAID10 cached performance was much greater. With more time I’d have liked to explore this further, but for now, this will have to do. 🙂