I noticed this morning that the RAID containers on my new Dell PowerEdge x9xx servers don’t have their caching options enabled. Now, I understand that write caching is potentially risky, and I understand why Dell ships the PERC 5/i controllers without write caching. However, no read caching? That doesn’t make much sense.
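For reference, the cache policy on these controllers can be changed from Linux with Dell's OpenManage tools rather than rebooting into the controller BIOS. The controller and virtual-disk IDs below are placeholders, and the exact policy keywords vary by OMSA version, so treat this as a sketch and check `omconfig storage vdisk -?` on your own system:

```shell
# Hypothetical example: enable adaptive read-ahead and write-back caching
# on virtual disk 0 of controller 0 via Dell OpenManage. The IDs and
# policy values here are assumptions -- verify against your OMSA help.
omconfig storage vdisk action=changepolicy controller=0 vdisk=0 \
    readpolicy=ara writepolicy=wb

# Confirm the new policy took effect:
omreport storage vdisk controller=0
```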
While I was mucking around I did some benchmarking of the controller under different cache settings. The test machine was a Dell PowerEdge 2950 with six 146 GB 10K SAS disks, running Red Hat Enterprise Linux 4 Update 4. The filesystem was a 20 GB ext3 volume created in LVM, mounted with data=writeback. I used bonnie++ to generate load, which for that tool means sequential writes and reads. Each test was run five times and averaged, and the machine was rebooted between configurations to change the controller settings. The machine was in multiuser mode (runlevel 3), but I was the only user on it and nothing else was running during the tests.
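For anyone wanting to reproduce this, the setup can be sketched roughly as follows. The volume-group and mount-point names are made up, and the bonnie++ flags shown are the common ones rather than my exact invocation:

```shell
# Rough sketch of the test setup (all names are illustrative).
lvcreate -L 20G -n benchlv vg0            # 20 GB logical volume in VG "vg0"
mkfs -t ext3 /dev/vg0/benchlv
mount -o data=writeback /dev/vg0/benchlv /mnt/bench

# Run bonnie++ as an unprivileged user; -s sets the working file size,
# which should be at least twice RAM so the page cache doesn't mask the disks.
bonnie++ -d /mnt/bench -s 8g -u nobody
```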
Note: if you have a good tool for testing random I/O, please leave me a comment with a link. The most promising tool I could find was PostMark, from NetApp, but it looks like they have removed it from their site. I could write something myself, too, but I didn't have time for that here.
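In lieu of a proper tool, a crude random-read test can be hacked together with dd. Everything here (file name, sizes, iteration count) is made up for illustration, and it only exercises reads:

```shell
#!/bin/bash
# Crude random-read micro-benchmark: read single 4 KiB blocks at random
# offsets in a scratch file. All names and sizes here are illustrative.
FILE=/tmp/randtest.dat
BLOCKS=25600                              # 100 MiB worth of 4 KiB blocks
dd if=/dev/zero of="$FILE" bs=4k count=$BLOCKS 2>/dev/null

i=0
while [ $i -lt 100 ]; do
    # $RANDOM is 0..32767, so combine two draws to cover every offset
    off=$(( (RANDOM * 32768 + RANDOM) % BLOCKS ))
    dd if="$FILE" of=/dev/null bs=4k skip=$off count=1 2>/dev/null
    i=$((i + 1))
done
echo "completed 100 random 4 KiB reads"
rm -f "$FILE"
```

Wrapping the loop in `time` gives a rough IOPS figure, though without direct I/O the page cache will absorb repeated reads, which is exactly why a real tool like PostMark does much more than this.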
Anyhow, the results are fairly obvious. The graph:

[graph: bonnie++ sequential read and write throughput for each cache setting]
The conclusion, at least for sequential reads and writes, is to turn your cache on for maximum performance. No surprises there. :-)
Update: The array was configured as RAID 5 across all six disks.