For planning purposes I just did some power draw testing of a Dell PowerEdge R610: dual Intel X5550 CPUs, 24 GB of RAM, four SSDs attached to the PERC6/i, and dual 717 Watt power supplies. My testing methodology was to measure the draw with a Fluke 322 clamp meter, both at idle and while running a stress test under Red Hat Enterprise Linux 5 (stress -c 32 -d 8 -i 8 -m 16). I did this with one and then two power supplies active.
1 PS, idle: 0.65 Amps @ 202.3 Volts = 131.5 Watts
1 PS, loaded: 1.51 Amps @ 202.3 Volts = 305.5 Watts
2 PS, idle: 0.35 Amps @ 202.3 Volts = 70.8 Watts each (total of 141.6 Watts)
2 PS, loaded: 0.77 Amps @ 202.3 Volts = 155.8 Watts each (total of 311.6 Watts)
Virtualization users who didn’t see the VCritical commentary on “Idle RHEL Hypervisors save power?” might want to check that out, since these numbers directly support Mr. Gray’s argument. An idle server drawing 45% of the power of a loaded server is a pretty solid argument for VMware DPM.
Also note that while the power supplies are labeled as 717 watts, the maximum draw I recorded was less than half that per supply. Building in some overhead is a good idea, but sizing from the label ratings isn't, because you will overbuild your infrastructure.
Update: As was pointed out in the comments, you can use the Dell iDRAC web interface to find the current power consumption of the 11th generation Dells. However, it appears to have accuracy problems at the low end (idle), though it is reasonably usable at the top end. If you're serious about measuring this stuff I'd still get a meter.