I’ve spent the morning looking at the new server models from Dell, based on the Intel Xeon 5500 series of CPUs (Nehalem-EP). These things look sweet, but there are some interesting caveats. A few of my observations so far:
1. Intel has killed the front-side bus and in its place implemented QuickPath Interconnect (QPI), a competitor to AMD’s HyperTransport. Its speed is measured in GigaTransfers per second (GT/s), and is 4.8 GT/s, 5.86 GT/s, or 6.4 GT/s per direction, which (per the Wikipedia article I linked to) works out to 9.6 to 12.8 GB/s per direction per link. Cool, but most people are going to pick a CPU based on price point rather than link speed, given that everything in the 5500 series smokes all previous Intel CPUs anyhow.
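If you want to sanity-check those numbers yourself, the conversion is simple arithmetic: each QPI link carries 16 data bits (2 bytes) per transfer, per direction. A quick sketch:

```python
# QPI moves 2 bytes (16 data bits) per transfer, per direction.
BYTES_PER_TRANSFER = 2

for gt_per_s in (4.8, 5.86, 6.4):
    gb_per_s = gt_per_s * BYTES_PER_TRANSFER  # GB/s per direction
    print(f"{gt_per_s} GT/s -> {gb_per_s:.2f} GB/s per direction, "
          f"{gb_per_s * 2:.1f} GB/s bidirectional")
```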
In multi-CPU configurations the CPUs themselves have a QPI link between them that is used to share L3 cache and memory information. This is a big win in the war against processor affinity, which promotes cache hits but is a management pain. For things like virtualization, where VMs are executing on different CPUs all the time, having cache data available globally will speed things up. No matter where you execute, your cached data is close.
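For a sense of what that management pain looks like, here’s the kind of manual pinning you’d otherwise be doing. A minimal sketch, assuming a Linux host and Python 3.3+ (os.sched_setaffinity is Linux-only):

```python
import os

# Pin this process to cores 0-3 (say, one socket) so its working set
# stays warm in that socket's cache -- the babysitting that shared
# cache visibility over QPI makes less necessary.
os.sched_setaffinity(0, {0, 1, 2, 3})

# Confirm which cores we're now allowed to run on.
print(os.sched_getaffinity(0))
```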
2. There are options for 1333 MHz RAM, a speed you only get by interleaving across multiples of six DIMMs. Multiples of six DIMMs, combined with what seems to be a lack of 8 GB 1333 MHz DIMMs, means a maximum of 24 GB of RAM at that speed. Thankfully 800 MHz and 1066 MHz RAM is available, and you can cram 144 GB and 96 GB, respectively, in a server.
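Those capacity numbers fall out of the DIMM population rules. A back-of-the-envelope sketch, assuming the usual two-socket Nehalem-EP layout of three memory channels per socket and three DIMM slots per channel (the per-channel limits at each speed are my reading of the platform, not Dell-specific):

```python
# Rough capacity math for a two-socket Nehalem-EP box: 2 sockets
# x 3 memory channels x 3 DIMM slots per channel = 18 slots total.
CHANNELS = 2 * 3

# (speed, DIMMs usable per channel at that speed, largest DIMM in GB)
configs = [
    ("1333 MHz", 1, 4),   # one DIMM per channel, no 8 GB 1333 parts yet
    ("1066 MHz", 2, 8),
    ("800 MHz",  3, 8),
]

for speed, dimms_per_channel, dimm_gb in configs:
    dimms = CHANNELS * dimms_per_channel
    print(f"{speed}: {dimms} x {dimm_gb} GB = {dimms * dimm_gb} GB")
```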
One drawback is that the memory controller now lives on the CPU die, so with one CPU you can only use the DIMM slots attached to that socket. For Dell right now that caps a single CPU off at 12 GB of RAM. This probably isn’t the end of the world, as most people stuffing 16+ GB of RAM in a machine will likely opt for two CPUs anyhow. It does point to where things are going with virtualization, though. If you need less CPU than two quad-cores, you’re probably also running a hypervisor of some sort, so it doesn’t pay for Intel to worry about the low-CPU, high-RAM users.
3. HyperThreading is back. Might as well have it — you’ve got all this extra CPU sitting there, sleeping, and it could be doing something useful instead. If they could just implement SETI@Home in the CPUs for those truly idle times…
4. Turbo Boost is a new feature, working in concert with the new power management technologies, where a CPU can essentially overclock itself when it has the thermal and power headroom. These CPUs also have new sleep states, including much deeper sleep for the CPU and RAM, so when the server is idle it consumes a lot less power. It’s also interesting that Intelligent Power Technology lets you customize power consumption with per-application profiles.
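Those deeper sleep states are exposed to the operating system as C-states. As a sketch (assuming a Linux host with the cpuidle sysfs interface; exact paths can vary by kernel), you can see what the kernel knows about them:

```python
import glob
import os

# List the idle (C-)states the kernel exposes for CPU 0, with the wakeup
# latency of each -- deeper states save more power but wake more slowly.
for state in sorted(glob.glob("/sys/devices/system/cpu/cpu0/cpuidle/state*")):
    with open(os.path.join(state, "name")) as f:
        name = f.read().strip()
    with open(os.path.join(state, "latency")) as f:
        latency = f.read().strip()
    print(f"{os.path.basename(state)}: {name}, wakeup latency {latency} us")
```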
5. All that Intel Virtualization Technology that we’ve been hearing about is in these things. Extended Page Tables, VT-c (hardware-assisted network I/O), VT-d (directed I/O, for dedicating I/O devices to VMs), VT-x, and FlexMigration are all in here, conspiring together to make VMs fly.
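The CPU-side pieces show up as feature flags you can check for. A quick sketch for a Linux host (VT-d lives in the chipset, so it won’t appear here; “vmx” is VT-x and “ept” is Extended Page Tables):

```python
# Look for the CPU-side virtualization feature flags in /proc/cpuinfo.
# Linux-only; VT-d is a chipset feature and won't show up here.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            break

for flag in ("vmx", "ept", "vpid"):
    print(f"{flag}: {'present' if flag in flags else 'absent'}")
```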
6. They’ve implemented PCI Express 2.0, which doubles the bandwidth of the slots. Now a 2.0 x8 slot can perform as fast as a 1.0 x16 slot. Most of us will probably only take advantage of this on our workstations/desktops, with more awesome video cards, but for some people with HPC clusters this gets important.
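The equivalence is easy to check: after 8b/10b encoding overhead, PCIe 1.x moves 250 MB/s per lane per direction, and PCIe 2.0 doubles that to 500 MB/s. A quick sketch:

```python
# Per-lane, per-direction throughput after 8b/10b encoding overhead:
# PCIe 1.x signals at 2.5 GT/s -> 250 MB/s; PCIe 2.0 at 5 GT/s -> 500 MB/s.
MB_PER_LANE = {"1.0": 250, "2.0": 500}

for gen, lanes in (("1.0", 16), ("2.0", 8)):
    gb = MB_PER_LANE[gen] * lanes / 1000
    print(f"PCIe {gen} x{lanes}: {gb:.1f} GB/s per direction")
```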
Dell in particular has used this rollout to launch its new management software, replacing the clunky IT Assistant, and to introduce new options to the server line, like solid-state disks. Overall, though, I have to conclude that no matter what vendor you use, if you’re building out your own private cloud these new CPUs are pretty sweet building blocks.
My last few VMware hosts have been HP DL585 G5’s with 96GB of RAM. HP released their Xeon 5500 VMmark results yesterday and they’re staggering:
DL585 G5 (16 cores / 16 threads, 2.8 GHz Opteron 8386 SE): VMmark v1.1 20.43 @ 14 tiles
DL370 G6 (8 cores / 16 threads, 3.2 GHz Xeon W5580): VMmark v1.1 23.96 @ 16 tiles
Guess I’ll be looking at these the next time we go to the well. More performance at half the VMware licensing cost, since that’s two sockets instead of four!
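To put rough numbers behind that (a back-of-the-envelope sketch; VMware licensed per socket at the time, and the socket counts follow from the core counts above):

```python
# Normalize the VMmark scores by socket count, since VMware licensing
# was per socket. Socket counts inferred from the core counts above.
results = {
    "DL585 G5 (4 x quad-core Opteron 8386 SE)": (20.43, 4),
    "DL370 G6 (2 x quad-core Xeon W5580)": (23.96, 2),
}

for box, (score, sockets) in results.items():
    print(f"{box}: {score / sockets:.2f} VMmark points per socket")
```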
Yeah, all mine have been R900s with 96 GB of RAM. I *cannot* wait until a 7500 series CPU drops. OMFG. Plus 8 vCPUs in a VM, yeah.
No reason to do anything but virtual.
A friendly piece of advice – take the time to learn to different between “it’s” and “its”. People who write blogs should know this.
I’ll just leave this as: “typos happen.” It’s just an artifact from a rewrite of that paragraph, and my editor (me) didn’t catch it.
You obviously aren’t a regular reader here (I bitch about homophone abuse periodically) so thanks for stopping by.
@Jon Forrest:
A friendly piece of advice: “Different” is an adjective, “differentiate” is a verb, and “irony” is a noun.
Nice — I hadn’t noticed that.
Some statement about rocks and glass houses goes here. Perhaps people at UC Berkeley don’t believe in glass houses, though.
The main question I have is whether I trust my company to version 1.0 hardware. QPI is version 1.0 hardware, and VMware beats the tar out of hardware. In the many years I have worked in IT, I have never had version 1.0 hardware handle the loads that I need it to without some serious issues.
All hardware is version 1.0. Unless it’s some showstopping problem which they’ve somehow managed to miss in internal testing, all hardware point releases are done in firmware.
I very rarely have hardware issues like you describe, but I am quite vigilant about keeping my hardware’s firmware up to date, which seems to help a lot.