Standard Server Hardware

Joe Brancatelli’s recent article entitled “Southwest Airlines’ Seven Secrets for Success” points out one of Southwest’s biggest cost-saving practices: flying one type of plane.

Unlike the network carriers and their commuter surrogates, which operate all manner of regional jets, turboprops, and narrow-body and wide-body aircraft, Southwest flies just one plane type, the Boeing 737 series. That saves Southwest millions in maintenance costs—spare-parts inventories, mechanic training and other nuts-and-bolts airline issues. It also gives the airline unique flexibility to move its 527 aircraft throughout the route network without costly disruptions and reconfigurations.

Standardization of server hardware pays off for organizations, too. In the somewhat distant past, each member of my team was responsible for specifying and ordering servers for their own projects. There was some documentation on how to do it, and some standards, but they were followed pretty loosely. As a result we ended up with servers sporting different disk sizes, single power supplies, different warranties, with or without hardware RAID controllers, and with wildly variable CPU speeds. Each machine was a one-off, and when you’re talking about 500 machines that becomes a complete nightmare. Sure, a project might save $200 by ordering a slower CPU, but that $200 disappears quickly in staff time for documentation and other work that has to be done one server at a time. Nothing was easily interchangeable between servers, and the servers weren’t interchangeable with each other. Ick.

Now my team has two standard server configurations that we order whenever we need physical hardware. We buy all of our x86 gear from Dell, so we have standard configurations for the PowerEdge 1950 and 2950, where only the number of CPUs and the amount of RAM are variable. Everything else is standardized: two power supplies, hardware RAID (RAID 1 or RAID 5) on 146 GB 15K RPM disks, 3.0 GHz Intel Xeon 5450 CPUs, a five-year warranty, and so on. Dell has even given us a custom page on their web site that lets us order machines in these standard configurations. An admin selects the type of machine, the number of CPUs, and the amount of RAM, and submits the order. All the standard choices are pre-selected and can’t be changed.
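To make the “menu of servers” idea concrete, here’s a minimal sketch (in Python) of how standard configurations like these could be expressed as data, with only CPU count and RAM left as order-time choices. The field names, the RAM options, and the pairing of RAID level to model are illustrative assumptions of mine, not Dell’s actual ordering system.

```python
# Illustrative sketch only: field names, RAM options, and the RAID level
# assigned to each model are assumptions, not Dell's actual ordering system.

STANDARD_CONFIGS = {
    "PowerEdge 1950": {
        "power_supplies": 2,
        "raid": "hardware RAID 1",
        "disks": "146 GB 15K RPM",
        "cpu_model": "3.0 GHz Intel Xeon 5450",
        "warranty_years": 5,
        "cpu_choices": (1, 2),
        "ram_gb_choices": (4, 8, 16, 32),   # assumed menu of RAM sizes
    },
    "PowerEdge 2950": {
        "power_supplies": 2,
        "raid": "hardware RAID 5",
        "disks": "146 GB 15K RPM",
        "cpu_model": "3.0 GHz Intel Xeon 5450",
        "warranty_years": 5,
        "cpu_choices": (1, 2),
        "ram_gb_choices": (4, 8, 16, 32),   # assumed menu of RAM sizes
    },
}


def build_order(model: str, cpus: int, ram_gb: int) -> dict:
    """Build a complete order spec; only CPU count and RAM are variable."""
    config = STANDARD_CONFIGS[model]
    if cpus not in config["cpu_choices"]:
        raise ValueError(f"{cpus} CPUs is not a standard option for {model}")
    if ram_gb not in config["ram_gb_choices"]:
        raise ValueError(f"{ram_gb} GB RAM is not a standard option for {model}")
    fixed = {k: v for k, v in config.items() if not k.endswith("_choices")}
    return {"model": model, "cpus": cpus, "ram_gb": ram_gb, **fixed}


if __name__ == "__main__":
    # A valid order: everything except CPUs and RAM comes from the standard.
    print(build_order("PowerEdge 2950", cpus=2, ram_gb=16))
```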

Despite initial reservations about a one-size-fits-all hardware policy, in practice we rarely need to go outside the standard configurations. Like Southwest, when you’re flying all the same servers you can stock common spare parts. You can even stock spare servers and skip the costly 24×7 hardware replacement warranties, since servers are more interchangeable. With a standard warranty we know exactly what type of service we’re getting and when it ends, and by getting the full five-year warranty up front we don’t have to spend time later renewing warranties.

Standardization like this seems like an obvious idea, but many organizations are like the frog in the soup pot: you don’t notice your demise until it’s too late. When you go from 50 servers to 500 in just a few years it’s easy to focus solely on keeping up with demand. But eventually you need to think about saving time and money. Standardizing wherever possible is a great way to save both.

7 thoughts on “Standard Server Hardware”

  1. I suspect that the bigger benefit of standard hardware will occur when you can implement standard server operating system images.

    In theory, you should be able to maintain a very small number of hardened, patched, standardized sysprep’d boot images. That has the potential to make new server deployments fast, easy, and error-free.

  2. Is there a trick to doing a standard image on non-standard hardware? I’d have figured you’d run into all kinds of driver and boot issues.

    In any case, the ‘menu of servers’ is a good idea.

    –Mike

  3. Since we work with Dell hardware there were just a few drivers to worry about. Our Windows images just need updating when new models ship, or if someone ordered a strange model (which doesn’t happen anymore).

    For Linux we use Kickstart, which basically does a fresh install each time. We rely on the drivers built into RHEL. After the Kickstart, the machine hardens and preps itself automatically with our configuration management tools.

  4. Lovely plan, and I try to follow it.

    However, I’ve seen many a department where the $200 saved is much more real than the more-than-$200 spent in time and effort maintaining a disparate infrastructure, because IT is expected to just absorb that work.

  5. This is really the reason we decided to take the big financial plunge and go with blade servers. We bought two enclosures ($5k each) and 20 servers ($2k each) with identical configurations, and it has proved to be a far better administrative experience than a piecemeal rack build.

    You also eliminate my pet peeve of differing server depths, where a short server inevitably winds up between two longer ones and you can’t reach its cable ports.
