Why I Like Virtualization (And Why Hardware Sucks)

I was asked why I like virtualization and why chroot jails aren’t a better way to do things, at least on UNIX-like OSes. To figure out why I like virtualization, let’s start with what I don’t like about hardware:

  • Failures. Something is always going wrong, whether it’s a fan, disk, power supply, etc. More servers means more failures. We buy warranties to help with this, but warranties cost money. It takes time to deal with failed components, too.
  • Firmware. It is hard to update firmware levels. Every device is different, and a bunch of update methods end up requiring you to go out to the box with a USB stick or a floppy disk. That takes a lot of time, usually at times of the day I’d rather be somewhere else (like sleeping).
  • Cables. I hate cabling. It costs a lot of money, a foot at a time. It gets tangled. It gets unplugged. It gets mislabeled. It takes a lot of time and vigilance to do right.
  • KVM. My KVM system is IP based. It uses Cat5 cable and has USB dongles that attach to the hosts. It costs a ton of money. The dongles have strange incompatibilities which make it an adventure to connect a server. It’s also another cable to manage, another system to maintain, another drain on your time.
  • Racks. Racking a server means I have to go to my data center, which may not be in the same building or city as me. I have to worry about available rack space, power in the rack, and cooling. I have to worry about two-post versus four-post racks, which type of mounting holes they have, and whether I have the right screws. Racks cost a lot of money, too.
  • Power. I hate power cords. They get tangled, messed up, unplugged. We order short power cords to help with that, but those cost money. To keep things running we have a UPS. UPSes are expensive and require maintenance. Speaking of money, I hate paying power bills, too.
  • Cooling. Cooling requires equipment, which in turn requires maintenance. It also requires power. Maintenance and power require money. Did I mention that I don’t like giving other people money? I want to keep it and buy myself cool things. :-)

It basically comes down to money and time. Time is really money, though, because to get more time you have to hire someone to help.

So how do you reduce these costs? Simple: have less hardware. Why can’t we have less hardware?

  • Applications aren’t good at sharing. They require specific versions of other software, different from what other applications require. For example, one application written in Perl might require DBD::Oracle 1.14 while another requires 1.19. Now I need two different copies of the module and it isn’t simple anymore, especially if the applications assume that DBD::Oracle will be installed in /usr/lib/perl5 (see the sketch after this list).
  • It can be hard to figure out performance problems on a machine that is doing a lot of different things.
  • It is hard to tune a machine for performance with multiple different applications. Do I tune for Apache or MySQL?
  • Customers have wildly different security requirements.
  • Customers have wildly different maintenance window needs.
  • Customers want to build clusters. How do you share a cluster with an application that isn’t clustered?
  • Customers just don’t want to share. They don’t want anything to do with another project or customer. They want their own machines and want to have their way with them. Coordination between customers is sometimes impossible.
  • Customers want separate development, test, QA, and production environments. They want to be able to do load testing and other crazy things without impacting or being impacted by someone else’s software. Unfortunately, development, test, and QA environments will sit mostly idle over their lifespans.
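
To make that version-conflict bullet concrete, here is a minimal sketch of the usual workaround, assuming each application ships its own private Perl module tree (the /opt paths and script names are made up for illustration):

    # Each app gets its own module tree so the two DBD::Oracle versions
    # never collide in the system-wide /usr/lib/perl5.
    PERL5LIB=/opt/app1/perl5 perl /opt/app1/bin/report.pl   # finds DBD::Oracle 1.14
    PERL5LIB=/opt/app2/perl5 perl /opt/app2/bin/loader.pl   # finds DBD::Oracle 1.19

It works, but now there are two copies of the module to install, patch, and track, and the whole arrangement falls apart the moment an application hard-codes /usr/lib/perl5.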

Approaches like chroot jails are great for single, well-known applications. The more applications you need to run in a chroot jail, though, the harder it gets to maintain. You end up needing second, third, and fourth copies of libraries, binaries, etc., especially if you’re trying to chroot interactive users. Automatic patching tools like yum and up2date don’t update your jails, so you have to do that manually. The jail doesn’t address things like differing maintenance windows, or any of the non-technical problems of sharing a machine. This approach may have performance benefits, but to be honest performance is usually not a big factor.
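
To give a sense of why jail maintenance balloons, here is a rough sketch of what it takes to get even a single interactive shell working inside a chroot, assuming a Linux host with GNU coreutils (the /srv/jail path is just an example):

    # Every binary the jail needs, plus every shared library it links
    # against, has to be copied into the jail's directory tree.
    mkdir -p /srv/jail/bin
    cp /bin/bash /srv/jail/bin/
    # ldd lists the libraries bash needs; copy each one in, keeping its path
    for lib in $(ldd /bin/bash | grep -o '/[^ ]*'); do
        cp --parents "$lib" /srv/jail/
    done
    chroot /srv/jail /bin/bash

Multiply that by every binary and library your users expect, and remember that yum and up2date will patch the originals but not the copies inside the jail.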

Most of the problems in sharing a machine are actually problems with sharing the operating system. Virtualization decouples the hardware from the operating system, and because of that we can solve all the problems of sharing a machine by choosing not to share at all. No chroot jails to maintain, no worries about versions of software, no endless coordination meetings just to schedule a reboot of the server.

So why do I like virtualization? It lets me get rid of hardware but doesn’t force me to manage complex situations that arise from a shared OS. For the operating system it is business as usual, which means relative simplicity and well-understood processes. I like that.

Comments on this entry are closed.

  • Excellent post. For my situation, seeing as I work for a K-12 school district, it’s very difficult to justify the up-front costs of virtualization. Between the hardware cost of a beefy Dell 6000 series box and the high cost of licensing VMware, it’s just too much for us. If our district were twice the size it is, we would be getting to the point where virtualization might be doable.

    My only real option is to wait and see what Novell does with Xen in the next couple of years. That might give me an avenue to explore the technology and save some bucks on power and cooling. I’m maxing out the cooling right now with one rack of servers, and we managed to blow out a 20 amp fuse earlier this year; all of our machines lost power when we tripped the breaker on the generator.

    Sorry for the rambling… ;)

  • Thanks Bob. Very nice article. I don’t know the current state of virtualization software: whether the free and open-source options (Xen/QEMU?) are good enough, and whether they support migrating a live system to another machine. If they can already do that, then that’s awesome.

  • Pooya: Xen is pretty good and reliable. I trust a vendor like Red Hat to provide us with a good (and stable) Xen environment.

    Live migration is supported, but it requires some kind of shared storage. Some setups even boot from a shared drive, so the hardware can remain diskless.

    Of course, managing shared storage makes the deployment more complex. But you’re reading the Lone Sysadmin and already know that ;-) Another issue is that a single shared storage node becomes a single point of failure for all your VMs. How do you handle this SPOF in your environments, Bob?

    I guess everything has its pros and cons.

  • Right now storage is the Achilles’ heel of virtualization. There is no way around your storage as a single point of failure… :-(

  • Well, I guess in theory storage can also be distributed. Hmm… what was that? Coda? Lustre? I should look into them to see if they’re reliable.

  • I’ve just started reading the short paper on cooling efficiency being presented at USENIX this week [Cullen Bash and George Forman: “Cool Job Allocation…”].

    They suggest some neat ideas for future data centres, such as dynamically migrating VMs to move jobs around and improve cooling efficiency.
