VMware vSphere 4 Thin Provisioning: Pros & Cons

vSphere 4’s thin provisioning is a pretty cool feature, but it has downsides, too. I was putting together a concise list of pros & cons for a customer, and I thought I’d share (especially given all the thin provisioning talk lately). Please leave me a comment if I’ve missed something.

Pros:

  • Saves disk space that isn’t actually being used by permitting overcommitment of storage, which means:
    • more VMs per datastore, which, for local datastores, means more VMs per host.
    • better utilization of expensive storage.
  • Smaller disk allocations translate into faster Storage VMotions, clones, and snapshot operations, since you are only copying what has actually been written.
  • Incredibly easy to convert to and from thin-provisioned disks, on the fly, using Storage VMotion (see the sketch after this list).
  • More flexible disk allocation strategies. VMs can be built with extra, unallocated space, making them easy to grow later without adding more virtual disks, while none of that space is consumed up front.
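
As an illustration of the Storage VMotion conversion mentioned in the list above, here is a minimal Python sketch using pyVmomi (VMware’s open-source Python SDK for the vSphere API). The vCenter address, credentials, VM name, and datastore name are placeholders, error handling is omitted, and the relocate spec’s "sparse" transform is used to rewrite the disks as thin during the move. Treat it as a sketch, not a turnkey script.

    # Sketch only: Storage VMotion a VM to another datastore and convert its
    # disks to thin ("sparse") in the process. Names and credentials below
    # are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="secret", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    def find_by_name(vim_type, name):
        """Return the first inventory object of the given type with this name."""
        view = content.viewManager.CreateContainerView(content.rootFolder,
                                                       [vim_type], True)
        try:
            return next(obj for obj in view.view if obj.name == name)
        finally:
            view.Destroy()

    vm = find_by_name(vim.VirtualMachine, "my-vm")             # placeholder
    target_ds = find_by_name(vim.Datastore, "thin-datastore")  # placeholder

    # The relocate spec's "transform" property controls how disks are written
    # at the destination: "sparse" produces thin disks, "flat" produces thick.
    spec = vim.vm.RelocateSpec()
    spec.datastore = target_ds
    spec.transform = "sparse"

    task = vm.RelocateVM_Task(spec)   # runs as a vCenter task
    Disconnect(si)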

Cons:

  • Requires VM hardware version 7. (Correction: not the case; changed block tracking is a VM hardware version 7 feature, but thin provisioning can be done with version 4 hardware, too.)
  • Thin-provisioned VMs cannot use some other advanced vSphere 4 features, such as Fault Tolerance (which requires eager-zeroed thick disks).
  • Normal maintenance operations, such as defragmentation or running sdelete, rapidly and irreversibly negate thin provisioning by causing blocks to be changed. This matters all the more because strategies that maximize deduplication by zeroing filesystem blocks wipe out almost all of the benefit of thin provisioning.
  • Overcommitment of storage adds the risk that a volume may fill, causing a denial of service for other VMs. This can happen through malicious behavior by a customer, through normal day-to-day use of the VMs, or through well-intentioned but uninformed behavior (such as running a defragmenter).
  • Thin provisioning may have performance concerns:
    • The gradual growth of a VMDK file will likely cause fragmentation, which may be a performance issue. On disk arrays that are already subject to fragmentation, such as those from NetApp, the effect may be more severe. However, Storage VMotion operations also serve to defragment virtual disks.
    • More VMs per LUN may introduce storage I/O performance issues.
  • General understanding of how filesystems work is low. Add to that a general lack of understanding of how thin provisioning works, and how it would interact with other technologies like deduplication and snapshots, and I can see the potential for colossal mishaps.

Some of the cons are mitigated with better monitoring strategies. vCenter has a number of new ways to monitor thin-provisioned VMs and to send alerts when datastores fill. However, if you’re doing deduplication on your storage array you might have to choose which technology to go with. Many people use sdelete or custom scripts to zero out empty filesystem space so that deduplication can identify and deduplicate free space. Running "sdelete -c" on a thin-provisioned 40 GB VMDK file causes it to grow to 40 GB, though. On the back end I know it’s being deduplicated very well, but on the front end it isn’t thin anymore, and it can’t be made thin again with Storage VMotion because all of those blocks have been "touched." Coupled with fragmentation and other performance issues, users of deduplicating arrays (NetApp, etc.) might consider not thin provisioning for now and work on improving their back-end deduplication rates instead.
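
On the monitoring point, here is a rough Python/pyVmomi sketch of the kind of check I mean: walk the datastores, use each datastore summary’s capacity, freeSpace, and uncommitted values to work out how overcommitted it is, and flag the ones to watch. Connection details are placeholders and the alert thresholds are arbitrary.

    # Sketch only: flag datastores where thin provisioning has overcommitted
    # space or free space is getting low. Placeholders and arbitrary thresholds.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="secret", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    GB = 1024 ** 3

    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        used = s.capacity - s.freeSpace
        # "uncommitted" is space promised to thin disks but not yet written,
        # so provisioned space (used + uncommitted) can exceed capacity.
        provisioned = used + (s.uncommitted or 0)
        overcommit = provisioned / float(s.capacity)
        pct_free = s.freeSpace / float(s.capacity) * 100
        flag = "  <-- watch this one" if overcommit > 1.0 or pct_free < 10 else ""
        print("%-20s %6.0f GB  free %5.1f%%  provisioned %3.0f%%%s"
              % (s.name, s.capacity / GB, pct_free, overcommit * 100, flag))

    Disconnect(si)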

P.S. If thin provisioning were coded to recognize zeroed blocks it would be a different story altogether. Then normal filesystem use (file creates, deletes, etc.), or use of sdelete, defragmenters, and the like, wouldn’t be a problem at all.
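
The "touched blocks" behavior is easy to demonstrate with an ordinary sparse file on Linux, which makes a reasonable stand-in for a thin VMDK: writing zeroes into it allocates real blocks that never get handed back. This is an analogy only (ext4 or similar, not VMFS), but it is exactly why sdelete and defragmenters inflate thin disks.

    # Analogy only: a sparse file behaves like a thin-provisioned disk.
    # Writing zeroes "touches" blocks and allocates them, the same way
    # sdelete -c or a defragmenter does inside a thin VMDK.
    import os

    path = "/tmp/thin-demo.img"
    size = 64 * 1024 * 1024          # pretend this is a 64 MB virtual disk

    with open(path, "wb") as f:
        f.truncate(size)             # logical size 64 MB, almost nothing allocated

    def allocated_mb(p):
        """Physical space actually allocated (st_blocks is in 512-byte units)."""
        return os.stat(p).st_blocks * 512 / (1024.0 * 1024.0)

    print("after create:  %5.1f MB allocated" % allocated_mb(path))

    # Now zero the whole thing, the way sdelete -c would zero free space.
    with open(path, "r+b") as f:
        f.write(b"\0" * size)

    print("after zeroing: %5.1f MB allocated" % allocated_mb(path))
    # The contents are still all zeroes, but the blocks stay allocated --
    # nothing recognizes them as "empty" and gives the space back.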


  • Thanks for the great summary! What are your thoughts on using VMware to handle the thin provisioning vs. using the array-side hardware to do it?

    We were getting ready to implement on the array side, but with this ability in vSphere we’re taking a fresh look at things.

    Thanks,

    @ddbrown

  • Great article!! Thank you for writing it.

    I don’t have a choice but to use VMware thin provisioning; we have a low-end SAN…

    When you mention defragmentation, do you mean running a defragger inside Windows, like Diskeeper or JKDefrag/MyDefrag (we use this one)?

    Thank you, Tom

  • Thanks guys!

    @David – as with all things, it depends on what you are looking for. I have a NetApp array and have just chosen to do the dedupe on the array side and forgo thin provisioning in VMware. That said, I am using it in a few cases for template & test VMs.

    @Tom – Yeah, I use MyDefrag, and that’s the sort of thing I meant.

  • EVERYONE who reads this article should submit a feature request to VMware to recode thin provisioning to recognize zeroed blocks. I just did this myself.
    Thank you, Tom

  • Hi Bob,

    “Requires VM hardware version 7.”

    I think that isn’t correct. I have several TP (thin-provisioned) VMs running on vSphere with hardware version 4.

    TP was even possible with ESX 3.5 (via the command line).

    Greetings from Germany
    Bj

  • You can create a VM with hardware version 4 on vSphere with a thin-provisioned disk.

  • @Bj — You’re right. Thanks! I’ve updated the post. I was thinking of changed block tracking.

  • When discussing the merits of “thin provisioning” it’s sensible to stress the second word rather than the first. I think the only long-term, real-world benefit anyone can expect to realize from thin provisioning is that it’s really easy to set up and tear down a VM. This is very useful in a lab/testing environment, or in fast-paced IT environments where someone needs an OS instance created almost immediately.

    Storage is so cheap now compared to I/O throughput that I think the disk savings are going to be minimal no matter what — if you’re running on SATA, for example, you’re much more likely to hit that 80 IOPS barrier before you hit the 1 TB one. For production workloads, your application data is going to be what takes up the majority of your disk usage anyway; 15GB here or there for a Windows install or 2-5GB for a Linux/Solaris install is a drop in the bucket. The most common size for a 15K FC disk these days seems to be 300 GB for 150-160 IOPS, which is closer to a good throughput/capacity ratio. SSD may change the game in the future as it becomes more cost-effective.
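
    To put rough numbers on that ratio (using the figures above as rules of thumb, not benchmarks):

        # Back-of-the-envelope IOPS-per-capacity comparison, using the rough
        # rule-of-thumb figures from this comment rather than measured values.
        disks = {
            "1 TB 7.2K SATA": {"iops": 80,  "gb": 1000},
            "300 GB 15K FC":  {"iops": 155, "gb": 300},
        }

        for name, d in disks.items():
            per_100gb = d["iops"] / (d["gb"] / 100.0)
            print("%-16s ~%4.1f IOPS per 100 GB" % (name, per_100gb))

        # SATA:   ~8 IOPS per 100 GB -> IOPS run out long before capacity does.
        # 15K FC: ~52 IOPS per 100 GB -> a much better throughput/capacity balance.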

    I do think that for testing environments where you do have enough VMs to kill your disk with OS instances alone, array-side deduplication isn’t a bad idea as long as it performs well. It does, however, succumb to some of the same disadvantages as VMware thin provisioning — as things move around, the representation that the inline dedupe sees changes accordingly. Chances are that, sticking with default settings, you’re going to be packing multiple files into whatever your dedupe implementation considers to be a single entry in its hash table, and this can cause things to fall out of whack in a hurry.

    Most dedupe implementations use rather large block sizes to hash, often >= 128K, and to really make it work you need to be very familiar with the lower-level details when tuning the block size of your filesystems. Most operating systems don’t let you customize the block size of the filesystem you’re installing onto unless it’s created in advance.
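
    A toy fixed-block example (plain Python, nothing array-specific) shows why that alignment matters: hash identical data in 128K chunks, shift one copy by a few hundred bytes, and the duplicate matches vanish.

        # Toy fixed-block dedupe: hash data in 128K chunks and count matches.
        # Real array dedupe is more sophisticated; this only shows the
        # alignment effect.
        import hashlib
        import os

        CHUNK = 128 * 1024                       # 128K dedupe block size

        def chunk_hashes(data):
            """Hash data in fixed-size chunks, as a simple block dedupe would."""
            return {hashlib.sha1(data[i:i + CHUNK]).hexdigest()
                    for i in range(0, len(data), CHUNK)}

        guest_a = os.urandom(8 * CHUNK)          # one guest's blocks
        guest_b = guest_a                        # a block-identical clone
        guest_c = b"\x00" * 512 + guest_a        # same data, shifted 512 bytes

        print("identical clone:", len(chunk_hashes(guest_a) & chunk_hashes(guest_b)),
              "of 8 chunks dedupe")              # all 8 match
        print("512-byte shift: ", len(chunk_hashes(guest_a) & chunk_hashes(guest_c)),
              "of 8 chunks dedupe")              # almost certainly 0 match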

    At a bare functional minimum, though, yes, dedupe is perfectly adequate for minimizing the on-disk impact of block-identical clones. And there’s certainly something to be said for its ability to pack together blocks full of zeroes.

  • Some backup/dedupe software products are able to recognize empty filesystem space, so you don’t need to zero out blocks. The same applies to the leftover blocks after you delete data from your VM: the references are removed, but the blocks remain filled until new data (and a new reference) is written to them.

  • I made the mistake of believing VMware’s whitepapers claiming that VMFS fragmentation isn’t made worse, but I’ve ended up with some very, very badly fragmented volumes containing SQL databases that run at about 5% of their expected speed (CPU use is low). Having seen this, I wouldn’t recommend thin provisioning at all. I’m Storage vMotioning them as I write this, and they are taking an age to move (presumably because each block is nowhere near the last).