VMware vSphere users should remember that when you provision a VM, the space it consumes on disk includes a swap file (.vswp) equal in size to the VM’s allocated RAM. So if you have 96 GB of VM memory allocated across your running VMs, 96 GB of your datastore is consumed as swap, even if those VMs are only actively using 2 GB of RAM.
Yet another argument against memory overcommitment, in my opinion. If you right-size your VMs you save not only RAM but storage as well.
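To make the arithmetic concrete, here’s a quick back-of-the-envelope sketch in Python. The VM names and sizes are made up for illustration, and it only models the default case with no memory reservations:

    # Hypothetical inventory: (VM name, configured memory in GB, reservation in GB)
    vms = [("web01", 32, 0), ("db01", 32, 0), ("app01", 32, 0)]

    # By default (no reservation), each VM gets a .vswp equal to its configured RAM.
    swap_gb = sum(mem - res for _, mem, res in vms)
    print(f"Datastore space consumed by .vswp files: {swap_gb} GB")  # 96 GB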
Totally agreed on avoiding memory overcommitment. I like to think of VM swap as one beneficiary of the free VMFS space that thin-provisioned VMDKs create.
I try to keep our datastores at least 10% free at all times to account for thin-provisioned VMDK growth as well as VM swap disk usage.
Remember, the size of the vswp file is the difference between the VM’s configured memory and its memory reservation. If you have a VM with 8 GB of RAM and an 8 GB reservation, your vswp file will be 0 bytes.
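In other words, a minimal sketch of the sizing rule, assuming the standard configured-memory-minus-reservation behavior:

    def vswp_size_gb(configured_gb, reservation_gb=0):
        # Default .vswp size: configured memory minus the memory reservation.
        return max(0, configured_gb - reservation_gb)

    print(vswp_size_gb(8))     # 8 -> 8 GB VM, no reservation
    print(vswp_size_gb(8, 8))  # 0 -> fully reserved, no swap file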
@Rick: excellent point. If you tell ESX that the VM is entitled to stay resident in memory, it won’t need a swap file. Of course, the downside is the extra management overhead of tracking those reservations, and many shops I’ve seen aren’t using reservations and limits at all, which is what prompted this post. “Why am I using 1 TB more disk than my VMs’ disk files add up to?!” Well…
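For anyone who wants to script that, here’s a rough sketch using VMware’s pyVmomi Python SDK. The vCenter hostname, credentials, and VM name are placeholders, so treat this as an illustration of the approach rather than a drop-in tool:

    # Sketch: give a VM a full memory reservation so its .vswp shrinks to zero.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="password", sslContext=ctx)

    # Find the target VM by name (placeholder name).
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "web01")

    # Reserve all configured memory; swap = configured - reservation = 0.
    spec = vim.vm.ConfigSpec()
    spec.memoryAllocation = vim.ResourceAllocationInfo(
        reservation=vm.config.hardware.memoryMB)
    vm.ReconfigVM_Task(spec=spec)

    Disconnect(si)

Note that the new reservation affects the .vswp size at the next power-on, since the swap file is created when the VM powers on.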
Does using memory compression in 4.1 affect on-disk swap space?
@Matt: No, the compression cache lives only in memory, as best I understand it, though I haven’t verified it specifically (I just tore down my 4.1 beta test cluster for a rebuild, of course).