This is post #13 in my December 2013 series about Linux Virtual Machine Performance Tuning. For more, please see the tag “Linux VM Performance Tuning.”
para-: a prefix appearing in loanwords from Greek, most often attached to verbs and verbal derivatives, with the meanings “at or to one side of, beside, side by side” (parabola; paragraph; parallel; paralysis), “beyond, past, by” (paradox; paragoge); by extension from these senses, this prefix came to designate objects or activities auxiliary to or derivative of that denoted by the base word (parody; paronomasia), and hence abnormal or defective (paranoia), a sense now common in modern scientific coinages (parageusia; paralexia).
Paravirtual drivers are drivers that don’t require the virtualization platform to emulate a real piece of hardware, such as an Intel E1000 NIC or an LSI Logic SAS SCSI adapter. They essentially cut out the middleman by ditching the emulation layer, which usually results in significant performance gains. The switch from an LSI Logic SCSI adapter to the pvscsi adapter often yields a 20% disk I/O performance jump. The switch from an Intel E1000 NIC to the vmxnet3 adapter delivers the same throughput at lower CPU utilization. And the nice thing is that VMware open-sourced the pvscsi and vmxnet3 drivers and got them into the mainline Linux kernel, so by now nearly all distributions have access to them even without installing the VMware Tools.
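If you want to check whether a guest already ships these drivers, a quick look at the kernel modules will tell you (a minimal sketch; the mainline module names are vmw_pvscsi and vmxnet3):

```
# Does the installed kernel ship the paravirtual modules?
# (mainline names: vmw_pvscsi for storage, vmxnet3 for networking)
modinfo vmw_pvscsi
modinfo vmxnet3

# Are they actually in use on a running VM?
lsmod | grep -E 'vmw_pvscsi|vmxnet3'
```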
Seems like an obvious win, right? Much of performance tuning seems obvious, but the obvious choice is often not the default setting, or not the setting you’re currently using. That’s especially true for these paravirtualized device drivers. For example, under VMware vSphere 5.1 the Red Hat Enterprise Linux 6 virtual machine defaults specify the paravirtual SCSI adapter (pvscsi) and the paravirtual NIC (vmxnet3), but neither the CentOS 6 nor the Oracle Linux 6 defaults do. Similarly, most recent versions of SUSE Linux Enterprise 11 support pvscsi and vmxnet3, but they aren’t the defaults there, either. And imagine what you’re running if you’ve done in-place OS upgrades…
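Not sure what a given VM is presenting right now? You can check from inside the guest, since the emulated and paravirtual devices show up with different PCI identifiers (a rough sketch):

```
# Emulated devices appear as LSI Logic SCSI controllers and Intel
# 82545EM (E1000) NICs; paravirtual ones as PVSCSI and VMXNET3.
lspci | grep -iE 'scsi|ethernet'
```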
Oh heck, I’ve got a bunch of VMs without paravirtualized devices! What now?
- If you are using modern provisioning techniques, such as configuration management tools like Puppet or Chef, build a new template and redeploy your applications. When you’re building a new template for Enterprise Linux variants, consider just setting the guest OS type to “Red Hat Enterprise Linux 6” so you inherit the paravirtual defaults, or do a custom install where you change the devices yourself.
- If you have VMs that have the default settings, you can change them with a little work and a reboot:
  - Older Enterprise Linux versions need some messing around in /etc/modprobe.conf; I’ve outlined this technique in my post on How to Change SCSI Controllers on Your Linux VM, and there’s a sketch of it after this list. Your new SCSI option will be the “pvscsi” module. For the NIC you will have to remove the old one and re-add a new one. The driver technique is similar, and depending on the age of your OS you might want to see if the VMware Tools installer will handle it for you.
  - With Enterprise Linux 6 and newer you can just switch the SCSI adapter type in the VM settings. To change the NIC type you need to remove the old NIC and add a new one of type vmxnet3.
- When you remove and re-add NICs you will likely encounter two problems:
  - Your MAC address will change. Certain terrible applications care about this. You may be able to work around it by setting the MAC address as a configuration property in the VM hardware config, or by forcing the MAC address through the OS-level driver (see the sketch after this list).
  - Your NIC won’t show up as eth0 anymore. The reasons for this, and the fix, are in my post “Why Does My Linux VM’s Virtual NIC Show Up as eth1?”; the short version is sketched below.
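For the older Enterprise Linux case above, the change looks roughly like this (a sketch for an EL5-style guest; note that the module is named vmw_pvscsi in mainline kernels, while the VMware Tools version calls itself pvscsi):

```
# In /etc/modprobe.conf, repoint the SCSI host adapter alias from
# the emulated LSI Logic driver to the paravirtual one:
#
#   alias scsi_hostadapter mptspi       <- old
#   alias scsi_hostadapter vmw_pvscsi   <- new

# Rebuild the initrd so the new driver is available at boot:
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)

# Then shut down, switch the controller type to VMware Paravirtual
# in the VM settings, and power back on.
```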
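For the MAC address problem, one OS-level approach on Enterprise Linux guests is to pin the old address in the interface configuration (a sketch; the address shown is a placeholder, use your VM’s original MAC):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0
# MACADDR overrides the hardware address when the interface comes up.
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
MACADDR=00:50:56:00:00:01   # placeholder -- your original MAC here
```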
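And for the eth1 problem: udev remembered the old NIC’s MAC as eth0, so the replacement came up as eth1. The full story is in the post linked above, but the short fix on an EL6-style guest is (a sketch):

```
# Remove the stale persistent-net rules; udev regenerates the file
# on the next boot and names the new vmxnet3 NIC eth0.
rm /etc/udev/rules.d/70-persistent-net.rules
reboot
```

Or edit that file by hand instead, if the VM has other NICs that need to keep their names.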
One other thing to remember is that VMware snapshots also capture the virtual hardware configuration, so before you do any of this work, make sure you have a good system backup and a good snapshot to roll back to. And, as always, try it out on a crash & burn VM first.
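If you have shell access to the ESXi host, one way to grab that snapshot is with vim-cmd (a sketch; the VM ID 42 is a placeholder you’d look up with the first command):

```
# Find the VM's ID on the host:
vim-cmd vmsvc/getallvms

# Snapshot it (arguments: vmid, name, description, includeMemory, quiesced):
vim-cmd vmsvc/snapshot.create 42 pre-pvscsi "before driver swap" 0 0
```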
Hi Bob – great post – I was recently asking the same question on the VMware forum:
Q: “If PV drivers are now considered best practice, why are they not the default?”
The consensus was that not all guest OSes support them natively – so still not a universal best practice (admin overhead vs. benefit)
Another good point was made:
“A reason why I usually do not use it for system disk is in a troubleshooting scenario. If using a boot cd with tools it will typically not have drivers to access the PVSCSI controller.”
https://communities.vmware.com/message/2327532
Happy 2014 all