Hey! Go Vote For This Blog!

Hey readers, I’m on the ballot for the top 25 virtualization blogs this year. If you are a virtualization person (and who isn’t these days?), would you go to http://vote.vsphere-land.com/ and vote for the blogs you read? It takes about a minute, tops. This blog is “The Lone Sysadmin (Bob Plankers)” in the middle column with all the other “The” blogs, and I’d appreciate you including me in your 10 votes, if you can. I’m an independent blogger, too, but didn’t make the independent list. Independent bloggers are important to the computing community because we aren’t required by our employers to hold certain opinions. Regardless, it’s stuff like this that keeps me interested in blogging, because it’s a way to …

Read More

Can't Change Virtual Disk Formats When Targeting a Datastore Cluster

As I work more and more with vSphere 5, I am finding a few anomalies. One of them appears to be a bug: you cannot switch a VM’s disk format during a storage vMotion when you target a datastore cluster. To be more precise, it looks like you should be able to, but the change never actually happens. The workaround is to disable Storage DRS for that VM, target a datastore directly, then edit the Storage DRS settings afterwards to re-enable Storage DRS for that VM. This is what it looks like when I try. I select “thin provision” as the virtual disk format, choose a datastore cluster (in this case it’s my “Tier 2” cluster), and click next: …

Read More

Change the Default PSP in VMware vSphere 5

One thing I do to my VMware ESXi hosts is set the default Path Selection Policy (PSP) for certain Storage Array Type Plugins (SATPs) to do the right thing. This eliminates the need for me to reconfigure each datastore’s multipath settings on each host, and helps guarantee that a new LUN added by someone other than me will function correctly from the start. Consider it part of my “make it easy to do the right thing” sysadmin mantra. This has been covered by others at various points for older vSphere versions, but the vMA and esxcli syntax changed somewhat with version 5, so here are the commands I use. I have two different arrays, one that is active/active without a specific SATP (so …
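The excerpt cuts off before the commands themselves, but as a rough sketch of the vSphere 5 era esxcli syntax involved (the SATP and PSP names below are examples, not necessarily the ones from the post; check what your hosts actually use with the list command first):

    # show the SATPs and their current default PSPs
    esxcli storage nmp satp list
    # example only: make round-robin the default PSP for a generic active/active SATP
    esxcli storage nmp satp set --satp VMW_SATP_DEFAULT_AA --default-psp VMW_PSP_RR

Run from the vMA, you’d add --server <host> (or set a vifptarget first) and repeat per host. Changing a SATP’s default PSP typically only affects LUNs claimed after the change; existing devices keep their current PSP until they are reclaimed or the host reboots.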

Read More

Use 'for' Loops with the vSphere Management Assistant

The claim to fame of the VMware vSphere Management Assistant (vMA) is that it has a UNIX shell and the vSphere CLI installed, which makes it handy for a lot of things and makes cutting & pasting commands really easy when it’s paired with a decent SSH client. One of my favorite ways to use it is with ‘for’ loops in the shell, to make the same change to all of my ESXi hosts. Let’s say you have a list of servers you want to make a change to, like using esxcli to set the HBA queue depth. My list is a text file I create in nano or vi (see my post on installing nano on the vMA), one host per line: …
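A minimal sketch of the pattern, assuming a hosts.txt with one ESXi host per line and fastpass (vifp) targets already set up so esxcli doesn’t prompt for credentials; the module name and queue-depth value are placeholders for whatever your HBA driver actually uses:

    # run the same esxcli command against every host in the list
    for host in $(cat hosts.txt); do
      echo "=== $host ==="
      esxcli --server "$host" system module parameters set \
        -m qla2xxx -p "ql2xmaxqdepth=64"   # example QLogic driver/parameter; adjust for your HBAs
    done

The same loop works for any esxcli or vicfg-* command; only the body changes.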

Read More

Install the nano Editor on the VMware vMA 5

The VMware vSphere Management Assistant (vMA) is a handy appliance for interacting with your environment via the Linux command line. I use it a lot, and I’m starting to get more of my team to use it. The problem is that it only ships with the vi text editor, which, described politely, is sort of arcane. Being a UNIX guy I’m used to it, but for others who just want to edit a file it’s overkill. For those situations I like nano, a simple open source editor. To install it on the vMA, issue the command: sudo zypper install nano. If you’re prompted for a password, use the one you set for vi-admin (or whoever you’re logged into the vMA …
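In one place, with an optional extra line (my addition, not part of the post) to make nano the default editor for the current session:

    sudo zypper install nano
    export EDITOR=nano   # optional: use nano wherever $EDITOR is honored, e.g. crontab -e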

Read More

vCenter Hardware Status Stops Polling After 1 Hour

(Update, 1/19/2012, 1130 CST: The product manager for this feature, commenting below, has indicated this is actually a bug, and I’ve emailed her the details of my case so she can help track down where the information I was told came from, and fix my problem, too) —————— For what seems like an eternity I’ve had a support case open with VMware because the hardware status functionality in vCenter (4.1 and 5) stops updating. I was told today by my support guy that, for a variety of reasons that cannot be known by me, VMware has decided that the hardware status polling should stop after 1 hour. So my bug isn’t a bug, it’s a feature, case closed. I am …

Read More

The Mechanic's Car

It’s been almost a month since I’ve been able to post here. I’m hoping it’s like riding a bicycle, and I’ll get the hang of it again. I knew I’d be insanely busy throughout December, so I scheduled a bunch of posts (one a day) to auto-post. I figured I’d have time to tend comments when I had connectivity, but not any time to do any real writing. I’ve been very unhappy with WordPress’ scheduling mechanisms. Perhaps it’s just user error (PEBKAC, even — Problem Exists Between Keyboard And Chair). As a result I decided to delve into the XML-RPC stuff and write my own autoposter script. Something I could schedule in cron, and have it tweet some promotional stuff, …

Read More

How Large Your Linux Swap Partition Should Be

This is post #4 in my December-long series on Linux VM performance tuning, Tuningmas. One of the timeless questions in system administration is “how much swap space do I configure on my server?” The old rule of thumb was twice the amount of RAM, but does a server with 256 GB of RAM really need half a terabyte of swap? And what about VMs? Swapping on VMs is a serious performance drag. Would it be a good idea to just disable swap completely? One thing to consider is that there’s a tunable kernel parameter, /proc/sys/vm/swappiness, that controls the tendency of Linux to scavenge inactive memory pages and swap them out. It is a number from 0 to 100, …
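The excerpt stops at the knob itself; as an illustration of how you would inspect and change it (the value 10 below is just an example, not a recommendation from the post):

    cat /proc/sys/vm/swappiness          # show the current value; distros commonly default to 60
    sudo sysctl -w vm.swappiness=10      # lower the kernel's tendency to swap, effective until reboot
    echo "vm.swappiness = 10" | sudo tee -a /etc/sysctl.conf   # persist the setting across reboots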

Read More

Leave Some RAM For Filesystem Cache

This is post #3 in my December-long series on Linux VM performance tuning, Tuningmas. Many system administrators don’t realize it, but in most OSes RAM that’s unused by applications goes toward the filesystem cache, which speeds up disk operations. Some VM “right-sizing” tools don’t take this into account and recommend pretty tight memory allocations, which end up causing more disk I/O in the long term. Trading some RAM for better I/O performance is often a very good move, both for an individual VM and for the virtual environment as a whole. To understand what’s happening on a Linux VM and decide how much RAM to leave for filesystem cache, we need to understand the ‘free’ command: total used …
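The excerpt truncates the ‘free’ output; here is an illustrative example (the numbers are made up) of the kind of output the post walks through, with the “-/+ buffers/cache” line being the important one, since it shows usage after the kernel’s buffers and cache are counted as reclaimable:

    $ free -m
                 total       used       free     shared    buffers     cached
    Mem:          3953       3770        183          0        112       2650
    -/+ buffers/cache:       1008       2945
    Swap:         2047          0       2047

On the Mem line, “used” includes buffers and cache; the -/+ line is what applications are actually consuming, which is the number right-sizing decisions should start from.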

Read More