Table Stakes for Storage Arrays

I was just looking at Andreas Lesslhumer’s post about blog posting volume in the virtualization community, and it’s depressing. I didn’t blog a whole lot here last year. Why was that? Because I was writing elsewhere! Speaking of that, the first half of my “Six Features You Absolutely Need on Your Storage in 2015” list is up over at The Virtualization Practice, wherein I outline the table stakes for enterprise storage arrays, get only slightly snarky about the fact that, as an industry, we’re still discussing why & how to use flash, and highlight the good work some vendors are doing (SolidFire, Dell, and Tintri in this post, more in next week’s second part). Check it out.

Use a RAM Disk to Improve Disk Access Times

This is post #15 in my December 2013 series about Linux Virtual Machine Performance Tuning. For more, please see the tag “Linux VM Performance Tuning.” One of the biggest things folks in IT worry about is data loss. We go to enormous lengths to protect our data, using backups, snapshots, remote replication, rsync, scp, temporary copies in our own home directories… you name it. The thing is, as we look at our systems we sometimes discover that our applications do a lot of writing of temporary files. These temporary files often don’t need any particular protection because they’re transient, yet we write them to our expensive, already overtaxed disk arrays, commit the writes over long distances to our DR sites, …
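The idea can be sketched with the tmpfs most distributions already mount at /dev/shm; the directory and file names below are illustrative examples, not anything from the post.

```shell
#!/bin/sh
# Most Linux distributions already mount a tmpfs (a RAM-backed filesystem)
# at /dev/shm. Writing transient files there keeps them in memory and off
# the disk array entirely. Directory and file names are illustrative.
SCRATCH=/dev/shm/scratch
mkdir -p "$SCRATCH"

# This write never touches a physical disk:
echo "transient data" > "$SCRATCH/work.tmp"

# Confirm /dev/shm really is a tmpfs mount:
df -t tmpfs /dev/shm
```

For a dedicated RAM disk you can also mount your own tmpfs (for example, `mount -t tmpfs -o size=512m tmpfs /mnt/scratch`) or add a matching /etc/fstab entry. Everything on it vanishes at reboot, which is exactly the point for transient files.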

Read More

Adjust vm.swappiness to Avoid Unneeded Disk I/O

This is post #11 in my December 2013 series about Linux Virtual Machine Performance Tuning. For more, please see the tag “Linux VM Performance Tuning.” The Linux kernel has quite a number of tunable options in it. One of those is vm.swappiness, a parameter that helps guide the kernel in making decisions about memory. “vm” in this case means “virtual memory,” which doesn’t mean memory allocated by a hypervisor but refers to the addressing scheme the Linux kernel uses to handle memory. Even on a physical host you have “virtual memory” within the OS. Memory on a Linux box is used for a number of different things. One way it is used is internally for buffers for things like network …
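A quick sketch of inspecting and tuning the parameter; the value 10 is just an example, not a recommendation from the post.

```shell
#!/bin/sh
# Read the current swappiness (0 = avoid swapping, 100 = swap aggressively).
cat /proc/sys/vm/swappiness

# Changing it at runtime requires root, e.g.:
#   sysctl vm.swappiness=10
# To persist it across reboots, add "vm.swappiness = 10" to /etc/sysctl.conf.
```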

Read More

Zero Out Free Space

This is post #10 in my December 2013 series about Linux Virtual Machine Performance Tuning. For more, please see the tag “Linux VM Performance Tuning.” When we talked about the rationale behind storing logs centrally, one big reason was thin-provisioned virtual disks. Those disks grow over time because filesystems on a virtual machine currently have no way to tell the underlying storage that they’re done using certain blocks on disk. There is a way to make these VMs thin again, and I wrote about it as step 9 in my guide to preparing Linux Template VMs. In short, we run a script on the VM that writes zeroes to most of the free space on the VM:

#!/bin/sh
# Determine …
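The excerpt cuts the script off, but the core zero-fill operation can be sketched like this; it is not the post’s actual script, and the path and size are deliberately small and illustrative.

```shell
#!/bin/sh
# Hedged sketch of the zero-fill idea: write a file of zeroes into free
# space, then delete it. The freed blocks are now zeroed, so the hypervisor
# or array can reclaim them (e.g. during a storage vMotion). The real script
# would fill most of the guest filesystem's free space; we cap it at 1 MB.
dd if=/dev/zero of=/tmp/zerofile bs=1024 count=1024 2>/dev/null

# Verify the file contains nothing but zero bytes:
tr -d '\0' < /tmp/zerofile | wc -c

# Final step in practice: rm /tmp/zerofile, returning the zeroed blocks
# to the filesystem.
```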

Read More

Use elevator=noop For Linux Virtual Machines

This is post #6 in my December 2013 series about Linux Virtual Machine Performance Tuning. For more, please see the tag “Linux VM Performance Tuning.” Modern operating systems are fairly modular, and often have different modules to deal with memory, network I/O, and CPU scheduling. Disk I/O is no exception under Linux, as there are usually four different schedulers a sysadmin can choose from. Red Hat had a nice write-up on these a few years back and it remains relevant today: The Completely Fair Queuing (CFQ) scheduler is the default algorithm in Red Hat Enterprise Linux 4. As the name implies, CFQ maintains a scalable per-process I/O queue and attempts to distribute the available I/O bandwidth equally among all I/O …
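Checking and changing the scheduler can be sketched as follows; the device name sda is an example, since virtual machines often expose different names.

```shell
#!/bin/sh
# List the available I/O schedulers for every block device; the active one
# is shown in [brackets]. Device names vary (sda, vda, ...), hence the loop.
for f in /sys/block/*/queue/scheduler; do
    if [ -r "$f" ]; then
        echo "$f: $(cat "$f")"
    fi
done

# To switch one disk to noop at runtime (as root):
#   echo noop > /sys/block/sda/queue/scheduler
# To make it the boot-time default, add "elevator=noop" to the kernel
# command line in your bootloader configuration.
```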

Read More

Free Upgrade to 25 GB for Microsoft SkyDrive Users

I don’t know all the details, so it may not apply to everybody, but if you’re a SkyDrive user (or just have an account) you might be eligible for a free upgrade from 7 GB to 25 GB (I haven’t heard of anybody not being eligible, though).

1. Log in to https://skydrive.live.com/
2. Click “Manage Storage” at the bottom of the left navigation column.
3. Click the magic button to upgrade your SkyDrive Free plan from 7 GB to 25 GB.

It is my understanding that this is a limited-time offer, so get on it. It takes about 20 seconds if you know your password. 🙂

Why Is It Called “Resilvering”?

Q: Why do some people refer to the process of remirroring or rebuilding a RAID 1 drive set as “resilvering?” A: Antique mirrors (the reflective kind you hang on a wall or have in your bathroom) used silver (Ag) for the reflective coating below the glass. Over time that silver would get tarnished and/or damaged, so you’d restore them by re-silvering them. I’m sure you’ve all seen this, where an old mirror has streaks in it but they’re below the glass. When your RAID 1 mirror set gets “tarnished” you resilver it and it’s shiny & new again. You can rebuild a RAID 5 array, but you resilver a mirror. 🙂

Can't Change Virtual Disk Formats When Targeting a Datastore Cluster

As I work more and more with vSphere 5 I am finding a few anomalies. One of them appears to be a bug where you cannot switch a VM’s disk format during a storage vMotion when you target a datastore cluster. To be more precise, it looks like you should be able to, but it doesn’t end up happening. The workaround is to disable Storage DRS for that VM, target a datastore directly, then edit the Storage DRS settings afterwards to re-enable Storage DRS for that VM. This is what it looks like when I try. I select “thin provision” as the virtual disk format, choose a datastore cluster (in this case it’s my “Tier 2” cluster), and click next: …

Read More

Change the Default PSP in VMware vSphere 5

One thing I do to my VMware ESXi hosts is set the default Path Selection Policy (PSP) for certain Storage Array Type Plugins (SATPs) to do the right thing. This eliminates my need to reconfigure each datastore’s multipath settings on each host, and helps guarantee that a new LUN added by someone other than me will function correctly from the start. Consider it part of my “make it easy to do the right thing” sysadmin mantra. This has been covered by others at various points for older vSphere versions, but the vMA & esxcli changed some with version 5, so here are the commands I use. I have two different arrays, one that is active/active without a specific SATP (so …
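The excerpt ends before the commands themselves, so as a hedged illustration (not necessarily the post’s exact invocations), the ESXi 5 esxcli syntax for changing an SATP’s default PSP looks like this; the SATP and PSP names shown are examples you would match to your own arrays.

```shell
# List the SATPs this host knows about and their current default PSPs:
esxcli storage nmp satp list

# Make Round Robin the default PSP for the generic active/active SATP
# (SATP and PSP names are examples; choose the ones your arrays claim):
esxcli storage nmp satp set --satp=VMW_SATP_DEFAULT_AA --default-psp=VMW_PSP_RR
```

These are host-configuration commands that only run against an ESXi host (directly or via the vMA), so new LUNs claimed by that SATP pick up the right PSP automatically.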

Read More

How Large Your Linux Swap Partition Should Be

This is post #4 in my December-long series on Linux VM performance tuning, Tuningmas. One of the timeless questions in system administration is “how much swap space do I configure on my server?” The old rule of thumb was twice the amount of memory, but does a server with 256 GB of RAM really need a half terabyte of swap? And what about VMs? Swapping on VMs is a serious performance drag. Would it be a good idea to just disable swap completely? One thing to consider is that there’s a tunable kernel parameter, /proc/sys/vm/swappiness, that controls the tendency of Linux to scavenge inactive memory pages and swap them out. It is a number from 0 to 100, …
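Before picking a number, it helps to look at what the box actually has and uses; this read-only sketch (my own, not from the post) pulls the figures that should drive a sizing decision.

```shell
#!/bin/sh
# Gather the inputs to a swap-sizing decision: total RAM, configured swap,
# and the swappiness tunable the post describes. All reads, no changes.
grep -E 'MemTotal|SwapTotal' /proc/meminfo
cat /proc/swaps
cat /proc/sys/vm/swappiness
```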

Read More