Use a RAM Disk to Improve Disk Access Times

This is post #15 in my December 2013 series about Linux Virtual Machine Performance Tuning. For more, please see the tag “Linux VM Performance Tuning.”

One of the biggest things folks in IT worry about is data loss. We go to enormous lengths to protect our data, using backups, snapshots, remote replication, rsync, scp, temporary copies in our own home directories… you name it. The thing is, as we look at our systems we sometimes discover that our applications write a lot of temporary files. These temporary files often don’t need any particular protection because they’re transient, yet we write them to our expensive, already overtaxed disk arrays, commit the writes over long distances to our DR sites, and take up space in our backup systems with them.

A great example of this is folks who run Nagios installations in virtual machines. Nagios writes to several files (status.dat, object.cache) constantly, but those files get rebuilt upon restart, so there’s no need to protect them with backups or replication. Or at all, frankly. What if we could get Nagios to stop beating up our storage by putting those writes somewhere that’s incredibly fast, isn’t on the SAN, and isn’t protected against data loss?

We can. It’s called a RAM disk, and it trades some local memory for disk I/O.
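For the Nagios case above, that could be as simple as pointing a handful of nagios.cfg directives at a RAM disk. A minimal sketch, assuming a tmpfs mounted at /ramdisk like the one I set up later in this post; the directive names come from a stock nagios.cfg, but the paths here are only examples:

# excerpt from nagios.cfg: point the constantly-rewritten files at the RAM disk
status_file=/ramdisk/status.dat
object_cache_file=/ramdisk/objects.cache
temp_file=/ramdisk/nagios.tmp

Restart Nagios after a change like that and it rebuilds those files in the new location, just as it does today.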

The RAM Disk Giveth, the RAM Disk Taketh Away

You might not realize it, but many Linux hosts already have a RAM disk at /dev/shm. Its default size is half of the available physical RAM. In this case my VM has 24 GB of RAM allocated to it:

$ df /dev/shm
Filesystem     1K-blocks  Used Available Use% Mounted on
tmpfs           12294916     0  12294916   0% /dev/shm

As we store objects in there we see our free RAM drop. Here’s the output of ‘free’ before I stored a 10 GB file in /dev/shm:

$ free
             total       used       free     shared    buffers     cached
Mem:      24589832   24200748     389084          0    2035452   14904840
-/+ buffers/cache:    7260456   17329376
Swap:      4194296        240    4194056
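
For the record, dropping a 10 GB file into /dev/shm is just an ordinary file write; here’s one way to do it with dd (the filename is only an example):

$ dd if=/dev/zero of=/dev/shm/bigfile bs=1M count=10240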

And here’s free after I stored the 10 GB file there:

$ free
             total       used       free     shared    buffers     cached
Mem:      24589832   24322928     266904          0     976296   17643784
-/+ buffers/cache:    5702848   18886984
Swap:      4194296      54436    4139860

And here’s free again once I deleted the file:

$ free
             total       used       free     shared    buffers     cached
Mem:      24589832   14277424   10312408          0     979172    7555172
-/+ buffers/cache:    5743080   18846752
Swap:      4194296      63784    4130512

The two numbers we care about here are cached data and swap usage. Storing a bunch of data in half of system RAM has unintended side effects, and you can see them in the “cached” column, which is where use of /dev/shm shows up. I started with about 14 GB of cached data, I stored 10 GB of stuff, and the cache rose to about 17 GB. Then I deleted the file and it dropped to about 7 GB, which makes sense. That means that, in the process of storing 10 GB of data, I evicted about 7 GB of disk cache. That’s one of the tradeoffs – you’re competing with your disk cache for use of the RAM. Did my application performance suffer because of it? In my case I don’t know, but I’d want to keep an eye on that.

The other big observation here is swap. Notice that it went from 240 KB in use at the start to 63784 KB by the end? That’s because the system opted to preserve some file cache over application data, so it paged application data out to the swap file, causing a burst of disk I/O of its own. This behavior is governed by the kernel parameter vm.swappiness, which I wrote about earlier in this series, and it is something to be aware of because we’re trying to avoid disk I/O, not create more.
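
Checking where your host stands is quick; on most distributions the default is 60, and lower values make the kernel less eager to page application data out:

$ sysctl vm.swappiness
vm.swappiness = 60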

The last big point is that when I reboot my VM all of that data disappears. Don’t put things there you care about. Some folks work around this with schemes that use rsync and cron to periodically synchronize their RAM disks to a more permanent data store, which is interesting. I’ve even seen discussions about using LVM to do snapshots, etc., but that sounds really complicated, and complexity is something I try to reduce.
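
If you do want a loose safety net, the cron-plus-rsync approach is about as simple as it gets. A rough sketch, assuming a RAM disk at /ramdisk and a made-up backing directory on persistent storage:

# /etc/cron.d/ramdisk-sync (example paths only)
# every five minutes, mirror the RAM disk contents to real storage
*/5 * * * * root rsync -a --delete /ramdisk/ /var/lib/ramdisk-backup/
# at boot, seed the freshly mounted (empty) RAM disk from the last copy
@reboot     root rsync -a /var/lib/ramdisk-backup/ /ramdisk/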

Gotcha. So what do you suggest?

I don’t like the idea of something using up to half my system RAM, especially in cloud environments where I’m paying for everything à la carte. What I’d recommend is configuring your own RAM disk at a mount point you know about, with a static size. By putting something like:

none /ramdisk tmpfs defaults,size=256m 0 0

in your /etc/fstab you can have a 256 MB RAM disk at /ramdisk. This way the size is bounded, so you know you won’t ever exceed 256 MB of data there. I’d also suggest adjusting vm.swappiness to avoid doing any paging to disk.
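
Once the fstab entry is in place, bringing the RAM disk up and making the swappiness change persistent looks roughly like this (the value of 10 is just an example; pick what suits your workload):

$ sudo mkdir -p /ramdisk
$ sudo mount /ramdisk
$ df -h /ramdisk
$ sudo sysctl -w vm.swappiness=10
$ echo 'vm.swappiness = 10' | sudo tee -a /etc/sysctl.conf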

I’ve seen a well-placed RAM disk drop disk I/O on a VM from 3500 IOPS to 100 IOPS. I’ve also seen people store important data in their RAM disks, only to have a reboot wipe it all away. This is a powerful tool, so be creative but safe, and remember that memory management is a balance between RAM disk, application data, and file cache.
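
If you want to put numbers like that on your own environment, watching per-device throughput with iostat (part of the sysstat package) before and after the change is a simple way to do it; the 5 is the sampling interval in seconds:

$ iostat -x 5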
