Just Because You Deleted A File Doesn't Mean It's Gone

I ran into a case the other day where someone reported an operating system bug: a filesystem was 98% full, but adding up the files on it showed that it should only be about 25% full.

It isn’t a bug. To understand why, we need to know a little about how files are stored, and then how they are deleted. A good place to start is the basic structure behind a UNIX-style filesystem, the inode. According to Wikipedia:

“an inode is a data structure on a traditional Unix-style file system such as UFS. An inode stores basic information about a regular file, directory, or other file system object… Each file has an inode and is identified by an inode number (often referred to as an “i-number” or “ino”) in the file system where it resides.

Inodes store information on files such as user and group ownership, access mode (read, write, execute permissions) and type of file. There is a fixed number of inodes, which indicates the maximum number of files each file system can hold. Typically when a file system is created about 1% of it is devoted to inodes.”
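If you want to look at those fields yourself, the stat() system call exposes them directly. Here's a minimal C sketch (example.txt is just a placeholder; point it at any existing file):

/* Minimal sketch: read a file's inode metadata with stat(2).
   "example.txt" is a placeholder path. */
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;

    if (stat("example.txt", &st) != 0) {
        perror("stat");
        return 1;
    }
    printf("inode number:  %llu\n", (unsigned long long)st.st_ino);
    printf("owner uid/gid: %u/%u\n", (unsigned)st.st_uid, (unsigned)st.st_gid);
    printf("type + mode:   %o\n", (unsigned)st.st_mode);
    printf("link count:    %lu\n", (unsigned long)st.st_nlink);
    return 0;
}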

Very importantly, an inode stores everything about a file except its name. File names live in directory entries, which simply map a name to an inode number, so one inode can have several names. Enter the hard link, which is a way to give the same file data multiple names inside a filesystem:

“A hard link is a reference, or pointer, to physical data on a storage volume. On most file systems, all named files are hard links. The name associated with the file is simply a label that refers the operating system to the actual data. As such, more than one name can be associated with the same data. Though called by different names, any changes made will affect the actual data, regardless of how the file is called at a later time. Hard links can only refer to data that exists on the same file system.”
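Hard links are easy to demonstrate in a few lines of C: link() gives existing data a second name, and stat() shows that both names resolve to the same inode. A minimal sketch, with original.txt and second-name.txt as placeholder paths (original.txt must already exist):

/* Minimal sketch: create a hard link and confirm both names share
   one inode. "original.txt" and "second-name.txt" are placeholders. */
#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
    struct stat a, b;

    if (link("original.txt", "second-name.txt") != 0) {
        perror("link");
        return 1;
    }
    stat("original.txt", &a);
    stat("second-name.txt", &b);

    /* Same i-number for both names; the link count went up by one. */
    printf("inodes: %llu and %llu, link count: %lu\n",
           (unsigned long long)a.st_ino,
           (unsigned long long)b.st_ino,
           (unsigned long)a.st_nlink);
    return 0;
}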

On most operating systems a file is marked for deletion when the last name for it is removed from the filesystem:

“The process of unlinking disassociates a name from the data on the volume without destroying the associated data. The data is still accessible as long as at least one link that points to it still exists. When the last link is removed, the space is considered free.”

This is true for files that are not open. If a file is deleted while a process still holds it open, however, the space doesn’t actually get marked as free until that process closes its file descriptor.
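You can watch this happen from C: unlink the name while the file is open, and the data stays readable through the descriptor. A minimal sketch (big.log is a placeholder for an existing file):

/* Minimal sketch: delete a file's name while it is open. The data
   remains readable, and its disk space stays allocated, until the
   descriptor is closed. "big.log" is a placeholder path. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    char buf[16];
    int fd = open("big.log", O_RDONLY);

    if (fd < 0) {
        perror("open");
        return 1;
    }

    unlink("big.log");          /* the name is gone from the directory... */

    ssize_t n = read(fd, buf, sizeof buf);  /* ...but the data is still there */
    printf("read %zd bytes from a file with no name\n", n);

    /* Only here, assuming no other names or open descriptors remain,
       is the space actually freed. */
    close(fd);
    return 0;
}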

That’s the “bug”: you can delete a file that is still open, but the space isn’t freed. So “du”, which walks the directory tree and adds up the files it can see, might show 25% usage while “df”, which asks the filesystem how many blocks are allocated, shows 98%. This happens a lot with big log files. You go in, find the huge file, copy it somewhere, delete the original, and then notice that nothing changed: the file isn’t there anymore, but the space isn’t free. Lots of people scratch their heads, declare it an OS bug, and reboot. A reboot does fix the problem, by closing every open file on the system, but restarting the process that holds the file open (or sending it a “kill -HUP”, which syslog supports) would have accomplished the same thing by forcing the software to close and reopen its logs, freeing the space.
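To see the accounting side, statvfs() reports the filesystem’s free block count, which is essentially the number “df” works from. In this minimal sketch (again with big.log as a placeholder for a large existing file), the free count barely moves when the open file is unlinked, then jumps when the descriptor is closed:

/* Minimal sketch: watch free space with statvfs(3) while an open
   file is unlinked and then closed. "big.log" is a placeholder for
   a large file in the current directory. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/statvfs.h>

static void report(const char *label)
{
    struct statvfs vfs;

    if (statvfs(".", &vfs) == 0)
        printf("%-27s %llu free blocks\n", label,
               (unsigned long long)vfs.f_bfree);
}

int main(void)
{
    int fd = open("big.log", O_RDONLY);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    report("before unlink:");
    unlink("big.log");
    report("after unlink (still open):");  /* roughly unchanged */
    close(fd);
    report("after close:");                /* the space comes back */
    return 0;
}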

This “bug” is actually a feature for some folks, though: it’s a way to use temporary files securely. A program can create a temporary file, open it, and then delete it so it isn’t visible in the filesystem, while it remains fully usable to the program. In fact, the standard C tmpfile() library function does this for you.
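tmpfile() takes only a few lines to use; the temporary file never has a visible name, and it vanishes when closed. A minimal sketch:

/* Minimal sketch: tmpfile(3) creates an anonymous temporary file,
   already opened and already unlinked, so no other program can find
   it by name. It disappears when closed (or when the program exits). */
#include <stdio.h>

int main(void)
{
    char line[64];
    FILE *tmp = tmpfile();

    if (tmp == NULL) {
        perror("tmpfile");
        return 1;
    }

    fputs("scratch data with no name in the filesystem\n", tmp);
    rewind(tmp);

    if (fgets(line, sizeof line, tmp))
        fputs(line, stdout);

    fclose(tmp);    /* the file and its space are gone now */
    return 0;
}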

ONLamp has a great list of secure programming techniques, excerpted from “Practical UNIX & Internet Security,” which covers these topics and more. Also, if you aren’t familiar with inodes, directories, and so on, the Wikipedia articles linked above are a good starting point. Consider them required reading if you’re a system administrator. 🙂

2 thoughts on “Just Because You Deleted A File Doesn't Mean It's Gone”

  1. I see this all the time with Apache logs; you can truncate the file without killing the process to clear space if you need to.

    Running “> log.txt” from the shell will reset the file to 0 bytes.
