
"Linux Gazette...making Linux just a little more fun!"


Cleaning Up Your /tmp...The Safe Way

By Guy Geens, ggeens@iname.com


Introduction

Removing temporary files left over in your /tmp directory is not as easy as it looks. At least not on a multi-user system that's connected to a network.

If you do it the wrong way, you can leave your system open to attacks that could compromise your system's integrity.

What's eating my disk space?

So, you have your Linux box set up. You have finally installed everything you want, and you can have some fun! But wait. Your free disk space is slowly shrinking.

So, you start looking into where this disk space is going. Basically, you will find the following disk hogs:

   * /tmp: temporary files
   * /var/tmp: more temporary files
   * /var/catman: preformatted man pages

Of course, there are others, but in this article, I'll concentrate on these three, because you normally don't lose data when you erase their contents. At most, you will have to wait while the files are regenerated.

The quick and dirty solution

Digging through a few man pages, you come up with something like this:

find /var/catman -type f -atime +7 -print | xargs -- rm -f --

This will remove all formatted man pages that have not been read for more than 7 days. The find command makes a list of these files and sends it to xargs. xargs puts the filenames on the command line and calls rm -f to delete them. The double dashes are there so that any file names starting with a minus will not be misinterpreted as options.

(Actually, in this case, find prints out full path names, which are guaranteed to start with a /. But it's better to be safe than sorry.)

This will work fine, and you can place this in your crontab file or one of your start-up scripts.
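For example, a line like this in root's crontab would run the cleanup every night (the schedule here is just an illustration; pick whatever suits your system):

15 4 * * * find /var/catman -type f -atime +7 -print | xargs -- rm -f --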

Note that I used /var/catman in the previous example. You might be thinking: "So, why not use it for /tmp?" There is a good reason for that. Let me start by elaborating on the difference between the /var/catman and /tmp directories. (The situation for /var/tmp is the same as for /tmp, so you can replace every instance of /tmp with /var/tmp in the following text.)

Why /var/catman is easy

If you look at the files in /var/catman, you will notice that all the files are owned by the same user (normally man). This user is also the only one who has write permissions on the directories. That is because the only program that ever writes to this directory tree is man. Let's look at /usr/bin/man:

-rwsr-sr-x 1 man man 29716 Apr 8 22:14 /usr/bin/man*

(Notice the two letters 's' in the first column.)

The program is running setuid man, i.e., it takes on the identity and privileges of this 'user'. (It also takes the group privileges, but that is not really important for our discussion.) man is not a real user: nobody will ever log in under this identity. Therefore, man (the program) can write to directories a normal user cannot write to.
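(For the curious: such a binary is typically set up by your distribution, with something like the following two commands, run as root. This is purely an illustration; you don't need to run it yourself.)

chown man:man /usr/bin/man
chmod ug+s /usr/bin/man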

Because you know all files in the directory tree are generated by one program, it is easy to maintain.

And now /tmp

In /tmp, we have a totally different situation. First of all, the file permissions:

drwxrwxrwt 10 root root 3072 May 18 21:09 /tmp/

We can see that everyone can write to this directory: everyone can create, rename or delete files and directories here.

There is one limitation: the 'sticky bit' is switched on. (Notice the t at the end of the first column.) This means a user can only delete or rename files he owns himself. This prevents users from pestering each other by removing one another's temporary files.
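For reference, these permissions (world-writable, with the sticky bit on top) correspond to mode 1777:

chmod 1777 /tmp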

If you were to use the simple script above, there are security risks involved. Let me repeat the simple one-line script from above:

find /tmp -type f -atime +7 -print | xargs -- rm -f --

Suppose there is a file /tmp/dir/file, and it is older than 7 days.

By the time find passes this filename to xargs, the directory might have been renamed to something else, and there might even be another directory /tmp/dir.

(And then I haven't even mentioned the possibility of embedded newlines in filenames. But that can easily be fixed by using -print0 instead of -print, together with xargs -0.)
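For completeness, the newline-safe variant looks like this; note that it does nothing about the race condition just described:

find /tmp -type f -atime +7 -print0 | xargs -0 -- rm -f --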

All this could lead to the wrong file being deleted, either by accident or by malicious intent. By clever use of symbolic links, an attacker can exploit this weakness to make root delete important system files.
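To make this concrete, here is a sketch of the kind of sequence an attacker could try. The paths and timing are purely illustrative:

# Beforehand: create a directory containing a file that will grow old
mkdir /tmp/dir ; touch /tmp/dir/passwd
# ...wait for the file to age, and for the cleanup job to start...
# In the window between find printing the name and rm acting on it:
mv /tmp/dir /tmp/harmless
ln -s /etc /tmp/dir
# rm -f /tmp/dir/passwd now resolves to /etc/passwd - and runs as root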

For an in-depth discussion of the problem, see the Bugtraq mailing list archives (thread "[linux-security] Things NOT to put in root's crontab").

This problem is inherent in find's algorithm: a long time can pass between the moment find generates a filename internally and the moment it is passed on to the next program. This is because find recurses into subdirectories before it tests the files in a particular directory.

So how do we get around this?

A first idea might be:

find ... -exec rm {} \;

but unfortunately, this suffers from the same problem, as the exec clause also passes on the full pathname.

In order to solve the problem, I wrote a Perl script, which I named cleantmp.

I will explain how it works, and why it is safer than the aforementioned scripts.

First, I indicate that I'm using the File::Find module. After this statement, I can call the &find subroutine.

use File::Find;

Then I chroot to /tmp. This changes the script's root directory to /tmp, and makes sure the script cannot access any files outside of this hierarchy.

Perl only allows a chroot when the user is root. I check for this case, to make testing easier.

# Security measure: chroot to /tmp
$tmpdir = '/tmp/';
chdir ($tmpdir) || die "$tmpdir not accessible: $!";
if (chroot($tmpdir)) { # chroot() fails when not run by root
    ($prefix = $tmpdir) =~ s,/+$,,;
    $root = '/';
    $test = 0;
} else {
    # Not run by root - test only
    $prefix = '';
    $root = $tmpdir;
    $test = 1;
}

Then we come to these lines:

&find(\&do_files, $root);
&find(\&do_dirs, $root);

Here, I let the find subroutine recurse through all the subdirectories of /tmp. The functions do_files and do_dirs are called for each file found. There are two passes over the directory tree: one for the files, and one for the directories. The directories have to come second: only after a directory has been emptied of old files can it be removed itself.

Now we have the function do_files.

sub do_files {
    (($dev,$ino,$mode,$nlink,$uid,$gid) = lstat($_)) &&
    (-f _ || -l _ ) &&
    (int(-A _) > 3) &&
    ! /^\.X.*lock$/ &&
    &removefile ($_) && push @list, $File::Find::name;
}

Basically, this is the output of the find2perl program, with a few small changes.

This routine is called with $_ set to the filename under inspection, and with the current directory set to the one in which the file resides. Now let's see what it does. (In case you don't know Perl: the && operator short-circuits, just like in C.)

  1. The first line gets the file's parameters from the kernel;
  2. If that succeeds, we check if it is a regular file or a symbolic link (as opposed to a directory or a special file);
  3. Then, we test if the file is old enough to be deleted (older than 3 days);
  4. The fourth line makes sure X's lock files (of the form /tmp/.X0-lock) are not removed;
  5. The last line will remove the file, and keep a listing of all deleted files.

The removefile subroutine merely tests if the $test flag is set, and if not, deletes the file.
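It is not reproduced here, but a minimal sketch of its shape, in the style of the rest of the script, could look like this:

sub removefile {
    local ($file) = @_;
    return 1 if $test;      # test mode: pretend, so the file still gets listed
    return unlink ($file);  # unlink returns 1 on success
}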

The do_dirs subroutine is very similar to do_files, and I won't go into the details.
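For those who want an idea of its shape anyway, here is a guess, assuming a hypothetical removedir counterpart of removefile that calls rmdir instead of unlink; the lost+found test is my own example of a "special directory" check. Note that rmdir refuses to remove non-empty directories, which is exactly the behaviour we want here:

sub do_dirs {
    (($dev,$ino,$mode,$nlink,$uid,$gid) = lstat($_)) &&
    -d _ &&
    (int(-A _) > 3) &&
    ! /^lost\+found$/ &&
    &removedir ($_) && push @list, $File::Find::name;
    # removedir: hypothetical counterpart of removefile, using rmdir
}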

A few remarks

I use the access time to determine a file's age. The reason for this is simple. I sometimes unpack archives into my /tmp directory. When tar creates the files, it gives them the modification time they had in the archive. In one of my earlier scripts, I tested on the mtime instead. But then one day, I was browsing through a freshly unpacked archive at the very moment cron started to clean up. (Hey?? Where did my files go?)

As I said before, the script checks for some special files (and, in do_dirs, special directories), because they are important to the system. If you have a separate /tmp partition with quota enabled on it, you should also check for quota's support files, quota.user and quota.group.
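One way to do that is an extra test right next to the X lock file check. A sketch of the do_files routine from above, so modified:

sub do_files {
    (($dev,$ino,$mode,$nlink,$uid,$gid) = lstat($_)) &&
    (-f _ || -l _ ) &&
    (int(-A _) > 3) &&
    ! /^\.X.*lock$/ &&
    ! /^quota\.(user|group)$/ &&   # my addition: leave quota's files alone
    &removefile ($_) && push @list, $File::Find::name;
}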

The script also generates a list of all deleted files and directories. If you don't want this listing, redirect the output to /dev/null.

Why this is safe

The main difference from the find constructions I showed before is this: the file to be deleted is never referenced by its full pathname. If a directory is renamed while the script is scanning it, that has no effect: the script doesn't notice, and it still deletes the right files.

I have been thinking about weaknesses, and I couldn't find any. Now I'm handing the script to you for inspection. I'm convinced there are no hidden security risks, but if you do find one, let me know.


Copyright © 1997, Guy Geens
Published in Issue 18 of the Linux Gazette, June 1997

