In issue 18, I described how to avoid certain security risks when removing files from your /tmp directory. I received several letters in response, and I'll summarize the issues here. Some people said they didn't need the extra security. Well, they're free not to use my script.
Michael Veksler <[email protected]> told me he was worried about using the access time to determine a file's age. His main concern was that files could be "accidentally" touched by
find ... | xargs grep ... constructions. Personally, I don't have this problem, as I tend to restrict the domain of my find sweeps.
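To make the concern concrete anyway, here is a minimal Perl sketch (the filename is hypothetical, and it assumes the filesystem updates access times at all, i.e. is not mounted noatime) showing that merely reading a file advances its atime:

  my $file = '/tmp/example';       # hypothetical file
  my $before = (stat $file)[8];    # element 8 of stat() is the atime
  open(FILE, "< $file") or die "open: $!";
  read(FILE, my $buf, 1);          # this is all a grep sweep does...
  close(FILE);
  my $after = (stat $file)[8];
  print "atime moved forward\n" if $after > $before;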
As I said in my first article, this is a matter of personal taste. I frequently unpack archives into my /tmp hierarchy, and I want to be certain the files will stay there until I no longer need them.
To me, three days after last looking at a file seems a reasonable delay for that.
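As a sketch of that policy (not the actual script), the age test can be written with Perl's -A operator, which reports a file's access age in days:

  use File::Find;

  # Print everything under /tmp that has not been read for three days.
  find(sub {
      print "$File::Find::name\n" if -f $_ && -A $_ > 3;
  }, '/tmp');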
But recently I started using afio for transporting files that won't fit on one floppy, and afio remembers the access time during archiving and restores it while unpacking. This could limit the lifespan of my files if I don't look at them immediately. (As a side note, zip also sets the access time.)
Obviously, there is one other possibility I neglected: using ctime (inode change time). It is not possible to set this to an arbitrary value, and it doesn't change as easily as the access time.
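Switching the sweep to ctime is a one-character change in Perl: the -C operator reports the inode change age in days, and unlike the access time, a user cannot set it to an arbitrary value with touch:

  use File::Find;

  # Same sweep as above, keyed on the inode change time instead.
  find(sub {
      print "$File::Find::name\n" if -f $_ && -C $_ > 3;
  }, '/tmp');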
Perl has a rather large memory footprint and is not available on every site. Therefore, Francois Wautier suggested:
  cp -p /bin/rm-static /usr/bin/find-static /tmp
  chroot /tmp /find-static ... -exec /rm-static {} \;
  rm /tmp/rm-static /tmp/find-static
rm-static and find-static are statically linked versions of rm and find, respectively. The -p flag ensures the copies keep root as their owner, closing one security risk: a user might have created her own /tmp/rm-static with the intent of replacing the binary.
This gives rise to a new set of race conditions, although they aren't as easy to exploit as the
find ... | xargs rm security hole described in my first article.
In general, I would advise against executing arbitrary files with root permissions, especially if they reside in a publicly writable directory (like /tmp). (This is also related to the reason why `.' should never be in root's path.)
This leads me to a real security risk (this one I found myself):
I recently upgraded to perl 5.004. After the upgrade, I noticed my cleantmp script started emitting warnings about not finding the pwd program.
I looked into the Perl module code, and it runs the external pwd program to determine the current directory.
The script itself has no problem with the missing binary, as I'm using absolute paths everywhere. But it opens a huge security hole: since the script chroots into /tmp, the only pwd that can be found is one somewhere in the /tmp tree, so an executable called pwd in the right place can give a user a process executing with root permissions.
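A minimal sketch of how the hole arises (do not run this as root; the backtick call stands in for whatever the module does internally):

  # Inside the chroot, a PATH search can only find files under /tmp.
  chroot('/tmp') or die "chroot: $!";
  chdir('/')     or die "chdir: $!";
  # If library code now shells out to pwd, it executes a binary that
  # an ordinary user may have planted there -- with root permissions.
  my $cwd = `pwd`;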
In this case, the chroot decreases security, instead of increasing it.
For this reason, I have decided to remove the chroot from the script entirely. That way, I can be sure only trusted binaries are executed.
In the first version of my script, I demonstrated how to exclude some files from being deleted. I obviously forgot one thing: a user could create files or directories with the same names, and they would never be deleted.
The solution is easy: test the owner of the file, and if it isn't root, just delete the file.
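In Perl, that test is only a couple of lines; the exclusion list below is hypothetical, the real script keeps its own:

  use File::Find;

  # Hypothetical names a root-owned /tmp is allowed to keep.
  my %keep = map { $_ => 1 } ('/tmp/.X11-unix', '/tmp/.font-unix');

  find(sub {
      return unless $keep{$File::Find::name};
      my $uid = (lstat $_)[4];     # owner of the entry itself,
                                   # without following a symlink
      if ($uid != 0) {             # not root's file: an impostor
          unlink $_ or rmdir $_;   # plain file or (empty) directory
      }
  }, '/tmp');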
Here is a link to the new script. Comments are welcome.