I have scripts that create hundreds of small temp files in quick succession; each file is read back in very soon afterwards and then unlinked.
My testing shows little if any performance difference between putting these files in /tmp (disk-backed) or in /dev/shm (filesystem-level shared memory) on Linux, even under moderate load. I attribute this to the filesystem cache.
Granted, the disk will eventually get hit by the filesystem activity, but for many small write-read temp files, why would you (not) recommend /dev/shm over a disk-backed directory? Have you noticed significant performance gains from a shared-memory directory over a cached VFS?
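
For reference, a minimal sketch of the kind of test I mean (the file count and file size are arbitrary placeholders, not my exact workload):

```python
#!/usr/bin/env python3
# Rough micro-benchmark of the workload described above: create many small
# temp files, read each one back immediately, then unlink it.
# The directories, count and size below are illustrative assumptions.
import os
import time
import tempfile

def churn_temp_files(directory, count=500, size=4096):
    """Write, read back, and unlink `count` files of `size` bytes in `directory`."""
    payload = os.urandom(size)
    start = time.perf_counter()
    for _ in range(count):
        fd, path = tempfile.mkstemp(dir=directory)
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(payload)
            with open(path, "rb") as f:
                f.read()
        finally:
            os.unlink(path)
    return time.perf_counter() - start

if __name__ == "__main__":
    for directory in ("/tmp", "/dev/shm"):
        elapsed = churn_temp_files(directory)
        print(f"{directory}: {elapsed:.3f} s for 500 write/read/unlink cycles")
```

Note that nothing here calls fsync(), so for the disk-backed directory the data only reaches the page cache before the file is unlinked, which would be consistent with seeing little difference between the two.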
/dev/shm is intended for a specific purpose, namely backing POSIX shared memory objects (shm_open and friends), not as a general place for arbitrary programs to drop files.
In contrast, /tmp is made for exactly this. On my systems, /tmp is a tmpfs as well, unlike /var/tmp, which is meant for larger files that may need to stay around longer.
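
If you want to see what actually backs these directories on a given machine, here is a small sketch that reads /proc/mounts (findmnt or the mount command would tell you the same thing):

```python
#!/usr/bin/env python3
# Minimal sketch: report the filesystem type backing /tmp, /var/tmp and
# /dev/shm by walking each path up to its mount point in /proc/mounts.
import os

def mount_info(path):
    """Return (mount_point, fs_type) for the mount containing `path`."""
    mounts = {}
    with open("/proc/mounts") as f:
        for line in f:
            _, mount_point, fs_type, *_ = line.split()
            mounts[mount_point] = fs_type
    path = os.path.realpath(path)
    while path not in mounts:          # walk up until we reach a mount point
        path = os.path.dirname(path)
    return path, mounts[path]

if __name__ == "__main__":
    for directory in ("/tmp", "/var/tmp", "/dev/shm"):
        mount_point, fs_type = mount_info(directory)
        print(f"{directory}: mounted at {mount_point} as {fs_type}")
```

On a system where /tmp is a tmpfs, this prints tmpfs for both /tmp and /dev/shm, while /var/tmp typically sits on the disk-backed root filesystem.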