Does anyone know how the following three compare in terms of speed:
shared memory
tmpfs (/dev/shm)
mmap (/dev/shm)
Thanks!
Read about tmpfs here. The following is copied from that article and explains the relation between shared memory and tmpfs in particular.
1) There is always a kernel internal mount which you will not see at
all. This is used for shared anonymous mappings and SYSV shared
memory.
This mount does not depend on CONFIG_TMPFS. If CONFIG_TMPFS is not
set, the user-visible part of tmpfs is not built, but the internal
mechanisms are always present.
2) glibc 2.2 and above expects tmpfs to be mounted at /dev/shm for
POSIX shared memory (shm_open, shm_unlink). Adding the following
line to /etc/fstab should take care of this:
tmpfs /dev/shm tmpfs defaults 0 0
Remember to create the directory that you intend to mount tmpfs on
if necessary (/dev/shm is automagically created if you use devfs).
This mount is _not_ needed for SYSV shared memory. The internal
mount is used for that. (In the 2.3 kernel versions it was
necessary to mount the predecessor of tmpfs (shm fs) to use SYSV
shared memory)
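
As a side note that is not part of the quoted article: the following is a minimal, hedged C sketch of the SYSV case described in point 1). It goes through the kernel-internal mount, so no file ever shows up under /dev/shm; the 4 KiB size is arbitrary.

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    /* Allocate a new 4 KiB SYSV segment (shareable with forked children). */
    int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (id == -1) { perror("shmget"); return 1; }

    /* Attach it; no file appears under /dev/shm for this. */
    char *p = shmat(id, NULL, 0);
    if (p == (void *) -1) { perror("shmat"); return 1; }

    strcpy(p, "backed by the kernel-internal mount, not /dev/shm");
    puts(p);

    shmdt(p);
    shmctl(id, IPC_RMID, NULL);   /* mark the segment for removal */
    return 0;
}

Point 2), the POSIX interface, is the case discussed next.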
So when you actually use POSIX shared memory (which I have used before, too), glibc will create a file under /dev/shm, which is used to share data between the applications. The file descriptor it returns refers to that file, and you can pass it to mmap to map that file into memory, just as you can with any "real" file. The techniques you listed are therefore complementary, not competing: tmpfs is simply the file system that provides the in-memory files glibc uses to implement POSIX shared memory.
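
To make that concrete, here is a minimal, hedged sketch of the creating side (not from the original answer; the object name /example-shm and the 4 KiB size are made up for illustration). shm_open() returns a descriptor for a file that glibc creates under /dev/shm, and mmap() maps it exactly like a regular file:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    /* Create (or open) a POSIX shared memory object; glibc backs it
       with a file under /dev/shm, here /dev/shm/example-shm. */
    int fd = shm_open("/example-shm", O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }

    /* Give the object a size before mapping it. */
    if (ftruncate(fd, 4096) == -1) { perror("ftruncate"); return 1; }

    /* Map it into the address space, exactly as with a regular file. */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "hello from shared memory");

    munmap(p, 4096);
    close(fd);
    /* The object itself persists until shm_unlink("/example-shm"). */
    return 0;
}

Depending on your glibc version you may need to link with -lrt. While the object exists (until shm_unlink() removes it), /dev/shm/example-shm is visible in a directory listing.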
As an example, there is a process currently running on my box that has registered such a shared memory object:
# pwd
/dev/shm
# ls -lh
total 76K
-r-------- 1 js js 65M May 24 16:37 pulse-shm-1802989683
#
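
For completeness, here is a matching, hedged sketch of the consuming side, again using the made-up name /example-shm from the writer sketch above: it opens the same object by name, maps it, and reads what the writer stored there.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    /* Open the existing object by name; glibc resolves it to the
       corresponding file under /dev/shm. */
    int fd = shm_open("/example-shm", O_RDONLY, 0);
    if (fd == -1) { perror("shm_open"); return 1; }

    /* Ask how large the object is. */
    struct stat st;
    if (fstat(fd, &st) == -1) { perror("fstat"); return 1; }

    /* Map it read-only and print what the writer left there. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    printf("%s\n", p);

    munmap(p, st.st_size);
    close(fd);
    shm_unlink("/example-shm");   /* remove the object when done */
    return 0;
}

This is presumably the same pattern behind the pulse-shm-* object shown above: one process creates and maps the object, and the others open the same name and map it too.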