views: 1730
answers: 4

Hi.

Does anyone know how well the following three compare in terms of speed:

  • shared memory

  • tmpfs (/dev/shm)

  • mmap (/dev/shm)

Thanks!

A: 

tmpfs is the slowest. Shared memory and mmap are the same speed.

Could you please explain... Thank you
hhafez
Indeed, it seems that tmpfs actually powers shared memory? And I presume mmap is only fast when using tmpfs as the underlying transport?
SyRenity
+1  A: 

"It depends." In general, they're all in-memory and dependent upon system implementation so the performance will be negligible and platform-specific for most uses. If you really care about performance, you should profile and determine your requirements. It's pretty trivial to replace any one of those methods with another.

That said, shared memory is the least intensive as there are no file operations involved (but again, very implementation-dependent). If you need to open and close (map/unmap) repeatedly, lots of times, the overhead could become significant.
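
If you do want to measure it, something along these lines is enough to get real numbers (untested sketch; the 64M size and the /dev/shm path are just examples, and you may need -lrt on older glibc). It times a memset over a POSIX shared-memory mapping against a plain write() to a file on tmpfs:

/* Rough timing sketch: POSIX shm mapping vs. write() to a tmpfs file.
 * Sizes and names are arbitrary; adapt to your own workload. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

#define SHM_NAME "/speed-test"
#define SIZE (64 * 1024 * 1024)

static double elapsed(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    struct timespec t0, t1;

    /* POSIX shared memory: shm_open + ftruncate + mmap */
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, SIZE) < 0) { perror("shm"); return 1; }
    char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    clock_gettime(CLOCK_MONOTONIC, &t0);
    memset(p, 0xAA, SIZE);               /* touch every page through the mapping */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("mmap'd shm memset: %.3fs\n", elapsed(t0, t1));

    munmap(p, SIZE);
    close(fd);
    shm_unlink(SHM_NAME);

    /* Plain write() to a file on tmpfs, for comparison */
    int tfd = open("/dev/shm/speed-test-file", O_CREAT | O_RDWR | O_TRUNC, 0600);
    if (tfd < 0) { perror("open"); return 1; }
    char *buf = malloc(SIZE);
    if (!buf) { perror("malloc"); return 1; }
    memset(buf, 0xAA, SIZE);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    if (write(tfd, buf, SIZE) != SIZE) perror("write");
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("tmpfs write():     %.3fs\n", elapsed(t0, t1));

    close(tfd);
    unlink("/dev/shm/speed-test-file");
    free(buf);
    return 0;
}

Run it a few times on the actual target machine; the relative numbers there are what matter.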

Cheers!
Sean

brlcad
+1  A: 

By "Shared memory" you mean System V shared memory, right?

I think Linux mmaps a hidden tmpfs when you use this, so it's effectively the same as mmapping a file on tmpfs.

Doing file I/O (read/write) on tmpfs is going to carry a penalty over plain memory access... mostly (there are special cases where it might make sense, such as handling more than 4G of data in a 32-bit process).
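
For reference, a SysV segment is just a few calls (rough sketch, arbitrary key and size); once shmat returns, you are touching the same kind of tmpfs-backed pages that mmapping a file on /dev/shm gives you:

/* System V shared memory sketch; the key and size are arbitrary.
 * On Linux the segment lives on the kernel-internal tmpfs mount. */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    key_t key = 0x1234;                  /* example key, pick your own */
    int id = shmget(key, 4096, IPC_CREAT | 0600);
    if (id < 0) { perror("shmget"); return 1; }

    char *p = shmat(id, NULL, 0);        /* attach into our address space */
    if (p == (char *)-1) { perror("shmat"); return 1; }

    strcpy(p, "hello from SysV shm");    /* plain memory access from here on */
    printf("%s\n", p);

    shmdt(p);                            /* detach... */
    shmctl(id, IPC_RMID, NULL);          /* ...and remove the segment */
    return 0;
}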

MarkR
How so? If mmap uses shared memory, shouldn't the performance be the same?
SyRenity
+4  A: 

Read about tmpfs here. The following is copied from that article, explaining the relation between shared memory and tmpfs in particular.

1) There is always a kernel internal mount which you will not see at
   all. This is used for shared anonymous mappings and SYSV shared
   memory. 

   This mount does not depend on CONFIG_TMPFS. If CONFIG_TMPFS is not
   set, the user visible part of tmpfs is not built, but the internal
   mechanisms are always present.

2) glibc 2.2 and above expects tmpfs to be mounted at /dev/shm for
   POSIX shared memory (shm_open, shm_unlink). Adding the following
   line to /etc/fstab should take care of this:

    tmpfs /dev/shm tmpfs defaults 0 0

   Remember to create the directory that you intend to mount tmpfs on
   if necessary (/dev/shm is automagically created if you use devfs).

   This mount is _not_ needed for SYSV shared memory. The internal
   mount is used for that. (In the 2.3 kernel versions it was
   necessary to mount the predecessor of tmpfs (shm fs) to use SYSV
   shared memory)

So, when you actually use POSIX shared memory (which I have used before, too), glibc will create a file under /dev/shm, which is used to share data between the applications. The file descriptor it returns refers to that file, and you can pass it to mmap to map that file into memory, just as you can with any "real" file. The techniques you listed are thus complementary, not competing: tmpfs is just the file system that provides in-memory files as an implementation technique for glibc.
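
A minimal sketch of that sequence (the name /my-example is made up; link with -lrt if your glibc is old enough to need it):

/* shm_open creates a file under /dev/shm; the descriptor is then
 * mmap'd exactly like an ordinary file would be. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = shm_open("/my-example", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

    /* Map the in-memory "file" just as you would map a file on disk. */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "visible to any process that maps /dev/shm/my-example");
    printf("%s\n", p);

    munmap(p, 4096);
    close(fd);
    /* shm_unlink("/my-example"); left out so you can see the file in /dev/shm */
    return 0;
}

While the name exists (i.e. until you shm_unlink it), ls /dev/shm shows the backing file.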

As an example, here is a process on my box that has currently registered such a shared-memory object:

# pwd
/dev/shm
# ls -lh
total 76K
-r-------- 1 js js 65M 24 May 16:37 pulse-shm-1802989683
#
Johannes Schaub - litb
So the speed is the same? And how do mmap-opened files compare to files opened the usual way through glibc?
SyRenity