tags:
views: 682
answers: 8

I've got some huge files I need to parse, and people have been recommending mmap because this should avoid having to read the entire file into memory.

But looking at top it does look like I'm opening the entire file into memory, so I think I must be doing something wrong (top shows >2.1 GB).

This is a code snippet that shows what I'm doing.

Thanks

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/mman.h>
int main (int argc, char *argv[]) {
  struct stat sb;
  char *p;
  //open file descriptor
  int fd = open (argv[1], O_RDONLY);
  if (fd == -1) {
    perror ("open");
    return 1;
  }
  //stat the file to get its size
  if (fstat (fd, &sb) == -1) {
    perror ("fstat");
    return 1;
  }
  //do the actual mmap, and keep a pointer to the first byte
  p = (char *) mmap (0, sb.st_size, PROT_READ, MAP_SHARED, fd, 0);
  //something went wrong
  if (p == MAP_FAILED) {
    perror ("mmap");
    return 1;
  }
  //let's just count the number of lines; the mapping is not
  //NUL-terminated, so stay within st_size bytes
  size_t numlines = 0;
  for (off_t i = 0; i < sb.st_size; i++)
    if (p[i] == '\n')
      numlines++;
  fprintf (stderr, "numlines: %zu\n", numlines);
  //unmap it
  if (munmap (p, sb.st_size) == -1) {
    perror ("munmap");
    return 1;
  }
  if (close (fd) == -1) {
    perror ("close");
    return 1;
  }
  return 0;
}
+16  A: 

No, what you're doing is mapping the file into memory. This is different to actually reading the file into memory.

Were you to read it in, you would have to transfer the entire contents into memory. By mapping it, you let the operating system handle it. If you attempt to read or write to a location in that memory area, the OS will load the relevant section for you first. It will not load the entire file unless the entire file is needed.

That is where you get your performance gain. If you map the entire file but only change one byte then unmap it, you'll find that there's not much disk I/O at all.

Of course, if you touch every byte in the file, then yes, it will all be loaded at some point but not necessarily in physical RAM all at once. But that's the case even if you load the entire file up front. The OS will swap out parts of your data if there's not enough physical memory to contain it all, along with that of the other processes in the system.

The main advantages of memory mapping are:

  • You defer reading the file sections until they're needed (and, if they're never needed, they don't get loaded). So there's no big upfront cost as you load the entire file; it amortises the cost of loading.
  • Writes are automatic: you don't have to write out every byte. Just unmap it and the OS will write out the changed sections. This also happens when the memory is swapped out (in low physical-memory situations), since your buffer is simply a window onto the file.

Keep in mind that there is most likely a disconnect between your address space usage and your physical memory usage. You can allocate an address space of 4G (ideally, though there may be OS, BIOS or hardware limitations) in a 32-bit machine with only 1G of RAM. The OS handles the paging to and from disk.

And to answer your further request for clarification:

Just to clarify. So If I need the entire file, mmap will actually load the entire file?

Yes, but it may not be in physical memory all at once. The OS will swap out bits back to the filesystem in order to bring in new bits.

But it will also do that if you've read the entire file in manually. The difference between those two situations is as follows.

With the file read into memory manually, the OS will swap parts of your address space (which may or may not include the data) out to the swap file, and you will need to manually rewrite the file when you're finished with it.

With memory mapping, you have effectively told it to use the original file as an extra swap area for that file/memory only. And, when data is written to that swap area, it affects the actual file immediately. So no having to manually rewrite anything when you're done and no affecting the normal swap (usually).

It really is just a window to the file:

                        [diagram: the memory-mapped region as a window onto the on-disk file]

paxdiablo
Just to clarify. So If I need the entire file, mmap will actually load the entire file?
monkeyking
Yes, see the update.
paxdiablo
A: 

The system will certainly try to put all your data in physical memory. What you will conserve is swap.

bmargulies
Wrong. The VM will use RAM to make the file available, but it will be swapped out as soon as there's some memory pressure. It's almost exactly like using RAM as a cache for the file.
Javier
Wrong. It will never use swap space for a read-only mapping. It will do I/O to swap it in, but you won't use space.
bmargulies
+1  A: 

top has many memory-related columns. Most of them are based on the size of the address space mapped into the process, including any shared libraries, swapped-out RAM, and mmapped space.

Check the RES column: this is the physical RAM currently in use. I think (but am not sure) it includes the RAM used to cache the mmapped file.

Javier
+1  A: 

"Allocate the whole file in memory" conflates two issues. One is how much virtual memory you allocate; the other is which parts of the file are read from disk into memory. Here you are allocating enough address space to contain the whole file, but only the pages that you touch are actually brought in from disk, and only the pages you modify are changed on disk. They will be changed correctly no matter what happens to the process, once you have updated the bytes in the memory that mmap allocated for you.

You can allocate less memory by mapping only a section of the file at a time, using the "size" and "offset" parameters of mmap. Then you have to manage a window into the file yourself, mapping and unmapping, perhaps moving the window through the file.

Allocating a big chunk of address space takes appreciable time and can introduce an unexpected delay into the application. If your process is already memory-intensive, the virtual address space may have become fragmented, and it may be impossible to find a big enough chunk for a large file at the time you ask. It may therefore be necessary to do the mapping as early as possible, or to use some strategy to keep a large enough chunk of memory available until you need it.

However, seeing as you specify that you need to parse the file, why not avoid this entirely by organizing your parser to operate on a stream of data? Then the most you will need is some look-ahead and some history, instead of needing to map discrete chunks of the file into memory.
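The windowed approach mentioned above can be sketched roughly as follows (the function name and window size are my own; note that mmap requires the offset argument to be a multiple of the page size, so the window size must be a page-size multiple too):

```c
/* Sketch: count '\n' bytes by mapping fixed-size windows of the file
 * one at a time instead of the whole file at once. */
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* 'window' must be a multiple of the page size so that each mmap
 * offset stays page-aligned. */
static long count_newlines_windowed(const char *path, size_t window) {
    struct stat sb;
    int fd = open(path, O_RDONLY);
    if (fd == -1 || fstat(fd, &sb) == -1) return -1;

    long total = 0;
    for (off_t off = 0; off < sb.st_size; off += window) {
        size_t len = (size_t)(sb.st_size - off);
        if (len > window) len = window;   /* last, possibly short, window */

        char *p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, off);
        if (p == MAP_FAILED) { close(fd); return -1; }
        for (size_t i = 0; i < len; i++)
            if (p[i] == '\n')
                total++;
        munmap(p, len);
    }
    close(fd);
    return total;
}
```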

Permaquid
+2  A: 

You may have been offered the wrong advice.

Memory-mapped files (mmap) will use more and more memory as you parse through them. When physical memory becomes low, the kernel will unmap sections of the file from physical memory based on its LRU (least recently used) algorithm. But the LRU is global: it may also force other processes to swap pages to disk and shrink the disk cache. This can have a severely negative effect on the performance of other processes and the system as a whole.

If you are linearly reading through files, as when counting the number of lines, mmap is a bad choice, as it will fill physical memory before releasing memory back to the system. It would be better to use traditional I/O methods which stream, or read a block at a time; that way memory can be released immediately afterwards.

If you are randomly accessing a file, mmap is an okay choice. It's not optimal, since you are still relying on the kernel's general LRU algorithm, but it's faster than writing your own caching mechanism.

In general, I would never recommend anyone use mmap, except for some extreme performance edge cases - like accessing the file from multiple processes or threads at the same time, or when the file is small in relation to the amount of free available memory.

tgiphil
Meh. You can do about 10 tree lookups using mmap in the time it takes to pread a B+tree structure block by block.
Zan Lynx
Not necessarily true. The performance of the first read I/O will be nearly identical (for all practical purposes) between mmap and pread: both have to read the data from the media. The issue is with subsequent reads. mmap relies on the kernel's memory-eviction LRU algorithm to decide which pages to unmap, while with pread the I/O subsystem decides which blocks to remove from the cache (if any). Neither approach is highly efficient in terms of releasing unused memory resources, so an application relying on mmap may reduce the performance and efficiency of the entire system by starving it of memory.
tgiphil
A: 

You need to specify a size smaller than the total size of the file in the mmap call, if you don't want the entire file mapped into memory at once. Using the offset parameter, and a smaller size, you can map in "windows" of the larger file, one piece at a time.

If your parsing is a single pass through the file, with minimal lookback or look-forward, then you won't actually gain anything by using mmap instead of standard library buffered I/O. In the example you gave of counting the newlines in the file, it'd be just as fast to do that with fread(). I assume that your actual parsing is more complex, though.

If you need to read from more than one part of the file at a time, you'll have to manage multiple mmap regions, which can quickly get complicated.
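For reference, the fread() version of the newline count mentioned above might look like this (a minimal sketch; the 64 KiB buffer size is arbitrary):

```c
/* Sketch: one sequential pass with buffered reads; no mapping needed. */
#include <stdio.h>

static long count_newlines_fread(const char *path) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;

    char buf[1 << 16];   /* 64 KiB read buffer; size is arbitrary */
    long n = 0;
    size_t got;
    while ((got = fread(buf, 1, sizeof buf, f)) > 0)
        for (size_t i = 0; i < got; i++)
            if (buf[i] == '\n')
                n++;
    fclose(f);
    return n;
}
```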

Mark Bessey
A: 

A little off topic.

I don't quite agree with Mark's answer. Actually, mmap is faster than fread.

Besides taking advantage of the system's disk buffer, fread also has an internal buffer, and on top of that the data is copied into the user-supplied buffer on each call.

By contrast, mmap just returns a pointer into the system's buffer, so two memory copies are saved.

But using mmap is a little dangerous. You must make sure the pointer never goes past the end of the file, or you will get a segmentation fault, whereas in that case fread merely returns zero.

Iamamac
I actually have done benchmarking that shows that (on Mac OS X, anyway) there's nearly no difference in throughput between windowed mmap and fread for straight-through reading. Yes, using the high-level library, the data does get copied (up to three times), but the time to copy the data is negligible compared to the actual I/O time. I usually use the highest-level interface that's appropriate.
Mark Bessey
@Mark: Agreed, when the file is read for the first time. However, if the program reads the file more than once, or the program runs repeatedly (a web server, for example), there will be a huge difference. (Changing `fread` to `mmap` made the whole program 50% faster in one of my experiments.)
Iamamac
+1  A: 

You can also use fadvise(2) (and madvise(2); see also posix_fadvise and posix_madvise) to mark an mmapped file (or parts of it) as read-once.

#include <sys/mman.h> 

int madvise(void *start, size_t length, int advice);

The advice is passed in the advice parameter, which can be

MADV_SEQUENTIAL 

Expect page references in sequential order. (Hence, pages in the given range can be aggressively read ahead, and may be freed soon after they are accessed.)

Portability: posix_madvise and posix_fadvise are part of the Advanced Realtime option of IEEE Std 1003.1-2004; the corresponding constants are POSIX_MADV_SEQUENTIAL and POSIX_FADV_SEQUENTIAL.

osgx