One strategy that I thought of myself is allocating 5 megabytes of memory (or whatever amount you feel is necessary) at program startup.

Then, whenever the program's malloc() returns NULL, you free the 5 megabytes and call malloc() again, which will succeed and let the program continue running.
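
Roughly, I imagine it looking something like this (just a sketch; the names are made up):

#include <stdlib.h>

static void *emergency_reserve;          /* the 5 MB "rainy day" block */

void reserve_init(void) {
    emergency_reserve = malloc(5 * 1024 * 1024);
}

void *my_malloc(size_t size) {
    void *p = malloc(size);
    if (p == NULL && emergency_reserve != NULL) {
        /* Out of memory: give the reserve back and try once more. */
        free(emergency_reserve);
        emergency_reserve = NULL;
        p = malloc(size);
    }
    return p;
}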

What do you think about this strategy?

And what other strategies do you know?

Thanks, Boda Cydo.

+12  A: 

Handle malloc failures by exiting gracefully. With modern operating systems, pagefiles, etc., you should never pre-emptively brace for memory failure; just exit gracefully. It is unlikely you will ever encounter out-of-memory errors unless you have an algorithmic problem.

Also, allocating 5MB for no reason at startup is insane.

While allocating 5MB for no reason isn't a good idea, having a usable reserved block of memory to assist in that graceful exit isn't a bad idea. MS reserves a small static buffer for low-level error reporting; this gives them a location that USER32 can use to format a message and display it to the user. By that point, recovery is not an option, but it may help to identify and prevent the issue in the future. Something similar in the application might be useful.
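
Something along these lines, purely as an illustration:

#include <stdio.h>

static char oom_msg[512];   /* reserved up front; nothing is allocated on the failure path */

void report_oom(const char *where) {
    snprintf(oom_msg, sizeof oom_msg, "Out of memory in %s\n", where);
    fputs(oom_msg, stderr);
}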
Ron Ruble
"It is unlikely you will ever encounter out of memory errors unless you have an algorithmic problem." This sounds a bit like "nobody needs more than 640k RAM". If you're working on a 32bit machine, and your process needs 1 GB of data, you can easily run into a situation where you can't allocate 5 MB due to memory fragmentation.
nikie
@nikie, that's just not true. http://en.wikipedia.org/wiki/Virtual_memory
You might add that part of exiting gracefully should be ensuring that data on disk is in a consistent state (for example if you were in the middle of writing a file, or if your application has a database open), and possibly writing out a file that can be used to recover the current session's data when the application is restarted.
R..
@evilclown - I believe that +nikie+ was talking about fragmentation of the heap's free list, which can happen when you malloc and free lots of arbitrarily-sized data. Since C uses pointers, it is very limited in how it can coalesce the free space. You may have plenty of free memory, just not enough for a particular contiguous allocation.
kdgregory
I think modern virtual memory/paging algorithms conceal fragmentation rather successfully. For example, void *x may point to 0x400123F0 and be paged to disk. Later, it may be swapped into RAM at address 0x5383120F, the pointer being translated by the OS so the program is oblivious. Should realloc not be able to get a contiguous block, a new page may be created, and 0x400123F0 will point to the new contiguous block transparently. A more challenging problem would be to make malloc return NULL.
@evilclown: memory fragmentation has nothing to do with physical memory, but fragmentation of the **virtual** address space. You cannot move objects to different virtual addresses because then the pointers to them would no longer be valid.
R..
+2  A: 

As a method of testing that you handle out of memory situations gracefully, this can be a reasonably useful technique.

Under any other circumstance, it sounds useless at best. You're causing the out of memory situation to happen, then fixing the problem by freeing memory you didn't need to start with.

Jerry Coffin
+1  A: 

It actually depends on the policy you'd like to implement, meaning: what is the expected behavior of your program when it's out of memory?

A great solution would be to allocate memory during initialization only and never during runtime. In that case you'll never run out of memory once the program has managed to start.
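
For example (a rough sketch with invented names):

#include <stdio.h>
#include <stdlib.h>

struct app_buffers {
    char *input;     /* sized for the worst case, decided at design time */
    char *output;
};

int app_init(struct app_buffers *b, size_t max_in, size_t max_out) {
    b->input  = malloc(max_in);
    b->output = malloc(max_out);
    if (b->input == NULL || b->output == NULL) {
        fprintf(stderr, "not enough memory to start\n");
        return -1;   /* refuse to start; nothing else is allocated later */
    }
    return 0;
}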

Another could be freeing resources when you hit the memory limit. That would be difficult to implement and test.

Keep in mind that when you are getting NULL from malloc it means both physical and virtual memory have no more free space, meaning your program is swapping all the time, making it slow and the computer unresponsive.

You actually need to make sure (by estimating in advance or by checking the amount of memory at runtime) that the amount of free memory the computer has is enough for your program.

Drakosha
`while (malloc(10000000));` This program will quickly lead to `malloc` failing but will not swap at all, because most of the allocated space consists of references to the zero-page, and occupies no physical memory.
R..
+1  A: 

Generally the purpose of freeing the memory is so that you have enough to report the error before you terminate the program.

If you are just going to keep running, there is no point in preallocating the emergency reserve.
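
For example (sketch only; the names are hypothetical):

#include <stdio.h>
#include <stdlib.h>

static void *report_reserve;

void init_report_reserve(void) {
    report_reserve = malloc(64 * 1024);   /* enough to build the error report */
}

void *xmalloc(size_t size) {
    void *p = malloc(size);
    if (p == NULL) {
        free(report_reserve);             /* make room for the error path */
        report_reserve = NULL;
        fprintf(stderr, "out of memory, shutting down\n");
        exit(EXIT_FAILURE);
    }
    return p;
}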

EvilTeach
A: 

Most modern OSes in their default configuration allow memory overcommit, so your program won't get NULL from malloc() at all, or at least not until it has somehow (by error, I guess) exhausted all available address space (not memory). And then it writes to some perfectly legal memory location, gets a page fault, there is no memory page left in the backing store and BANG (SIGBUS) - you're dead, and there is no good way out.

So just forget about it, you can't handle it.

blaze
While this is usually true, any good OS can be configured not to overcommit, and will be configured that way for any system that requires maximal reliability (e.g. any realtime system - would you really want overcommit on your heart monitor??).
R..
Would you really want malloc() in it at all? :)
blaze
+1  A: 

Yeah, this doesn't work in practice. First, for a technical reason: a typical low-fragmentation heap implementation doesn't make large free blocks available for small allocations.

But the real problem is that you don't know why you ran out of virtual memory space. And if you don't know why, then there's nothing you can do to prevent that extra memory from being consumed very rapidly as well, and your program will still crash with OOM. Which is very likely to happen: you've already consumed close to two gigabytes, so that extra 5 MB is a drop of water on a hot plate.

Any kind of scheme that switches the app into 'emergency mode' is very impractical. You'll have to abort running code so that you can stop, say, loading an enormous data file. That requires an exception. Now you're back to what you already had before: std::bad_alloc.

Hans Passant
Are you sure about the first paragraph? I would expect a good malloc implementation to try all possible ways of satisfying the request before returning failure. It would be pretty ridiculous for `malloc(1)` to fail but `malloc(100000)` to succeed... If nothing else, `malloc` could simply internally do `while (failed) { size*=2; retry(); }`.
R..
Yeah, nothing good happens when it fragments the heap, trying to delay the inevitable. Then it lets the app deal with a mess that's *very* hard to recover from.
Hans Passant
+1  A: 

"try-again-later". Just because you're OOM now, doesn't mean you will be later when the system is less busy.

#include <stdlib.h>   /* malloc */
#include <unistd.h>   /* sleep */

/* Retry the allocation for up to 100 seconds before giving up. */
void *smalloc(size_t size) {
    for (int i = 0; i < 100; i++) {
        void *p = malloc(size);
        if (p)
            return p;
        sleep(1);     /* wait and hope the system frees some memory */
    }
    return NULL;
}

You should of course think hard about where you employ such a strategy, as it is quite hideous, but it has saved some of our systems in various cases.

nos
Success or failure of malloc usually has more to do with the current process's address space than with the machine's state. In most cases, your loop will be an infinite one. And I doubt anyone will appreciate the application completely freezing rather than exiting and (hopefully) saving recovery data.
R..
This depends a lot on your system (e.g. an embedded Linux device with no swap and overcommit disabled is different from a Windows 7 workstation). And e.g. while one thread is off doing heavy work you might be temporarily out of memory/address space. Not all applications are user-supervised, so there's no user to annoy with periodic "freezes".
nos
+1  A: 

For the last few years, the (embedded) software I have been working with generally does not permit the use of malloc(). The sole exception to this is that it is permissible during the initialization phase, but once it is decided that no more memory allocations are allowed, all future calls to malloc() fail. As memory may become fragmented due to malloc()/free() it becomes difficult at best in many cases to prove that future calls to malloc() will not fail.

Such a scenario might not apply to your case. However, knowing why malloc() is failing can be useful. The following technique, which we use in our code since malloc() is not generally available, might (or might not) be applicable to your scenario.

We tend to rely upon memory pools. The memory for each pool is allocated during the transient startup phase. Once we have the pools, we get an entry from the pool when we need it and release it back to the pool when we are done. Each pool is configurable and is usually reserved for a particular object type. We can track the usage of each over time. If we run out of pool entries, we can find out why. If we don't, we have the option of making our pool smaller and saving some resources.
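
A stripped-down illustration of the idea (our real pools are configurable per object type; the names here are invented):

#include <stdlib.h>

#define POOL_ENTRIES 64

struct pool {
    void  *free_list[POOL_ENTRIES];   /* unused entries */
    size_t free_count;
};

/* Called once during the startup phase. */
int pool_init(struct pool *p, size_t entry_size) {
    p->free_count = 0;
    for (size_t i = 0; i < POOL_ENTRIES; i++) {
        void *entry = malloc(entry_size);
        if (entry == NULL)
            return -1;                /* fail at startup, not at runtime */
        p->free_list[p->free_count++] = entry;
    }
    return 0;
}

void *pool_get(struct pool *p) {
    if (p->free_count == 0)
        return NULL;                  /* pool exhausted: time to find out why */
    return p->free_list[--p->free_count];
}

void pool_put(struct pool *p, void *entry) {
    p->free_list[p->free_count++] = entry;
}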

Hope this helps.

Sparky
+1 for covering the only way to write truly robust applications.
R..
@Sparky: A useful memory-allocation approach in embedded systems is LIFO alloc/free; freeing a pointer returned by an allocation returns that memory *and everything allocated thereafter*. If one's heap is pre-allocated (rather than sharing memory with the stack) it may be readily adapted to allocate memory from the top and the bottom, offering two LIFO allocators for the same shared pool.
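Roughly like this (illustrative only; alignment ignored for brevity):

#include <stddef.h>

static unsigned char heap[16 * 1024];
static size_t bottom = 0;                /* grows upward   */
static size_t top    = sizeof heap;      /* grows downward */

void *alloc_bottom(size_t n) {
    if (bottom + n > top) return NULL;   /* the two ends would collide */
    void *p = &heap[bottom];
    bottom += n;
    return p;
}

void free_bottom(void *p) {
    /* LIFO free: releases p and everything allocated after it */
    bottom = (size_t)((unsigned char *)p - heap);
}

/* alloc_top()/free_top() would be the mirror image, working down from 'top'. */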
supercat
A: 

I want to second the sentiment that the 5MB pre-allocation approach is "insane", but for another reason: it's subject to race conditions. If the cause of memory exhaustion is within your program (virtual address space exhausted), another thread could claim the 5MB after you free it but before you get to use it. If the cause of memory exhaustion is lack of physical resources on the machine due to other processes using too much memory, those other processes could claim the 5MB after you free it (if the malloc implementation returns the space to the system).

Some applications, like a music or movie player, would be perfectly justified just exiting/crashing on allocation failures - they're managing little if any modifiable data. On the other hand, I believe any application that is being used to modify potentially-valuable data needs to have a way to (1) ensure that data already on disk is left in a consistent, non-corrupted state, and (2) write out a recovery journal of some sort so that, on subsequent invocations, the user can recover any data lost when the application was forced to close.

As we've seen in the first paragraph, due to race conditions your "malloc 5MB and free it" approach does not work. Ideally, the code to synchronize data and write recovery information would be completely allocation-free; if your program is well-designed, it's probably naturally allocation-free. One possible approach if you know you will need allocations at this stage is to implement your own allocator that works in a small static buffer/pool, and use it during allocation-failure shutdown.
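
For instance, a tiny bump allocator over a static buffer (purely illustrative; alignment ignored for brevity):

#include <stddef.h>

static unsigned char shutdown_buf[4096];   /* reserved at load time, never obtained from malloc */
static size_t shutdown_used;

/* Allocation-free allocator used only on the emergency-shutdown path. */
void *shutdown_alloc(size_t n) {
    if (n > sizeof shutdown_buf - shutdown_used)
        return NULL;
    void *p = &shutdown_buf[shutdown_used];
    shutdown_used += n;
    return p;
}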

R..
@R..: Agree. I can see some usefulness to pre-allocating the 5MB if one can use the memory directly (without having to release it first) and if 5MB is the actual amount of data that will be needed for a safe shutdown. That doesn't sound like the case here.
supercat