I am writing some guidelines for the company and I need to answer some tough questions. This one is quite difficult.

The options seem to be:

  1. Don't track at all. Make sure that objects are allocated using new, which throws an exception when allocation fails. The application will die, and that is not a big deal. PRO: the code can usually stay very clean.

  2. Track memory allocation failures and report them like any other error (such as a file access error).

Honestly, I would have to write much more code if we go with option 2. For example, many std::string operations involve memory allocation, such as:

std::string str1, str2;
str1 = str2;
str1 += str2;
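Under option 2, even that snippet would have to become something like this sketch:

#include <new>
#include <string>

bool copyAndAppend(const std::string &src, std::string &dst)
{
    try {
        dst = src;    // assignment may allocate and throw std::bad_alloc
        dst += src;   // append may allocate and throw std::bad_alloc
    } catch (const std::bad_alloc &) {
        return false; // report the failure like any other error
    }
    return true;
}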

Our software will always run on major platforms, not embedded ones. Somehow I think that option 1 is the way to go. What's your opinion?

+5  A: 

Unless you are doing some pretty impressive allocations, you're unlikely to hit an allocation failure in a 32-bit virtual memory space. (And even less likely on a 64-bit system.) It's probably better to just die if you run out of memory. In the rare case that something goes wrong and you do run out of memory, you're unlikely to be able to report an error anyway. (Unless, of course, you specifically put aside a reserve of memory beforehand to free in case of an allocation failure.)

One possibility - allocate a sizable chunk of memory for emergency use only, then catch out of memory exceptions at a fairly high level in your app, free the emergency memory, log what happened, and then die.
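A sketch of that approach, with an arbitrary 16 MB reserve and made-up function names:

#include <cstdlib>
#include <iostream>
#include <new>

void runApplication();  // stands in for the real program

// A sizable chunk set aside for emergency use only; 16 MB is arbitrary.
static char *emergencyReserve = new char[16 * 1024 * 1024];

int main()
{
    try {
        runApplication();
    } catch (const std::bad_alloc &) {
        // Free the reserve so the logging below has memory to work with.
        delete[] emergencyReserve;
        emergencyReserve = nullptr;
        std::cerr << "Out of memory - exiting.\n";
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}

void runApplication() { /* the application proper goes here */ }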

Eclipse
I like that answer, particularly the second paragraph.
David Thornley
+8  A: 

In general, don't check for memory allocation failures on small allocations. It's invariably more trouble than it's worth, it's hard to get right anyway, and most of the time there's nothing you can do about the failure. For very large allocations, where you might actually be able to do something about a failure, it can be worth considering things on a case-by-case basis.

This is well covered by C++ Gotchas: Avoiding Common Problems in Coding and Design. In particular, see Gotcha #61: Checking for Allocation Failure:

Some questions should just not be asked, and whether a particular memory allocation has succeeded is one of them.

[...] Error-checking code that's this involved is rarely entirely correct initially and is almost never correct after a period of maintenance. A better approach is not to check at all:

String **array = new String *[n];
for( String **p = array; p < array+n; ++p )
  *p = new String;

This code is shorter, clearer, faster, and correct. The standard behavior of new is to throw a bad_alloc exception in the event of allocation failure. This allows us to encapsulate error-handling code for allocation failure away from the rest of the program, resulting in a cleaner, clearer, and generally more efficient design.

John Feminella
See http://www.ddj.com/cpp/184401393 for a better article on this subject.
Greg Rogers
A: 

There's nothing wrong with just dying when allocation fails. Quite probably, everything else after that would fail anyway.

Just verify that you really do get an exception when allocation fails, and not a crash later from using a NULL pointer.

Javier
In C++, new throws an exception unless called with std::nothrow. Do a global search for "nothrow", "malloc", "realloc", and "calloc", and you should be good to go.
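For reference, the two forms behave like this (a tiny sketch):

#include <cstddef>
#include <new>

void demo(std::size_t n)
{
    int *a = new int[n];                 // throws std::bad_alloc on failure
    int *b = new (std::nothrow) int[n];  // returns a null pointer instead
    if (b == nullptr) {
        // only the nothrow form needs this explicit check
    }
    delete[] a;
    delete[] b;  // deleting a null pointer is harmless
}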
David Thornley
+1  A: 

If you catch your out-of-memory exception, you can get the stack trace where the failure occurred, which 99% of the time is all you need to diagnose the problem. That's then your log.

You may wish to set aside a 1MB buffer or so that you can use for generating this information, either by using it directly or by releasing it so memory becomes available while creating the log.

Andrew Grant
+9  A: 

I do trap memory allocations, but only occasionally.

In particular, I will occasionally trap a memory allocation where:

  • I know the amount of memory being allocated is very large
  • There is something I can do about it if the allocation fails (i.e., gracefully handle the condition with a notice to the user, etc.; see the sketch below)

That being said, those two things are pretty rare - usually I just end up letting the program die from the exception.
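A sketch of the trap-it case; the dataset-loading function and the notifyUser hook are stand-ins, not anything from the original answer:

#include <cstddef>
#include <iostream>
#include <new>
#include <vector>

// Hypothetical notification hook; the real mechanism depends on the app.
void notifyUser(const char *message) { std::cerr << message << '\n'; }

bool loadHugeDataset(std::size_t count, std::vector<double> &out)
{
    try {
        out.resize(count);  // known in advance to be a very large allocation
    } catch (const std::bad_alloc &) {
        // The one case where we can do something: tell the user and carry on.
        notifyUser("Not enough memory to load the dataset.");
        return false;
    }
    return true;
}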

Reed Copsey
+2  A: 

Yes I do, but it depends on your application. For mission-critical applications, a lot of server apps, and services running on a client system, you absolutely should not crash in OOM cases - out-of-memory conditions can arise temporarily, and the user would expect your code to keep running after the problem clears up.

Imagine if one memory hog starts running on your system, and suddenly your shell, your web browser, your applications, etc., fail because of it. That would not be a good user experience at all.

On the other hand, if yours is a one-off tool where OOM would mean you can't accomplish the one thing the user is asking of you, failing is probably ok.

Regardless, even for the unhandled case you should add a top-level catch that can do some logging, in case the OOM is actually caused by your code.

Michael
Actually, your "bad user experience" is a reality for all modern Linux distributions. If kernel memory fills up, the "Out-of-Memory Killer" or "OOM Killer" will start killing processes. The kernel does this, and there's absolutely nothing you can do about it.
Tom
One particular case of the OOM Killer being problematic is running a max-memory multicore machine in a 32-bit OS. The kernel only gets 1GB of memory, and the data structures necessary to manage 15GB of RAM can easily fill that up. The result? vim/screen/bash processes killed off left and right!
Tom
Also, on Linux at least any uncaught exception will cause an abort() with a core dump. This can be used to identify the OOM condition, and is far superior to logging due to the ability to do a complete post-mortem program analysis from the core dump.
Tom
+2  A: 

On a modern OS, your computer will freeze and probably fall over long, long before you actually run out of VM. It's pointless testing for it.

anon
@leeor_net Excellent counter example! Why not make it an answer?
anon
Moreover, at least a few modern OSes (some versions of Linux) overcommit memory and are just as likely to kill your process outright as they are to inform your program of the problem via the usual channels, i.e. bad_alloc for new and a null pointer for malloc. Yet another reason that these kinds of checks are completely pointless in the age of virtual memory. (Insert usual caveats about embedded platforms here.)
"these kinds of checks are completely pointless" -- oversimplifications are rarely true and this is one of theses cases. If you try to allocate a large continuous region at once, you may fairly well get a bad_alloc even on a modern Linux machine. Happens with my code right now.
ypnos
+1  A: 

In the 99.9% case, all of my C++ apps will simply die on a failed allocation. Once you're out of memory, really there's nothing you can do unless your application is specifically designed to handle and correct out of memory conditions.

The .1% case is for allocations that are 1) known to be very large, with a significant chance of failure, and 2) made in a situation where an appropriate fallback exists. This is very rare, and it's been years since I tried something like this (I wouldn't do it again).

JaredPar
A: 

I wouldn't check at the call site, but I would recommend wrapping most of the program in a catch clause to log and hopefully run down errors. A memory allocation failure could indicate a memory leak, a runaway while loop, poor memory management (allocating multiple buffers but only using one at a time), and so on.

Max Lybbert
Better to let the program dump core from the uncaught std::bad_alloc, so you can inspect the entire program state after it exits. It's less "pretty", but far more useful for actually fixing the bug.
Tom
That's a viable solution, too.
Max Lybbert
bad_alloc throws don't always indicate a bug. As an example, I'm developing a map editor. Sometimes with large dimensions a bad_alloc will be thrown. This doesn't indicate a bug, just that the map's dimensions are too large to fit in memory on a user's machine. This is hardly a case where I want the program to crash, and it's very much recoverable using a simple try/catch block when creating a map with the 'new' operator. If it fails, simple: I deallocate any memory left over and continue merrily along. Granted, not all cases of bad_alloc are this straightforward.
leeor_net
+3  A: 

As per another suggestion, I'm making this an actual answer.

A lot of suggestions so far have basically stated that "bad_alloc == bug in your program". On the contrary, bad_alloc throws don't always indicate a bug.

As an example, I'm developing a map editor for a game project I'm working on. Sometimes with large map dimensions a bad_alloc will be thrown. This doesn't indicate a bug, just that the map's dimensions are too large to fit in available memory on a user's machine. This is hardly a case where I want the program to crash, and it's very much recoverable using a simple try/catch block when creating a map with the 'new' operator. If it fails, simple: I deallocate any memory left over and continue merrily along. Granted, not all cases of bad_alloc are this straightforward, so it's important to really understand why a bad_alloc is being thrown.
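A minimal sketch of that recovery pattern, with a stand-in Tile type and createMap function rather than the editor's real code:

#include <cstddef>
#include <new>

struct Tile { int terrain; };  // stand-in for the real per-cell map data

// Returns nullptr when the requested dimensions won't fit in memory;
// the caller can then prompt for smaller dimensions and try again.
Tile *createMap(std::size_t width, std::size_t height)
{
    try {
        return new Tile[width * height];
    } catch (const std::bad_alloc &) {
        // Not a bug - the map is simply too large for this machine.
        return nullptr;
    }
}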

leeor_net