views: 1023
answers: 16

Can the new operator throw an exception in real life?

And if so, do I have any options for handling such an exception apart from killing my application?

Update:

Do any real-world, new-heavy applications check for failure and recover when there is no memory?



+17  A: 

Yes, new can and will throw if allocation fails. This can happen if you run out of memory or you try to allocate a block of memory too large.

You can catch the std::bad_alloc exception and handle it appropriately. Sometimes this makes sense, other times (read: most of the time) it doesn't. If, for example, you were trying to allocate a huge buffer but could work with less space, you could try allocating successively smaller blocks.
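For illustration, here is a minimal sketch of that "successively smaller blocks" idea; the function name, the sizes, and the use of std::vector are my own assumptions, not part of the original answer:

#include <cstddef>
#include <iostream>
#include <new>
#include <vector>

// Sketch: try to obtain a large working buffer, halving the request on each
// std::bad_alloc until a minimum acceptable size is reached.
std::vector<char> allocate_largest(std::size_t desired, std::size_t minimum)
{
    for (std::size_t size = desired; size >= minimum; size /= 2)
    {
        try
        {
            return std::vector<char>(size);  // may throw std::bad_alloc
        }
        catch (const std::bad_alloc&)
        {
            // Not enough memory for this size; retry with half as much.
        }
    }
    throw std::bad_alloc();  // even the minimum could not be satisfied
}

int main()
{
    try
    {
        std::vector<char> buffer = allocate_largest(1024u * 1024 * 1024, 16u * 1024 * 1024);
        std::cout << "Working with a buffer of " << buffer.size() << " bytes\n";
    }
    catch (const std::bad_alloc&)
    {
        std::cerr << "Could not allocate even the minimum buffer\n";
    }
}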

James McNellis
Is this situation REAL? Can I meet it in real life? Even fclose() can fail, but no one checks its return code. (It will fail on a disconnected NFS mount and the information will not be saved.)
osgx
When you write C, do you check to see if malloc returns NULL? If not, I doubt I can convince you to watch for exceptions from new.
ojrac
You should catch `std::bad_alloc` anywhere that you can reasonably recover from it. In most cases, there's not a whole lot you can do, and so your best bet might be to catch it in `main` and at least give the user a nice friendly error message or log the failure (many experts, including Herb Sutter, agree with this: http://www.gotw.ca/publications/mill16.htm).
James McNellis
@ojrac, in C I have a wrapper function or even a macro Malloc, which tests EVERY result of malloc for not-NULL. And if it is NULL, I have a single point of failure. I will do `{perror("my programme");exit(-42);}`
osgx
@osgx: In GNU programs, the convention is to call that (`malloc` successfully or die) function `xmalloc`.
Chris Jester-Young
I've hit std::bad_alloc before. Yes it's real.
kibibu
@kibibu, Thanks! Was it a huge `new()` or rather small and typical one? How much memory was allocated before hitting `bad_alloc`?
osgx
I think the specifics of the behavior depend entirely on what OS and environment you're executing in.
Jeremy Friesner
@osgx, it was one of several hundred thousand small and typical ones. Part of an acoustic pathtracer that traced first and gathered later. I ran out of memory on my machine (which has a paltry 512 Mb) - but I can't remember whether it exhausted virtual memory or just physical.
kibibu
@kibibu: the OS typically won't tell you whether it ran out of physical memory; mostly because you can't ask for it anyway.
MSalters
Sometimes in debug builds it's useful to not even catch it in main; then (at least in gcc) you can get a core file that may or may not have useful information.
Mark B
OMG! You don't check the return value of fclose()?!
NTDLS
@osgx you can achieve the same thing with new with the std::set_new_handler function if you don't like the exception throwing behavior. `void new_handler() { perror("my programme"); exit(-42); }; std::set_new_handler(new_handler);`.
Logan Capaldo
I've done some embedded systems development and I can safely say that if you DON'T check the success of new/malloc operations, some user somewhere is going to find a way to fill the memory of your device, crashing your application if the code lacks proper checking. Not checking the return of functions is BAD BAD practice.
karlphillip
A: 

The new operator will throw a std::bad_alloc exception when you run out of memory (virtual memory, to be precise).

If new throws an exception, it is a serious error:

  • More than the available virtual memory is being allocated (it eventually fails). Rather than exiting the program, you can catch the std::bad_alloc exception and try to reduce the amount of memory you use.
aJ
+1  A: 

The new operator and the new[] operator should throw std::bad_alloc, but this is not always the case, because the behavior can be overridden.

One can use std::set_new_handler, and then something entirely different can happen than throwing std::bad_alloc. The standard requires that the user's handler either make memory available, abort, or throw std::bad_alloc, but of course it may not comply.

Disclaimer: I am not suggesting to do this.
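For concreteness, a rough sketch of what installing such a handler can look like (my own example, not from the answer; the handler just reports and exits, which is one of the behaviours the standard permits):

#include <cstdlib>
#include <iostream>
#include <new>

// A new-handler must make more memory available, throw bad_alloc (or a type
// derived from it), or terminate the program; this one reports and exits.
void out_of_memory()
{
    std::cerr << "Out of memory, terminating\n";
    std::exit(EXIT_FAILURE);
}

int main()
{
    std::set_new_handler(out_of_memory);
    // From now on, a failing 'new' calls out_of_memory() instead of
    // immediately throwing std::bad_alloc.
    char* p = new char[100];
    delete[] p;
}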

Brian R. Bondy
This is what I call bad practice.
George Edison
`std::set_new_handler` in `<new>` is standard C++, §18.4.2.2-3. It's a perfectly reasonable thing to use if you have, for instance, some kind of garbage collection you can do, or you want to log the error. It's not a bad idea to exit the new_handler by `throw bad_alloc`.
Potatoswatter
also - the standard requires that the user's new handler either make memory available, abort, or throw bad_alloc.
Potatoswatter
@Potatoswatter: Cool thanks for the info, updated the answer.
Brian R. Bondy
+5  A: 

On Unix systems, it's customary to run long-running processes with memory limits (using ulimit) so that they don't eat up all of a system's memory. If your program hits that limit, you will get std::bad_alloc.


Update for OP's edit: the most typical case of programs recovering from an out-of-memory condition is in garbage-collected systems, which then perform a GC and continue. Though this sort of on-demand GC is really for last-ditch efforts only; usually, good programs try to GC periodically to reduce stress on the collector.

It's less usual for non-GC programs to recover from out-of-memory issues, but for Internet-facing servers, one way to recover is to simply reject the request that's causing the memory to run out with a "temporary" error. ("First in, first served" strategy.)
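A hedged sketch of that "reject the failing request" pattern; the request handler, its signature, and the error text are hypothetical, purely for illustration:

#include <iostream>
#include <new>
#include <string>

// Hypothetical stand-in for real request processing; it may allocate heavily.
std::string handle_request(const std::string& request)
{
    return "response to " + request;
}

std::string serve(const std::string& request)
{
    try
    {
        return handle_request(request);
    }
    catch (const std::bad_alloc&)
    {
        // This request was too expensive; reject it with a temporary error
        // and keep serving other clients.
        return "503 Service Unavailable: out of memory, please retry later";
    }
}

int main()
{
    std::cout << serve("GET /index.html") << "\n";
}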

Chris Jester-Young
Also GUI applications: if a user action causes memory exhaustion, abandon the current action but not the whole application.
Martin York
A: 

The new operator will throw a std::bad_alloc exception when there is not enough memory available in the pool to fulfill the runtime request.

This can happen because of bad design or when allocated memory is not freed correctly.

Handling such an exception depends on your design; one way is to pause and retry some time later, hoping that more memory has been returned to the pool and the request may succeed.
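For example, a minimal pause-and-retry wrapper might look like the following sketch (my own names and delay; it assumes C++11's <thread> for the sleep):

#include <chrono>
#include <cstddef>
#include <new>
#include <thread>
#include <vector>

// Retry the allocation a few times with a delay, in case other parts of the
// program (or other processes) return memory in the meantime.
std::vector<char> allocate_with_retry(std::size_t bytes, int attempts)
{
    for (int i = 0; i < attempts; ++i)
    {
        try
        {
            return std::vector<char>(bytes);
        }
        catch (const std::bad_alloc&)
        {
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
    }
    throw std::bad_alloc();  // still no memory after the last attempt
}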

YeenFei
It can also happen if your dataset just happens to be large. It wouldn't be unreasonable for a program to be unable to handle a request if, for example, you pipe a 20GB file into stdin in some cases.
Billy ONeal
Piping 20GB via stdin is not a very hard situation. I have done a lot of greps with such sizes :)
osgx
+2  A: 

It depends on the compiler/runtime and on the operator new that you are using (e.g. certain versions of Visual Studio will not throw out of the box, but will instead return a NULL pointer, a la malloc).

You can always catch a std::bad_alloc exception, or explicitly use nothrow new to return NULL instead of throwing. (Also see past StackOverflow posts revolving around the subject.)

Note that operator new, like malloc, will fail when you have run out of memory, out of address space (e.g. 2-3GB in a 32-bit process depending on the OS), out of quota (ulimit was already mentioned) or out of contiguous address space (e.g. fragmented heap.)

vladr
When it fails, what can I do?
osgx
Thanks for pointing out a bug in MSVS to me!
osgx
@osgx: The "bug" was present on Visual C++ 6 (VS98), and on Visual C++ 2003 (but you could set a compiler option to have new behave like the standard wanted it to). It was less a bug than a non-compliant behaviour existing for backward compatibility purposes.
paercebal
@Vlad I imagine it still behaves this way if you compile your code without exception support.
Alexandre Jasmin
A: 

Yes, new can throw std::bad_alloc (a subclass of std::exception), which you may catch.

If you absolutely want to avoid this exception, and instead are ready to test the result of new for a null pointer, you may add a nothrow argument:

#include <new>   // for std::nothrow

T* p = new (std::nothrow) T(...);
if (p == 0)      // nothrow new returns a null pointer instead of throwing
{
    // Do something about the bad allocation!
}
else
{
    // Here you may use p.
}
squelart
I see a lot of code that mistakenly assumes new (without arguments) returns NULL on failure.
George Edison
So must I check for this NULL every time I use new?
osgx
@osgx: Only if you use the nothrow option. Did you even *read* squelart's answer?
Billy ONeal
Yes. I need either to check for `NULL` when using `nothrow`, or to be able to catch `bad_alloc` in any place where I use new? There are thousands of such places in a big program and it can be very hard.
osgx
then only keep one `catch` at the end of `main()` (and at the end of each thread method if you are multi-threaded) and display a big error message "out of memory" before exiting. :)
vladr
There are few legitimate uses of nothrow new. Two that come to mind are when working with legacy code (that assumes new returns null on failure) or when exceptions are prohibited (e.g. in an embedded system).
James McNellis
+6  A: 

You don't need to handle the exception in every single new :) Exceptions can propagate. Design your code so that there are certain points in each "module" where that error is handled.
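A small sketch of that idea; the "module", its job, and the sizes are hypothetical, invented only to show where the single catch goes:

#include <iostream>
#include <new>
#include <vector>

// Stand-in for real work inside the module; it may allocate freely and does
// not catch std::bad_alloc itself.
void run_import_job()
{
    std::vector<int> data(10 * 1000 * 1000);
    (void)data;
}

// The module boundary is the single place where the failure is handled.
bool run_import_module()
{
    try
    {
        run_import_job();
        return true;
    }
    catch (const std::bad_alloc&)
    {
        std::cerr << "Import failed: out of memory\n";
        return false;  // the rest of the application keeps running
    }
}

int main()
{
    return run_import_module() ? 0 : 1;
}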

moogs
+1  A: 

I use Mac OS X, and I've never seen malloc return NULL (which would imply an exception from new in C++). The machine bogs down, does its best to allocate dwindling memory to processes, and finally sends SIGSTOP and invites the user to kill processes rather than have them deal with allocation failure.

However, that's just one platform. CERTAINLY there are platforms where the default allocator does throw. And, as Chris says, ulimit may introduce an artificial constraint so that an exception would be the expected behavior.

Also, there are allocators besides the default one/malloc. If a class overrides operator new, if you use custom arguments to new(…), or if you pass an allocator object into a container, that allocator probably defines its own conditions for throwing bad_alloc.

Potatoswatter
Linux without a ulimit on virtual memory, as far as I know, will allow memory to be allocated with overcommit on malloc/new (mmap/sbrk internally). But when I try to use it, the process (sometimes it can be a random process) will be killed by the Out-Of-Memory killer without any chance of recovery, or of dumping/saving state.
osgx
@osgx: Unfortunately, there's no better way to deal with overcommitment. It's more or less *defined* as suppressing allocation errors, as a feature. Did you try installing a signal handler for that out-of-memory condition?
Potatoswatter
I checked Google and it looks like `/proc/sys/vm/overcommit_memory` might help you turn off overcommitment, if that's what you want.
Potatoswatter
@Potatoswatter: Yes and you will then get alloc failure exceptions. I ran my old Linux laptop that way for a while and it behaves mostly like having the OOM Killer because not many applications handle it.
Zan Lynx
+2  A: 

Yes new will throw an exception if there is no more memory available, but that doesn't mean you should wrap every new in a try ... catch. Only catch the exception if your program can actually do something about it.

If the program cannot do anything to handle that exceptional situation, which is often the case when you run out of memory, there is no use in catching the exception. If the only thing you could reasonably do is abort the program, you may as well just let the exception bubble up to the top level, where it will terminate the program as well.

sth
A: 

Most realistically, new will throw because of a deliberate decision to limit a resource. Say a class (which may be memory-intensive) takes memory out of a physical pool; if too many objects take from it (we need memory for other things like sound, textures, etc.), it may throw instead of crashing later when something that should be able to allocate memory fails (which would look like a weird side effect).

Overloading new can be useful on devices with restricted memory, such as handhelds or consoles, where it is too easy to go overboard with cool effects.
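A hedged sketch of that kind of budgeted allocation; the class, the budget, and the numbers are invented for illustration only:

#include <cstddef>
#include <cstdlib>
#include <new>

// Arbitrary per-pool budget, purely for illustration.
static const std::size_t kParticleBudget = 1024 * 1024;  // 1 MB
static std::size_t particle_bytes_used = 0;

// A class that draws from a fixed byte budget: once the budget is spent,
// further allocations throw std::bad_alloc instead of eating memory that is
// needed elsewhere (sound, textures, ...).
class Particle
{
public:
    static void* operator new(std::size_t size)
    {
        if (particle_bytes_used + size > kParticleBudget)
            throw std::bad_alloc();
        void* p = std::malloc(size);
        if (p == 0)
            throw std::bad_alloc();
        particle_bytes_used += size;
        return p;
    }

    static void operator delete(void* p, std::size_t size)
    {
        std::free(p);
        particle_bytes_used -= size;
    }

private:
    double position[3];
    double velocity[3];
};

int main()
{
    Particle* p = new Particle;  // throws std::bad_alloc once the budget is spent
    delete p;
}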

acidzombie24
+2  A: 

osgx said:

Do any real-world, new-heavy applications check for failure and recover when there is no memory?

I have answered this previously in my answer to this question, which is quoted below:

It is very difficult to handle this sort of situation. You may want to return a meaningful error to the user of your application, but if it's a problem caused by lack of memory, you may not even be able to afford the memory to allocate the error message. It's a bit of a catch-22 situation really.

There is a defensive programming technique (sometimes called a memory parachute or rainy day fund) where you allocate a chunk of memory when your application starts. When you then handle the bad_alloc exception, you free this memory up, and use the available memory to close down the application gracefully, including displaying a meaningful error to the user. This is much better than crashing :)
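A minimal sketch of such a parachute follows; the sizes, names, and messages are my own, and real code would also try to save the user's work at this point:

#include <iostream>
#include <new>
#include <vector>

// The "rainy day fund": allocated at startup, released only once we are
// already out of memory, so the shutdown path has something to work with.
static std::vector<char>* g_parachute = 0;

// Stand-in for the real application; it may throw std::bad_alloc.
void do_lots_of_work()
{
    std::vector<char> huge(1024u * 1024 * 1024);
    (void)huge;
}

int main()
{
    g_parachute = new std::vector<char>(512 * 1024);  // 512 KB reserve

    try
    {
        do_lots_of_work();
    }
    catch (const std::bad_alloc&)
    {
        delete g_parachute;  // free the reserve so we can report and exit
        g_parachute = 0;
        std::cerr << "The application ran out of memory and must close.\n";
        return 1;
    }

    delete g_parachute;
    return 0;
}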

LeopardSkinPillBoxHat
+1 for catch 22 :)
osgx
+2  A: 

In many cases there's no reasonable recovery for an out-of-memory situation, in which case it's probably perfectly reasonable to let the application terminate. You might want to catch the exception at a high level to display a nicer error message than the runtime gives by default, but you might have to play some tricks to get even that to work (since the process is likely to be very low on resources at that point).

Unless you have a special situation that can be handled and recovered, there's probably no reason to spend a lot of effort trying to handle the exception.

Michael Burr
A: 

Yes, new can and will throw.

Since you are asking about 'real' programs: I've worked on various shrink-wrapped commercial software applications for over 20 years. 'Real' programs with millions of users. That you can go and buy off the shelf today. Yes, new can throw.

There are various ways to handle this.

First, write your own new_handler (this is called before new gives up and throws; see the set_new_handler() function). When your new_handler is called, see if you can free some things you don't really need. Also warn the user that they are running low on memory. (Yes, it can be hard to warn the user about anything when memory is really low.)

Another technique is to pre-allocate some 'extra' memory at the start of your program. When you run out of memory, use this extra memory to help save a copy of the user's document to disk. Then warn, and maybe exit gracefully.

Etc. This is just an overview; obviously there is more to it.

Handling low memory is not easy.
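For concreteness, a hedged sketch combining the two techniques above; the reserve size and messages are my own, and a real application would also save the user's document here:

#include <iostream>
#include <new>

// Pre-allocated reserve, released by the new-handler so that the failing
// 'new' can be retried and the warning/save path has memory to work with.
static char* g_reserve = 0;

void low_memory_handler()
{
    if (g_reserve != 0)
    {
        delete[] g_reserve;  // give the reserve back...
        g_reserve = 0;
        std::cerr << "Warning: memory is running low\n";
        return;              // ...and let 'new' retry the allocation
    }
    // The reserve is already gone: nothing left to free, so give up.
    throw std::bad_alloc();
}

int main()
{
    g_reserve = new char[1024 * 1024];  // 1 MB reserve, arbitrary
    std::set_new_handler(low_memory_handler);

    // ... run the application; a failing 'new' now calls low_memory_handler()
    // repeatedly until it either frees the reserve or throws std::bad_alloc.

    delete[] g_reserve;  // normal shutdown: the reserve was never needed
    return 0;
}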

tony
+1  A: 

If you are running on a typical embedded processor running Linux without virtual memory it is quite likely your process will be terminated by the operating system before new fails if you allocate too much memory.

If you are running your program on a machine with less physical memory than the maximum of virtual memory (2 GB on standard Windows) you will find that once you have allocated an amount of memory approximately equal to the available physical memory, further allocations will succeed but will cause paging to disk. This will bog your program down and you might not actually be able to get to the point of exhausting virtual memory. So you might not get an exception thrown.

If you have more physical memory than the virtual memory, and you simply keep allocating memory, you will get an exception when you have exhausted virtual memory to the point where you can not allocate the block size you are requesting.

If you have a long-running program that allocates and frees in many different block sizes, including small blocks, with a wide variety of lifetimes, the virtual memory may become fragmented to the point where new will be unable to find a large enough block to satisfy a request. Then new will throw an exception. If you happen to have a memory leak that leaks the occasional small block in a random location that will eventually fragment memory to the point where an arbitrarily small block allocation will fail, and an exception will be thrown.

If you have a program error that accidentally passes a huge array size to new[], new will fail and throw an exception. This can happen, for example, if the array size is actually some sort of random byte pattern, perhaps derived from uninitialized memory or a corrupted communication stream.

All the above is for the default global new. However, you can replace global new, and you can provide class-specific new. These too can throw, and the meaning of that situation depends on how you programmed it. It is usual for new to include a loop that attempts all possible avenues for getting the requested memory. It throws when all those are exhausted. What you do then is up to you.

You can catch an exception from new and use the opportunity it provides to document the program state around the time of the exception. You can "dump core". If you have a circular instrumentation buffer allocated at program startup, you can dump it to disk before you terminate the program. The program termination can be graceful, which is an advantage over simply not handling the exception.

I have not personally seen an example where additional memory could be obtained after the exception. One possibility however is the following. Suppose you have a memory allocator that is highly efficient but not good at reclaiming free space. For example, it might be prone to free space fragmentation, in which free blocks are adjacent but not coalesced. You could use an exception from new, caught in a new_handler, to run a compaction procedure for free space before retrying.

Serious programs should treat memory as a potentially scarce resource, control its allocation as much as possible, monitor its availability, and react appropriately if something seems to have gone dramatically wrong. For example, you could make a case that in any real program there is quite a small upper bound on the size parameter passed to the memory allocator, and anything larger than this should cause some kind of error handling, whether or not the request can be satisfied. You could argue that the rate of memory growth of a long-running program should be monitored, and if it can be reasonably predicted that the program will exhaust available memory in the near future, an orderly restart of the process should begin.
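One way to put such an upper bound in place is a replacement global operator new. This is only a sketch with an arbitrary limit of my choosing, and a fully conforming replacement would also loop over any installed new-handler before giving up:

#include <cstdlib>
#include <new>

// Arbitrary sanity limit: single requests above this are treated as likely
// bugs (e.g. a corrupted array length) rather than genuine allocations.
static const std::size_t kSanityLimit = 1024u * 1024 * 1024;  // 1 GiB

void* operator new(std::size_t size)
{
    if (size > kSanityLimit)
        throw std::bad_alloc();  // refuse absurdly large requests outright
    void* p = std::malloc(size);
    if (p == 0)
        throw std::bad_alloc();  // genuinely out of memory
    return p;
}

void operator delete(void* p) noexcept
{
    std::free(p);
}

int main()
{
    int* p = new int(42);  // routed through the replacement above
    delete p;
}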

Permaquid
tl;dr (Sorry, had to say it. :-P)
Chris Jester-Young
+1  A: 

Note that in Windows, very large new/mallocs will just allocate from virtual memory. In practice, your machine will crash before you see that exception.

const std::size_t TWO_GIGABYTES = 2048ULL * 1024 * 1024;
char *pCrashMyMachine = new char[TWO_GIGABYTES];

Try it if you dare!

Erik Hermansen
How can I specify 50 terabytes? Can the application handle this situation? What versions of Windows will crash?
osgx
You made me fire up the compiler! My mistake--50 terabytes wouldn't work above. The value is limited to 2^31, about 2 gigs. So try the experiment on a machine with less than 2 gigs of disk space left. I originally ran this on Windows XP. Don't know about other versions of O/S and MSVC runtimes, and it is a really annoying experiment to run.
Erik Hermansen