views: 832
answers: 4

So I use Qt a lot in my development and love it. The usual design pattern with Qt objects is to allocate them using new.

Pretty much all of the examples (especially code generated by the Qt designer) do absolutely no checking for the std::bad_alloc exception. Since the objects allocated (usually widgets and such) are small this is hardly ever a problem. After all, if you fail to allocate something like 20 bytes, odds are there's not much you can do to remedy the problem.

Currently, I've adopted a policy of wrapping "large" (anything above a page or two in size) allocations in a try/catch. If that fails, I display a message to the user; for pretty much anything smaller, I'll just let the app crash with a std::bad_alloc exception.
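For concreteness, a minimal sketch of that policy, assuming a Qt widget is available as a parent; the function name, sizes, and message text are made up for illustration:

    #include <cstddef>      // std::size_t
    #include <new>          // std::bad_alloc
    #include <QMessageBox>
    #include <QWidget>

    // Hypothetical example: guard only the "large" allocation.
    void loadImageBuffer(QWidget *parent, std::size_t pixelCount)
    {
        unsigned char *buffer = nullptr;
        try {
            buffer = new unsigned char[pixelCount * 4]; // the big one, worth guarding
        } catch (const std::bad_alloc &) {
            QMessageBox::warning(parent, "Out of memory",
                                 "Not enough memory to load the image.");
            return;
        }

        // ... use buffer ...
        delete[] buffer;
    }

Small allocations (individual widgets and so on) stay unguarded, as described above.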

So, I wonder: what are the schools of thought on this?

Is it good policy to check each and every new operation? Or only ones I expect to have the potential to fail?

Also, it is clearly a whole different story when dealing with an embedded environment where resources can be much more constrained. I am asking in the context of a desktop application, but would be interested in answers involving other scenarios as well.

+10  A: 

The problem is not "where to catch" but "what to do when an exception is caught".

If you want to check, instead of wrapping with try catch you'd better use

    #include <new>

    X *x = new (std::nothrow) X();
    if (x == NULL) {
        // allocation failed; no exception was thrown
    }

My usual practice is:

  • in a non-interactive program, catch at the main level and display an adequate error message there (a sketch follows below).

  • in a program with a user-interaction loop, I also catch at the loop, so that the user can close some things and try to continue.

Exceptionally, there are other places where a catch is meaningful, but that's rare.
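For illustration, a minimal sketch of the first point; run_program() is a hypothetical placeholder for the real work:

    #include <cstdlib>
    #include <exception>
    #include <iostream>
    #include <new>

    int run_program()
    {
        // ... the actual application logic goes here ...
        return EXIT_SUCCESS;
    }

    int main()
    {
        try {
            return run_program();
        } catch (const std::bad_alloc &) {
            std::cerr << "Fatal error: out of memory.\n";
        } catch (const std::exception &e) {
            std::cerr << "Fatal error: " << e.what() << '\n';
        }
        return EXIT_FAILURE;
    }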

AProgrammer
Indeed, nothrow new is a viable option, but if there are a few allocations in a given block of code, it'll be much more verbose than just using a try/catch block. I agree 100% that "what to do" is the real question, since out-of-memory scenarios really limit how much you can do to fix things.
Evan Teran
Not so much "what to do", but "can you do anything at all"
jalf
If you catch it high enough, some memory may have been freed during stack unwinding. You can make sure something is freed, using a new_handler or tricks like `int flag = false; try { vector<int> v(1000); flag = true; doEventLoop(); } catch(bad_alloc) { if (flag) cout << "I have 4k to use to prompt the user\n"; }`
Steve Jessop
Where "prompt the user" might mean sticking up an out of memory dialog and/or a save prompt. Of course on linux you won't see bad_alloc anyway, you'll just get a page fault. On any OS you might find it grinds to a halt before an app sees out-of-memory, and/or that having memory in your process doesn't help you display dialogs, save files, etc, because the UI and kernel can't allocate memory. But it's probably worth a try just in case it works (perhaps because your failed allocation was of a large object, so there is still free memory).
Steve Jessop
@onebyone, about Linux, are you alluding to the behavior modifiable by adjusting /proc/sys/vm/overcommit*, or something else? (That one is a stupid default, and it should be a per-process configuration, not a system-wide one.)
AProgrammer
Yes, I'm referring to overcommitting. I don't know of any other specific reason why your linux process would crash just because you run out of memory. As I say, though, on any OS there's a risk that running out of memory will cause *something* to go wrong, even if just other apps. So you can do your best, but you can't count on that actually making the situation recoverable for the user.
Steve Jessop
+4  A: 

I usually catch exceptions at the point where the user has initiated an action. For a console application this means in main; for GUI applications I put handlers in places like button on-click handlers and such.

I believe it makes little sense to catch exceptions in the middle of an action; the user usually expects the operation to either completely succeed or completely fail.
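As a sketch of that approach in Qt, where the class and the exportDocument() operation are made up for illustration:

    #include <new>
    #include <QApplication>
    #include <QMessageBox>
    #include <QPushButton>
    #include <QWidget>

    // Hypothetical window: the whole user-initiated action is wrapped in the slot.
    class ExportWindow : public QWidget
    {
    public:
        ExportWindow()
        {
            auto *button = new QPushButton("Export", this);
            connect(button, &QPushButton::clicked, this, &ExportWindow::onExportClicked);
        }

    private:
        void onExportClicked()
        {
            try {
                exportDocument();  // made-up operation that may allocate a lot of memory
            } catch (const std::bad_alloc &) {
                QMessageBox::warning(this, "Export failed",
                                     "Not enough memory to export the document.");
            }
        }

        void exportDocument() { /* the real, allocation-heavy work would go here */ }
    };

    int main(int argc, char **argv)
    {
        QApplication app(argc, argv);
        ExportWindow window;
        window.show();
        return app.exec();
    }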

avakar
+8  A: 

Handle the exception when you can. If an allocation fails, and your application can't continue without that bit of memory, why bother checking for the error?

Handle the error when it can be handled, when there is a meaningful way to recover. If there's nothing you can do about the error, just let it propagate.

jalf
I would like to accept both yours and AProgrammer's; they both seem to be good answers. Since AProgrammer was first, I'm accepting his.
Evan Teran
yup yup. His is more detailed too
jalf
I mean, "Arrrr, curses! You win THIS time, AProgrammer!"
jalf
A: 

Handle it in main() (or the equivalent top level exception handler in Qt)

The reason is that std::bad_alloc happens either when you exhaust the address space (2 or 3 GB on 32-bit systems; it doesn't happen on 64-bit systems) or when you exhaust swap space. Modern heap allocators aren't tuned to run from swap space, so that will be a slow, noisy death; chances are your users will kill your app well beforehand because its UI is no longer responding. And on Linux, the OS memory handling is so poor by default that your app is likely to be killed automatically.

So, there is little you can do. Confess you have a bug, and see if you can save any work the user may have done. To be able to do so, it's best to abort as much as possible. Yes, this may in fact lose some of the last user input, but it's that very action that likely triggered the OOM situation. The goal is to save whatever data you can trust.
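One common way to get such a top-level handler in Qt (a general Qt technique, not something specific to this answer) is to override QApplication::notify, roughly like this:

    #include <exception>
    #include <new>
    #include <QApplication>
    #include <QMessageBox>

    // Sketch: every event is delivered through notify(), so catching here covers the event loop.
    class SafeApplication : public QApplication
    {
    public:
        SafeApplication(int &argc, char **argv) : QApplication(argc, argv) {}

        bool notify(QObject *receiver, QEvent *event) override
        {
            try {
                return QApplication::notify(receiver, event);
            } catch (const std::bad_alloc &) {
                // Showing a dialog may itself fail if memory is truly exhausted;
                // an emergency save of whatever data can be trusted would go here.
                QMessageBox::critical(nullptr, "Out of memory",
                                      "The application ran out of memory and must close.");
                std::terminate();
            }
        }
    };

As noted above, the dialog and the save may not succeed under real memory pressure, but this is the natural place to try.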

MSalters