I know that according to the C++ standard, when new fails to allocate memory it is supposed to throw a std::bad_alloc exception. But I have heard that some compilers, such as VC6 (or is it the CRT implementation?), do not adhere to this. Is this true? I am asking because checking for NULL after each and every new statement makes the code look very ugly.

+4  A: 

Based on the C++ spec, plain new with no extra parameters will always throw std::bad_alloc on failure, but of course there can be non-compliant compilers.

I would not write code to accommodate non-compliant compilers, though; VC6 is one of them in this respect.

It is good practice, though, to always set your pointers to NULL after you delete them. Because of that, checking for NULL is still needed.
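For example (deleting a null pointer is a no-op, so this also makes an accidental double delete harmless):

    delete p;
    p = NULL;  // later 'if (p != NULL)' checks and repeat deletes are now safe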

That being said, here are a couple of options for cleaning up your code:

Option 1: Setting your own new handler

A safe way to clean up your code would be to call set_new_handler first.

Then you could check for NULL in your handler and throw std::bad_alloc there if NULL is returned.

If you like exceptions better, then this is your best bet. If you prefer to get NULL back, you can do that as well by putting a catch inside your new handler.
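A minimal sketch of installing such a handler (the name out_of_memory is hypothetical; on VC6 itself the Microsoft-specific _set_new_handler hook served this purpose):

    #include <new>  // std::set_new_handler, std::bad_alloc

    // Called by the runtime when an allocation fails.
    void out_of_memory()
    {
        // Log, or release a memory reserve here, then:
        throw std::bad_alloc();
    }

    int main()
    {
        std::set_new_handler(out_of_memory);
        char *p = new char[1024];  // a failure now reaches out_of_memory()
        delete[] p;
        return 0;
    }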

Option 2: Using overloaded new

The standard header <new> defines an empty struct, std::nothrow_t, and a constant std::nothrow of that type. Passing it to new selects the overloaded version that returns NULL on failure instead of throwing.

void* operator new (std::size_t size, const std::nothrow_t&) throw();
void* operator new[] (std::size_t size, const std::nothrow_t&) throw();

So in your code:

 char *p = new(std::nothrow) char[1024];
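A small sketch of the nothrow form together with the NULL check it requires:

    #include <new>     // std::nothrow
    #include <cstdio>

    int main()
    {
        char *p = new (std::nothrow) char[1024];
        if (p == NULL) {
            // nothrow new reports failure by returning NULL, not by throwing
            std::fprintf(stderr, "allocation failed\n");
            return 1;
        }
        delete[] p;
        return 0;
    }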

Here is a good reference for further reading.

Brian R. Bondy
I understand setting to NULL after delete. But my problem is code like this: int *p = new int; if (p == NULL) { /* log memory allocation failure */ return; }
Naveen
gotcha, updated.
Brian R. Bondy
You can throw bad_alloc in your new handler, but inside the handler there is nothing to check against NULL. You also cannot modify the return value of new through the handler.
Roger Pate
Setting pointers to NULL after delete may be a good idea (for C). BUT in C++ it is a code smell that indicates that RAII has not been used correctly. I would consider that advice outdated.
Martin York
@Martin: No. Just... no. Try to find out the state of your program in a debugger, and NULLed pointers are your friend.
I'm not saying it is a bad thing, just that it is a code smell. If you have a pointer that could potentially be used after deletion, there are bigger design issues to worry about. Setting raw pointers to NULL is a warning sign; ask why the pointer is still available for abuse!
Martin York
Not everything fits into RAII. I agree that in most cases you can restructure, but I think it's a little wrong to say you will never need to manually allocate/delete a pointer (and set it to NULL).
Brian R. Bondy
+20  A: 

VC6 was non-compliant by default in this regard: its new returned 0 (or NULL) on failure.

Here's Microsoft's KB article on this issue, along with their suggested workaround using a custom new handler.

If you have old code that was written for the VC6 behavior, you can get that same behavior with newer MSVC compilers (7.0 and later) by linking in an object file named nothrownew.obj. There's actually a fairly complicated set of rules in the 7.0 and 7.1 compilers (VS2002 and VS2003) that determines whether they default to non-throwing or throwing new: http://msdn.microsoft.com/en-us/library/kftdy56f(VS.71).aspx

It seems that MS cleaned this up in 8.0 (VS2005): http://msdn.microsoft.com/en-us/library/kftdy56f.aspx - now it always defaults to a throwing new unless you specifically link to nothrownew.obj.
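For example, a hypothetical build line for legacy code (nothrownew.obj ships with the CRT libraries):

    cl legacy_app.cpp nothrownew.obj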

Note that you can specify that you want new to return 0 instead of throwing std::bad_alloc using the std::nothrow parameter:

SomeType *p = new(std::nothrow) SomeType;

This appears to work in VC6 as well, so it could be a way to more or less mechanically fix the code so it works the same with all compilers, without reworking the existing error handling.

Michael Burr
Wrong version numbers. It was broken in 5.0 (as the article you link to says). It was fixed in 6.0.
Daniel Earwicker
VC6 returns NULL by default as well - I just tested it. According to the "kftdy56f" links, the behavior in VC7 and VC7.1 (VS2002 and VS2003) could return NULL as well depending on whether libc*.lib or libcp*.lib (the CRT or the C++ standard library) was linked in. I have no interest in testing that.
Michael Burr
To be fair, VC6 was released before the C++ standard was ratified, which is one reason why it was so non-conforming. It's true that the standard was nearly finished at the time, but one has to remember that there are development cycles and VC6 was probably started at least a year earlier.
Mystere Man
+7  A: 

I'd like to add the (somewhat controversial) opinion that checking for NULL after an allocation attempt is pretty much an exercise in futility. If your program ever runs into that situation, chances are you can't do much more than exit fast. It's very likely that any subsequent allocation attempt will also fail.

Without checking for NULL, your subsequent code would attempt to dereference a NULL pointer, which tends to exit the program fast, with a relatively unique (and easily debuggable) exit condition.

I'm not trying to talk you out of checking for NULL; it's certainly conscientious programming. But you don't gain much from it, except in very specific cases where you can perhaps store some recovery information (without allocating more memory), or free less important memory, etc. Those cases will be relatively rare for most people.

Given this, I'd just trust the compiler to throw bad_alloc, personally - at least in most cases.
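A minimal sketch of that approach, catching bad_alloc at a single top-level boundary instead of checking each allocation (the oversized request is just a hypothetical way to force a failure):

    #include <new>      // std::bad_alloc
    #include <cstdio>
    #include <cstddef>

    int main()
    {
        try {
            // Deliberately huge request; likely to fail on most systems.
            char *p = new char[static_cast<std::size_t>(-1) / 2];
            delete[] p;
        } catch (const std::bad_alloc &) {
            std::fprintf(stderr, "out of memory\n");
            return 1;
        }
        return 0;
    }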

"Code Complete" suggests to pre-allocate a "safety net" of memory that can be used when running into out-of-memory situations, to make it possible to save debug information before exiting, for example.
Stefan Rådström
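(A minimal sketch of that safety-net idea, assuming a reserve block released from inside the new-handler so the error path can still allocate:)

    #include <new>  // std::set_new_handler, std::bad_alloc

    // Hypothetical safety net: memory reserved up front and released when
    // an allocation fails, leaving room to save debug info before exiting.
    static char *g_reserve = new char[64 * 1024];

    void emergency_handler()
    {
        delete[] g_reserve;       // free the reserve so logging can allocate
        g_reserve = 0;
        std::set_new_handler(0);  // a further failure now throws bad_alloc
        // ...write out debug information here, then exit or let new retry...
    }

    int main()
    {
        std::set_new_handler(emergency_handler);
        // ... normal program ...
        return 0;
    }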
The problem is that on a modern VM system if you come anywhere _near_ running out of (virtual) memory the thing will be paging so much it will be totally unusable.
anon
Stefan, zabzonk: couldn't agree more.
zabzonk, not true. Many 32-bit apps exhaust their address space, and memory generally need not be committed to be exhausted. Also, you need *at least* diagnostics in an out-of-memory situation.
peterchen
There are also situations where your OS will let you allocate memory without really mapping new pages in (lazy evaluation). But when you go to try and use that memory, there's nothing available and the process gets killed. Less of a problem with cheap hard drives and large swap files...
Mr.Ree
The code could also be buggy enough to try to allocate an unreasonably big chunk of memory (a gig or so), and new is likely to fail in such a case; but there's no memory depletion then, and it's safe to handle the error and continue.
sharptooth
I beg to differ; sometimes not being able to allocate memory is NOT terminal, and crashing is not desirable. Processing every piece of data may not be required, but alerting the operator is important if some is skipped. Not everyone has a memory-managed environment with disk backing, either.
Adam Hawes
@sharptooth, @Adam Hawes: You're discussing situations in which allocating memory is optional - if you can, you'll do something with it. Of course you need to check for NULL then. In most cases, though, memory is essential, so a failing allocation means failure overall.
I think the point is that if you run out of memory, you probably don't have enough memory to do anything about it either.
Mystere Man