I often catch myself doing the following (in non-critical components):

some_small_struct *ptr = (some_small_struct *)malloc(sizeof(some_small_struct));
ptr->some_member = ...;

In words, I dynamically allocate memory for a small structure and use it directly, without checking the malloc'ed pointer. I understand there is always a chance that the program won't get the memory it asks for (duh!), but consider the following:

If the program can't even get some memory for a small structure off the heap, maybe there are much bigger problems looming and it doesn't matter after all.

Furthermore, what if handling the null pointer exacerbates the precarious situation even more? (e.g. trying to log the condition consumes even more of the resources that are already exhausted)

Is my reasoning sane (enough)?

Updated:

  1. A "safe_malloc" function can be useful when debugging and might be useful otherwise
  2. Accessing memory at an offset from NULL (e.g. a member at offset +X) can hide the root cause of a NULL pointer
  3. On Linux, "optimistic memory allocation" can shadow looming OOM (Out-Of-Memory) conditions
+1  A: 

At the very least I would put an `assert(ptr != NULL)` in there so you get a meaningful error.

Evan Teran
@Evan: but, to my point, wouldn't this trigger a chain of events that couldn't be supported anyways?
jldupont
if the assert fails, the program terminates immediately. what chain of events are you referring to?
Evan Teran
@jldupont: Yes, but think about this: imagine that the `ptr->some_member = ...` doesn't cause a segfault, but instead overwrites something that causes a segfault much later. A very nasty bug is well hidden. The assert will catch it right away.
Richard Pennington
@Evan: the program would end anyways. The point is: if the memory situation of the host is in a dire condition, then using things like `assert` might not even matter.
jldupont
@Richard: but if malloc can't allocate, it returns a NULL pointer, doesn't it?
jldupont
@jldupont: Yes, but even if accessing *NULL faults, which it doesn't always, accessing *(NULL + 4) may not.
Richard Pennington
The assert is meaningless, as it will only abort in test environments, while out-of-memory situations are much more common in production. The assert will only tell you during testing that an out-of-memory situation is likely. Or will you ship your product in debug mode?
stefaanv
In his book "Object-Oriented Software Construction", Bertrand Meyer recommends leaving asserts in shipped products (if there is no performance issue), arguing that they are even more useful there, when the program crashes. It might not be such a bad idea.
kriss
Still, in that case, I would go for a SafeMalloc() function that does the (assert) check instead of putting asserts all over the place.
stefaanv
@stefaanv: a sensible mitigation plan (re: safemalloc). thanks.
jldupont
A: 

It is possible to allocate a largish chunk of memory at startup that you can free when you hit an out-of-memory condition, and use that headroom to shut down gracefully.
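A minimal sketch of that pattern (the emergency_reserve name and the 1 MB cushion size are illustrative, not prescribed by this answer; see the comments below for Linux overcommit caveats):

#include <stdio.h>
#include <stdlib.h>

static void *emergency_reserve;

static void out_of_memory(void)
{
    free(emergency_reserve);          /* give the allocator some room back */
    emergency_reserve = NULL;
    fprintf(stderr, "out of memory: shutting down\n");
    /* save state, flush logs, ... */
    exit(EXIT_FAILURE);
}

int main(void)
{
    emergency_reserve = malloc(1024 * 1024);   /* grab the cushion at startup */

    void *p = malloc(64);
    if (p == NULL)
        out_of_memory();
    /* ... normal work ... */
    free(p);
    free(emergency_reserve);
    return 0;
}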

Eclipse
@Eclipse: yes of course (the "constant memory" pattern) but this isn't what the question is about.
jldupont
This won't actually work on Linux, which will happily overcommit memory (see also OOM killer). Your "freeable" block will be a reserved section of address space with no actual memory backing it, so when you free it, things don't get any better. ::sigh::
dmckee
You will have some level of stack memory left (otherwise you would have hit a stack overflow, not a NULL return from malloc). Unless you hit both conditions at the same time, you shouldn't have a problem handling the situation as long as you don't need to allocate more heap space until you can free the reserve memory.
Eclipse
You *could* reserve your safety block and then touch each memory page of it to force the memory manager to attach some backing memory. Then you get this trick back again.
dmckee
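Spelled out, that page-touching trick might look like the sketch below (the 4096-byte page size is an assumption; portable code would query it, e.g. via sysconf(_SC_PAGESIZE) on POSIX):

#include <stddef.h>

/* Touch one byte in every page of the reserved block so the OS
   attaches real backing memory instead of overcommitted pages. */
static void prefault(char *block, size_t size)
{
    const size_t page_size = 4096;   /* assumed; query at runtime in real code */
    for (size_t i = 0; i < size; i += page_size)
        block[i] = 0;
}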
+2  A: 

Furthermore, what if handling the null pointer exacerbates the precarious situation even more?

I do not see why it would exacerbate the situation.
Anyway, when writing code for Windows, ptr->some_member will raise an access violation, so you will immediately see the problem; therefore I see no reason to check the return value, unless your program has some opportunity to free memory. For platforms that do not handle null pointers in a good way (by raising an exception), it is dangerous to ignore such pointers.

ironic
+1: exactly. Which platforms would not handle null-pointers? (aside from embedded systems probably)
jldupont
even if `ptr` is a null pointer (i.e. its value lies in a reserved region of the address space), that's not necessarily true for `ptr->foo` or `ptr[42]`
Christoph
@Christoph: excellent point. Maybe the added tweak would be in accessing `+0` first then.
jldupont
I do not think some_small_struct will have a size of 1 MB (or what is the exact size of the memory region reserved around NULL?)
ironic
@ironic: that's platform specific; if you're confident the code will never exhaust the reserved space on all platforms it's going to be ported to, feel free to drop checks for `NULL`; meanwhile, I'll continue to accompany each `foo = malloc(sizeof *foo)` with an `if(!foo) [...]` where `[...]` is something like `return NULL`, `abort()`, `log("out of memory")`, ...
Christoph
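Spelled out, the idiom Christoph describes might look like this (struct foo and make_foo are illustrative names, not from the comment):

#include <stdio.h>
#include <stdlib.h>

struct foo { int x; };

struct foo *make_foo(void)
{
    struct foo *foo = malloc(sizeof *foo);   /* sizeof *foo survives type changes */
    if (!foo) {
        fputs("out of memory\n", stderr);
        return NULL;                         /* or abort(), or log and retry */
    }
    foo->x = 0;
    return foo;
}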
+6  A: 

I would say no. Using a NULL pointer is (probably) going to crash the program.
But detecting the failure and doing something intelligent is OK, and you may be able to recover from the low-memory situation.

If you are doing a big operation, set some global error flag and start unwinding the stack, releasing resources as you go. Hopefully one or more of these resources will be your memory hog, and your application will get back to normal.
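As a rough sketch of that strategy (the g_out_of_memory and do_step names are hypothetical):

#include <stdlib.h>

static int g_out_of_memory = 0;

static int do_step(void)
{
    char *buf = malloc(1 << 16);
    if (buf == NULL) {
        g_out_of_memory = 1;   /* record the condition for the callers */
        return -1;             /* start unwinding */
    }
    /* ... work with buf ... */
    free(buf);                 /* release resources on the way out */
    return 0;
}

Each caller checks the return value (or the flag), releases its own resources, and returns, until control is back at a level that can report the error or shut down.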

This of course is a C problem; it is handled automatically in C++ with the help of exceptions and RAII.
As new will not return NULL (it throws std::bad_alloc on failure), there is no point in checking.

Martin York
@Martin: the system (as Neil pointed out also) will probably become unresponsive to the point that any recovery might be a pointless operation. If something is eating memory faster than can be meaningfully mitigated, I see no point.
jldupont
@Martin: hmmm... I am still not convinced. I see the spectre of Heisenberg here...
jldupont
C++ manages to do it. If something is eating memory fast, then stop doing that something: release all resources to do with it, get back to a normal state, then report the error. With work, C can be made to handle errors just like C++. At worst, call exit() and shut down cleanly. Generating segfaults because you can't be bothered to code correctly is not going to make your customers confident in your ability.
Martin York
@Martin: (1) program X might not be the one eating memory, it could be program Y. (2) your suggestion to call exit() is acknowledged. (3) back to my initial point: if my program (say X) can't even grab some 16 bytes to manipulate a structure, I am pretty sure the error log of the host will get filled with other nasty things anyhow. The customer in question will have his hands full with other concerns, and the `segfault` you are referring to might not be his biggest concern.
jldupont
@jldupont: 1) If we are on an embedded platform then X and Y may interact, and you definitely need to check there. If we are on Windows, the usage of memory by X will not affect the memory available to Y.
Martin York
+9  A: 

In the case of C, it depends on the platform. If you are on an embedded platform with very little memory, you should always check, though what you do if the allocation fails is more difficult to say. On a modern 32-bit OS with virtual memory, the system will probably become unresponsive and crash before it admits to running out of memory. In that case, the call to malloc never returns, so the utility of checking its value becomes moot.

In the case of C++, you should be using new instead of malloc, in which case an exception will be raised on exhaustion, so there is no point in checking the return value.

anon
But in the C case you should always check. Unresponsive and recoverable is better than a crash.
Martin York
+1: my thoughts exactly. thanks.
jldupont
@Martin: I am a disciple of the "fail fast" doctrine.
jldupont
I disagree. Handling a malloc failure can be close to impossible (aside from calling exit()). But then I don't do any C programming any more.
anon
@jldupont: check for NULL and call exit() then. Your onexit() handlers may be able to do something.
Martin York
@Martin: like what? I am truly curious as I have no idea what exit() might provide as value to me. Please explain.
jldupont
@Neil: "Close to impossible" is just an excuse to be lazy and not look at the problem. If there is nothing to be done, call exit() and at least try for a clean(ish) shutdown. In a Windows context, cancelling the current operation and releasing the resources should put the application back into a standard state in the main event loop.
Martin York
@jldupont: That is impossible to say in the general case. But a cleanly shut-down program is always better than one that dies from a segfault. Trying to sell a program that generates segfaults into the system log files is not going to make customers confident in your ability!
Martin York
@Martin At least on UNIX like operating systems, a memory access violation may well be better than exiting cleanly, because then you will get a core dump.
anon
@Martin: how about trapping the segfault event and exiting in silence?
jldupont
@Martin: ... what if the memory condition is to the point where a small structure can't even be allocated? What does that say about trying to log an event?
jldupont
@Neil: A core is only going to tell you that you forgot to check for a NULL pointer :-) We already know that it ran out of memory, because the hard disk just spent the last 10 minutes grinding through all the virtual pages that got paged out. In a Windows context, recovering from low memory may not be perfect, but it can be done with only a little work (assuming good programming techniques that allow you to unwind the stack without further processing, all the way back to the event loop, or to wherever this thread took up the task).
Martin York
@jldupont: If you manage to unwind the stack and release resources, there should be no problem reporting the error. One would assume that the event loggers have been written to cope with this (there are techniques to handle it).
Martin York
@Martin The core will also tell you WHERE you ran out of memory, assuming you use the pointer immediately after malloc'ing it, as the OP asked about.
anon
"On a modern 32-bit OS with virtual memory, the system will probably become unresponsive and crash before it admits to running out of memory. In this case, the call to malloc never returns, so the utility of checking its value becomes moot."This is not true at all, the malloc can fail just because there's not enough (continuous) memory in the address space, and this does not imply that the system ran out of memory: actually the virtual address space on a 32 bit OS is up to 4 GB wide, while the physical memory can be much more, not counting the swap space.So, check that return value.
Matteo Italia
+1  A: 

Yes, having insufficient memory will almost certainly presage other failures coming soon. But how sure are you that no corrupt output will occur between the failure to allocate and the final crash?

How sure are you, for every program, every time you make an edit?

Catch your errors so you can know you crashed on time.

dmckee
+1: for addressing the incertitude. This strategy coupled with `safemalloc` might be a good combo. thanks.
jldupont
+10  A: 

Depends on the platform. For instance, on Linux (by default) it does not make much sense to check for NULL:

http://linux.die.net/man/3/malloc

By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer.
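A small demonstration of the consequence (the outcome depends on the overcommit setting in /proc/sys/vm/overcommit_memory and on the machine's RAM, so treat this as a sketch):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Ask for far more memory than the machine is likely to have.
       Under optimistic allocation this may still return non-NULL;
       pages are only committed when touched, and touching them all
       may summon the OOM killer rather than fail cleanly. */
    size_t huge = (size_t)16 * 1024 * 1024 * 1024;   /* 16 GB */
    char *p = malloc(huge);
    printf("malloc(16 GB) returned %p\n", (void *)p);
    free(p);
    return 0;
}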

Nemanja Trifunovic
At least .. +1 !!
Ben
Malloc can still return NULL when the address space is full.
R Samuel Klatchko
+1: comes back to my point.
jldupont
A: 

I always feel it is important, and best, to handle the return value of malloc, or of any other system call for that matter. Though on modern systems (apart from embedded ones) failure is a rare scenario unless your code uses too much memory, it's always safer to check.

Continuing on after a system-call failure can lead to corruption, crashes, and whatnot, apart from making your program look bad.

Also, on Linux, the memory allocated to a process is limited. Try creating 1000 threads in a process and allocating some memory in each one of them; then you can easily simulate the low-memory condition. : )

Always better to check for sys call return values!
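As an alternative to the thread trick above, one simpler way to force the low-memory condition for testing is to cap the process address space (Linux/POSIX-specific; the 16 MB cap is an arbitrary choice for illustration):

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    /* Cap the address space so malloc starts failing quickly. */
    struct rlimit lim = { 16 * 1024 * 1024, 16 * 1024 * 1024 };
    setrlimit(RLIMIT_AS, &lim);

    size_t mb = 0;
    while (malloc(1024 * 1024) != NULL)
        mb++;                  /* deliberately leaked: we want exhaustion */
    printf("malloc failed after about %zu MB\n", mb);
    return 0;
}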

Jay
+1  A: 

Assuming that you are running on Linux/MacOS/Windows or another virtual-memory system, then... the only reason to check the return value from malloc is if you have a strategy for freeing up enough memory to allow the program to continue running. An informative message will help in diagnosing the problem, but only if your program caused the out-of-memory situation. Usually it is not your program, and the only thing your program can do to help is to exit as quickly as possible.

assert(ptr != NULL);

will do all of these things. My usual strategy is to have a layer around malloc that has this in it.

#include <stdlib.h>
#include <assert.h>

void *my_malloc(size_t size)
{
    void *ptr = malloc(size);
    assert(ptr != NULL);   /* aborts with a message; compiled out if NDEBUG is defined */
    return ptr;
}

Then you call my_malloc instead of malloc. During development I use a memory-allocation library that is conducive to debugging. After that, if it runs out of memory, I get a message.

Richard Thurvald
+2  A: 

Allocations can fail for several reasons. What you do (and can do) about it depends in part on the allocation failure.

Being truly out of memory is catastrophic. Unless you've made a careful plan for this, there's probably nothing you can do. (For example, you could have pre-allocated all the resources you'd need for an emergency save and shutdown.)

But many allocation failures have nothing to do with being out of memory. Fragmentation can cause an allocation to fail because there's not enough contiguous space available, even though there's plenty of memory free. The question specifically said a "small structure", so a failure there is probably as bad as a true out-of-memory condition. (But code is ever-changing. What's a small structure today might be a monster tomorrow. And if it's so small, do you really need memory from the heap, or can you get it from the stack?)

In a multi-threaded world, allocation failures are often transient conditions. Your modest allocation might fail this microsecond, but perhaps a memory-hogging thread is about to release a big buffer. So a recovery strategy might involve a delay and retry.
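A minimal sketch of such a delay-and-retry wrapper (the malloc_retry name and the backoff constants are illustrative, and POSIX usleep is assumed):

#include <stdlib.h>
#include <unistd.h>

/* Retry a failed allocation a few times, sleeping between attempts
   in case another thread releases memory in the meantime. */
void *malloc_retry(size_t size, int attempts)
{
    for (int i = 0; i < attempts; i++) {
        void *p = malloc(size);
        if (p != NULL)
            return p;
        usleep(1000u << i);   /* back off: 1 ms, 2 ms, 4 ms, ... */
    }
    return NULL;
}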

How (and if) you handle allocation failure can also depend on the type of application. If you're writing a complex document editor, and a crash means losing the user's work, then it's worth expending more effort to handle these failures. If your application is transactional, and each change is incrementally applied to persistent storage, then a crash is only a minor inconvenience to the user. Even so, logging should be considered. If your application is routinely getting allocation failures, you probably have a bug, and you'll need the logs to know about it and track it down.

Lastly, you have to think about testing. Allocation failures are rare, so the chance that recovery code has been exercised in your testing is vanishingly small, unless you've taken steps to ensure test coverage by artificially forcing failures. If you aren't going to test your recovery code, then it's probably not worth writing it.
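One common way to force that coverage is a test-only allocator that injects failures (a sketch; the names and the every-Nth-call policy are illustrative):

#include <stdlib.h>

static unsigned alloc_count;
static unsigned fail_every;   /* 0 = never inject a failure */

void *test_malloc(size_t size)
{
    if (fail_every != 0 && ++alloc_count % fail_every == 0)
        return NULL;          /* injected failure exercises recovery paths */
    return malloc(size);
}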

Adrian McCarthy
+1: thanks for your well articulated thoughts.
jldupont