views:

1624

answers:

18

In my company there is a coding rule that says, after freeing any memory, reset the variable to NULL. For example ...

void some_func () 
{
    int *nPtr;

    nPtr = malloc (100);

    free (nPtr);
    nPtr = NULL;

    return;
}

I feel that, in cases like the code shown above, setting to NULL does not have any meaning. Or am I missing something?

If there is no meaning in such cases, I am going to take it up with the "quality team" to remove this coding rule. Please advise.

+2  A: 

The idea behind this is to stop accidental reuse of the freed pointer.

Mitch Wheat
+4  A: 

Setting a pointer to NULL after freeing the memory means that any attempt to access that memory through the pointer will crash immediately, instead of causing undefined behavior. It makes it much easier to determine where things went wrong.

I can see your argument: since nPtr is going out of scope right after nPtr = NULL, there doesn't seem to be a reason to set it to NULL. However, in the case of a struct member or somewhere else where the pointer is not immediately going out of scope, it makes more sense. It's not immediately apparent whether or not that pointer will be used again by code that shouldn't be using it.

It's likely the rule is stated without making a distinction between these two cases, because a more nuanced rule would be much harder to enforce automatically, let alone for developers to follow consistently. It doesn't hurt to set pointers to NULL after every free, and it has the potential to point out big problems.

Jared Oberhaus
+55  A: 

Setting unused pointers to NULL is a defensive style, protecting against dangling pointer bugs. If a dangling pointer is accessed after it is freed, you may read or overwrite random memory. If a null pointer is accessed, you get an immediate crash on most systems, telling you right away what the error is.

For local variables, it may be a little bit pointless if it is "obvious" that the pointer isn't accessed anymore after being freed, so this style is more appropriate for member data and global variables. Even for local variables, it may be a good approach if the function continues after the memory is released.

To complete the style, you should also initialize pointers to NULL before they get assigned a true pointer value.
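
A minimal sketch of this style applied to member data (the struct and field names here are invented purely for illustration):

#include <stdlib.h>

struct parser {
    char *buffer;              /* invariant: NULL whenever no buffer is owned */
};

void parser_init(struct parser *p)
{
    p->buffer = NULL;          /* initialize before any real assignment */
}

void parser_release(struct parser *p)
{
    free(p->buffer);           /* free(NULL) is a harmless no-op */
    p->buffer = NULL;          /* later accidental use crashes instead of corrupting memory */
}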

Martin v. Löwis
+1. nice answer
Mitch Wheat
I don't understand why you would "initialize pointers to NULL before they get assigned a true pointer value"?
Paul Biggar
@Paul: In the specific case, the declaration could read `int *nPtr=NULL;`. Now, I would agree that this would be redundant, with a malloc following right in the next line. However, if there is code between the declaration and the first initialization, somebody might start using the variable even though it has no value yet. If you null-initialize, you get the segfault; without, you might again read or write random memory. Likewise, if the variable later gets initialized only conditionally, later faulty accesses should give you instant crashes if you remembered to null-initialize.
Martin v. Löwis
@Martin v. Löwis: Ah, you're assuming old-school C where the declarations are at the top of the function? Fair enough then.
Paul Biggar
+1, I learned a new thing.
nthrgeek
+1, good answer!
Dacav
+3  A: 

This (can) actually be important. Although you free the memory, a later part of the program could allocate something new that happens to land in the space. Your old pointer would now point to a valid chunk of memory. It is then possible that someone would use the pointer, resulting in invalid program state.

If you NULL out the pointer, then any attempt to use it is going to dereference 0x0 and crash right there, which is easy to debug. Random pointers pointing to random memory are hard to debug. It's obviously not necessary, but then that's why it's in a best-practices document.
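
A small sketch of that failure mode (whether the second malloc actually reuses the freed block is up to the allocator, so treat this purely as an illustration):

#include <stdlib.h>
#include <string.h>

void demo(void)
{
    int *old = malloc(100);
    free(old);                   /* old is now dangling, but not NULL */

    char *name = malloc(100);    /* may be handed the recycled block */
    if (!name)
        return;
    strcpy(name, "important");

    if (old)                     /* still non-NULL, so the check passes */
        *old = 42;               /* undefined behavior: may silently corrupt name's data */

    free(name);
}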

Steven Canfield
On Windows, at least, debug builds will set the memory to 0xdddddddd so when you use a pointer to deleted memory you know right away. There should be similar mechanisms on all platforms.
jeffamaphone
jeffamaphone, the deleted memory block might have been reallocated and assigned to *another* object by the time you use the pointer again.
Constantin
+7  A: 

This is considered good practice to avoid overwriting freed memory. In the above function it is unnecessary, but oftentimes, when it is done, it can uncover application errors.

Try something like this instead:

#if DEBUG_VERSION
void myfree(void **ptr)
{
    free(*ptr);
    *ptr = NULL;
}
#else
#define myfree(p) { void ** __p = (p); free(*(__p)); *(__p) = NULL; }
#endif

The DEBUG_VERSION lets you profile frees in debugging code, but both are functionally the same.

razzed
Or use a macro - no need for pointer-to-pointer that way.
Tal Pressman
Or, use both (edited above)
razzed
The macro version has a subtle bug if you use it after an if statement without brackets.
Mark Ransom
Fixed with { ... } (void) 0 - old trick I remember now.
razzed
What's with the (void) 0? This code: `if (x) myfree(&p); else do_foo();` expands to `if (x) { ... } (void) 0; else do_foo();` — the `else` no longer attaches to the `if`, so it is a syntax error.
jmucchiello
That macro is a perfect place for the comma operator: free(*(p)), *(p) = NULL. Of course the next problem is it evaluates *(p) twice. It should be { void** _pp = (p); free(*_pp); *_pp = NULL; } Isn't the preprocessor fun.
jmucchiello
Thanks @jmucchiello!
razzed
The macro shouldn't be in bare brackets, it should be in a `do { } while(0)` block so that `if(x) myfree(x); else dostuff();` doesn't break.
Chris Lutz
+7  A: 

If you access a pointer that has been free()d, it might break or it might not. That memory might have been reallocated to another part of your program, and then you get memory corruption.

If you set the pointer to NULL, then if you access it, the program always crashes with a segfault. No more "sometimes it works", no more "crashes in some unpredictable way". It's way easier to debug.

Tadeusz A. Kadłubowski
The program does not always crash with a segfault. If the way you access the pointer means that a large enough offset is applied to it before dereferencing, then it may reach addressable memory: ((MyHugeStruct *)0)->fieldNearTheEnd. And that's even before you deal with hardware that doesn't segfault on 0 access at all. However, the program is more likely to crash with a segfault.
Steve Jessop
+2  A: 

This rule is useful when you're trying to avoid the following scenarios:

1) You have a really long function with complicated logic and memory management and you don't want to accidentally reuse the pointer to deleted memory later in the function.

2) The pointer is a member variable of a class that has fairly complex behavior and you don't want to accidentally reuse the pointer to deleted memory in other functions.

In your scenario, it doesn't make a whole lot of sense, but if the function were to get longer, it might matter.

You may argue that setting it to NULL may actually mask logic errors later on; or that, in the case where you assume it is still valid, you will crash on the NULL anyway, so it doesn't matter.

In general, I would advise you to set it to NULL when you think it is a good idea, and not bother when you think it isn't worth it. Focus instead on writing short functions and well designed classes.

jeffamaphone
+1  A: 

To add to what others have said, one good method of pointer usage is to always check whether it is a valid pointer or not. Something like:


if(ptr)
   ptr->CallSomeMethod();

Explicitly marking the pointer as NULL after freeing it allows for this kind of usage in C/C++.
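
In plain C the same guard might look like this (the conn struct and conn_send function are made up for the example; the check is only meaningful if freed pointers really are set to NULL):

#include <stdio.h>

struct conn { int fd; };

static void conn_send(struct conn *c, const char *msg)
{
    printf("sending on fd %d: %s\n", c->fd, msg);   /* stand-in for real work */
}

void maybe_send(struct conn *c, const char *msg)
{
    if (c)                    /* skip connections that have been freed and NULLed */
        conn_send(c, msg);
}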

Aamir
In many cases, where a NULL pointer makes no sense, it would be preferable to write an assertion instead.
ammoQ
+2  A: 

This might be more an argument to initialize all pointers to NULL, but something like this can be a very sneaky bug:

void other_func() {
  int *p; // forgot to initialize
  // some unrelated mallocs and stuff
  // ...
  if (p) {
    *p = 1; // hm...
  }
}

void caller() {
  some_func();
  other_func();
}

p ends up in the same place on the stack as the former nPtr, so it might still contain a seemingly valid pointer. Assigning to *p might overwrite all kinds of unrelated things and lead to ugly bugs, especially if the compiler initializes local variables to zero in debug mode but doesn't once optimizations are turned on. So the debug builds don't show any signs of the bug while release builds blow up randomly...

sth
A: 

The point is (I think) that if one always does this (even when it actually seems unneeded), all code will be safe(ish). But if one skips it in places where it is not needed, it's easier to forget it where it actually is needed.

Joakim Elofsson
+8  A: 

The most common bug in C is the double free. Basically you do something like this:

free(foobar);
/* lot of code */
free(foobar);

and it ends up pretty badly: the OS tries to free some already freed memory and generally it segfaults. So the good practice is to set the pointer to NULL, so you can test and check whether you really need to free this memory:

if(foobar != NULL){
  free(foobar);
}

Also note that free(NULL) won't do anything, so you don't have to write the if statement. I am not really an OS guru, but I am pretty sure even now most OSes would crash on a double free.

That's also one of the main reasons why languages with garbage collection (Java, .NET) are so proud of not having this problem and of not leaving memory management to the developers as a whole.

RageZ
You can actually just call free() without checking - free(NULL) is defined as doing nothing.
Amber
Doesn't that hide bugs? (Like freeing too much.)
Georg
If I pre-check whether the pointer has been freed already, like `if(p!=NULL){.....}`, is it unnecessary to set it to NULL?
Macroideal
@Macroideal: No, you set it to `NULL` so that you _can_ pre-check it. (or just call `free()` again.)
Georg
@Macroideal. Yes, it is. free() doesn't change the pointer value. So the check will evaluate to "true" and free() will be called again for the address of an already freed block.
sharptooth
@Macroideal: no, it doesn't change a thing; you had better set it to NULL anyway just to be safe.
RageZ
@gs: Yup, it's a horrid practice. Double free is a logical error; you don't want logical errors to be hidden, you want them to blow up noisily and immediately.
DrPizza
Thanks, I got it. I tried: `p = (char *)malloc(.....); free(p); if(p!=NULL) //p!=NULL is true, p is not NULL although freed { free(p); //Note: checking does not prevent the error here }`
Macroideal
Clarifying on @sharptooth: `free()` _can't_ change the pointer value. To do so it would have to be `free(void **ptr)` or be a macro `#define free(x) do { __real_free(x); x = NULL; } while(0)` to do that.
Chris Lutz
@Chris Lutz: so, what are the inner details of `free()`? What changes happen after we call `free()`? Kinda puzzled.
Macroideal
The C runtime has a memory management system in `malloc()` and `free()` (and friends). Basically, when you call `malloc()` the system marks off a chunk of memory as "used" and then gives it to you, and when you call `free()` it marks that chunk of memory as "unused" and does whatever it wants with it (like give it to another `malloc()` call, or (more modern) start zeroing it out for security and to speed up `calloc()`). It doesn't change the _pointer itself_ but it changes the _data that was pointed to_ (or it can, anyway, which is why the pointer shouldn't be used after being freed).
Chris Lutz
@ALL: And another question: why wasn't the system designed to set the pointer to NULL inside `free()`? I think that would be more convenient.
Macroideal
As I said, `free(void *ptr)` _cannot_ change the value of the pointer it is passed. It can change the _contents_ of the pointer, the _data stored at that address_, but not the _address itself_, or the _value of the pointer_. That would require `free(void **ptr)` (which is apparently not allowed by the standard) or a macro (which is allowed and perfectly portable but people don't like macros). Also, C isn't about convenience, it's about giving programmers as much control as they want. If they don't want the added overhead of setting pointers to `NULL`, it shouldn't be forced on them.
Chris Lutz
There are few things in the world that give away a lack of professionalism on the part of a C code author, but they include "checking the pointer for NULL before calling `free`" (along with such things as "casting the result of memory allocation functions" or "thoughtless use of type names with `sizeof`").
AndreyT
+1  A: 

Setting a pointer that has just been freed to NULL is not mandatory, but it is good practice. This way, you can avoid 1) using a freed pointer, and 2) freeing it twice.

pierr
+1  A: 

Setting a pointer to NULL protects against the so-called double-free - a situation where free() is called more than once for the same address without the block at that address being reallocated in between.

Double-free leads to undefined behaviour - usually heap corruption or immediately crashing the program. Calling free() for a NULL pointer does nothing and is therefore guaranteed to be safe.

So, unless you know for sure that the pointer leaves scope immediately or very soon after the free(), the best practice is to set that pointer to NULL, so that even if free() is called again it is now called for a NULL pointer and undefined behaviour is avoided.
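
A minimal sketch of that pattern (the later free on a cleanup path is contrived for illustration):

#include <stdlib.h>

void example(void)
{
    char *buf = malloc(64);
    if (buf == NULL)
        return;

    /* ... use buf ... */

    free(buf);
    buf = NULL;      /* any later free(buf) is free(NULL): a guaranteed no-op */

    /* ... much later, perhaps on a cleanup or error path ... */
    free(buf);       /* harmless instead of undefined behaviour */
}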

sharptooth
+1  A: 

From the ANSI C standard:

void free(void *ptr);

The free function causes the space pointed to by ptr to be deallocated, that is, made available for further allocation. If ptr is a null pointer, no action occurs. Otherwise, if the argument does not match a pointer earlier returned by the calloc, malloc, or realloc function, or if the space has been deallocated by a call to free or realloc, the behavior is undefined.

"the undefined behavior" is almost always a program crash. So as to avoid this it is safe to reset the pointer to NULL. free() itself cannot do this as it is passed only a pointer, not a pointer to a pointer. You can also write a safer version of free() that NULLs the pointer:

void safe_free(void** ptr)
{
  free(*ptr);
  *ptr = NULL;
}
Vijay Mathew
Except that this is not "safer", as it masks program errors.
DrPizza
@Vijay Mathew, is it `ptr=NULL`, or `*ptr=NULL`????
Macroideal
@DrPizza - An error (in my opinion) is something that causes your program not to work as it's supposed to. If a hidden double free breaks your program, it's an error. If it works exactly how it was intended to, it's not an error.
Chris Lutz
@DrPizza: I've just found an argument why one should set it to `NULL` to avoid masking errors. http://stackoverflow.com/questions/1025589/setting-variable-to-null-after-free It seems like in either case some errors get hidden.
Georg
Be aware that a void pointer-to-pointer has its problems: http://c-faq.com/ptrs/genericpp.html
Secure
@Secure - Interesting. It seems that the best solution (IMHO) is the macro approach based on this FAQ.
Chris Lutz
@Chris, no, the best approach is code structure. Don't throw random mallocs and frees all over your codebase, keep related things together. The "module" that allocates a resource (memory, file, ...) is responsible for freeing it and has to provide a function for doing so that keeps care for the pointers, too. For any specific resource, you then have exactly one place where it is allocated and one place where it is released, both close together.
Secure
@Secure - Well, yeah, the best solution is always writing your program correctly in the first place, but that can't always happen. That's why people practice defensive coding habits.
Chris Lutz
@Chris, actually I'm talking about DRY - Don't Repeat Yourself, one of the most defensive coding habits I've learned. It can always be done, maybe except for the quick one-time-run hack scripts that have to be finished yesterday, but there you won't care for the pointers, too... and except when you have to maintain a legacy codebase... However, DRY is strongly related with modularization and code reuse and will nearly always effectively save you time and nerves in the long run.
Secure
@Chris Lutz: Hogwash. If you write code that frees the same pointer twice, your program has a logical error in it. Masking that logical error by making it not crash doesn't mean that the program is correct: it's still doing something nonsensical. There is no scenario in which writing a double free is justified.
DrPizza
+6  A: 

Most of the responses have focused on preventing a double free, but setting the pointer to NULL has another benefit. Once you free a pointer, that memory is available to be reallocated by another call to malloc. If you still have the original pointer around you might end up with a bug where you attempt to use the pointer after free and corrupt some other variable, and then your program enters an unknown state and all kinds of bad things can happen (crash if you're lucky, data corruption if you're unlucky). If you had set the pointer to NULL after free, any attempt to read/write through that pointer later would result in a segfault, which is generally preferable to random memory corruption.

For both reasons, it can be a good idea to set the pointer to NULL after free(). It's not always necessary, though. For example, if the pointer variable goes out of scope immediately after free(), there's not much reason to set it to NULL.

Mike McNertney
+1 This is actually a very good point. Not the reasoning about "double free" (which is completely bogus), but *this*. I'm not the fan of mechanical NULL-ing of pointers after `free`, but this actually makes sense.
AndreyT
A: 

There are two reasons:

Avoid crashes when double-freeing

Written by RageZ in a duplicate question.

The most common bug in C is the double free. Basically you do something like this:

free(foobar);
/* lot of code */
free(foobar);

and it ends up pretty badly: the OS tries to free some already freed memory and generally it segfaults. So the good practice is to set the pointer to NULL, so you can test and check whether you really need to free this memory:

if(foobar != NULL){
  free(foobar);
}

Also note that free(NULL) won't do anything, so you don't have to write the if statement. I am not really an OS guru, but I am pretty sure even now most OSes would crash on a double free.

That's also one of the main reasons why languages with garbage collection (Java, .NET) are so proud of not having this problem and of not leaving memory management to the developers as a whole.

Avoid using already freed pointers

Written by Martin v. Löwis in another answer.

Setting unused pointers to NULL is a defensive style, protecting against dangling pointer bugs. If a dangling pointer is accessed after it is freed, you may read or overwrite random memory. If a null pointer is accessed, you get an immediate crash on most systems, telling you right away what the error is.

For local variables, it may be a little bit pointless if it is "obvious" that the pointer isn't accessed anymore after being freed, so this style is more appropriate for member data and global variables. Even for local variables, it may be a good approach if the function continues after the memory is released.

To complete the style, you should also initialize pointers to NULL before they get assigned a true pointer value.

Georg
The second point makes sense, the first one is bogus.
AndreyT
+1  A: 

I find this to be of little help because, in my experience, when people access a freed memory allocation it's almost always because they have another pointer to it somewhere. And then it conflicts with another personal coding standard, which is "avoid useless clutter", so I don't do it, as I think it rarely helps and makes the code slightly less readable.

However, I won't set the variable to NULL just because the pointer isn't supposed to be used again; often the higher-level design gives me a reason to set it to NULL anyway. For example, if the pointer is a member of a class and I've deleted what it points to, then the "contract", if you like, of the class is that that member points to something valid at all times, so it must be set to NULL for that reason. A small distinction, but I think an important one.

In C++ it's important to always be thinking about who owns the data when you allocate some memory (unless you are using smart pointers, but even then some thought is required). This process tends to lead to pointers generally being members of some class, and you generally want a class to be in a valid state at all times; the easiest way to do that is to set the member variable to NULL to indicate it points to nothing now.

A common pattern is to set all the member pointers to NULL in the constructor and have the destructor call delete on any pointers to data that your design says the class owns. Clearly, in this case you have to set the pointer to NULL when you delete something earlier, to indicate that the class no longer owns that data.

So to summarise: yes, I often set the pointer to NULL after deleting something, but it's part of a larger design and of thinking about who owns the data, rather than blindly following a coding standard rule. I wouldn't do so in your example, as I think there is no benefit to doing so, and it adds "clutter", which in my experience is just as responsible for bugs and bad code as this kind of thing is.
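
A rough C analogue of that ownership pattern (the image struct and function names are invented for the sketch):

#include <stdlib.h>

struct image {
    unsigned char *pixels;    /* owned by the struct; NULL means "no data" */
    size_t size;
};

void image_init(struct image *img)
{
    img->pixels = NULL;       /* start in the valid, empty state */
    img->size = 0;
}

int image_load(struct image *img, size_t size)
{
    img->pixels = malloc(size);
    if (img->pixels == NULL)
        return -1;
    img->size = size;
    return 0;
}

void image_destroy(struct image *img)
{
    free(img->pixels);
    img->pixels = NULL;       /* back to the valid, empty state; safe to destroy again */
    img->size = 0;
}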

John Burton
+10  A: 

Setting a pointer to NULL after free is a dubious practice that is often popularized as a "good programming" rule on a patently false premise. It is one of those fake truths that belong to the "sounds right" category but in reality achieve absolutely nothing useful (and sometimes leads to negative consequences).

Allegedly, setting a pointer to NULL after free is supposed to prevent the dreaded "double free" problem when the same pointer value is passed to free more than once. In reality though, in 9 cases out of 10 the real "double free" problem occurs when different pointer objects holding the same pointer value are used as arguments for free. Needless to say, setting a pointer to NULL after free achieves absolutely nothing to prevent the problem in such cases.

Of course, it is possible to run into the "double free" problem when using the same pointer object as an argument to free. However, in reality situations like that normally indicate a problem with the general logical structure of the code, not a mere accidental "double free". A proper way to deal with the problem in such cases is to review and rethink the structure of the code in order to avoid the situation where the same pointer is passed to free more than once. In such cases setting the pointer to NULL and considering the problem "fixed" is nothing more than an attempt to sweep the problem under the carpet. It simply won't work in the general case, because the problem with the code structure will always find another way to manifest itself.

Finally, if your code is specifically designed to rely on the pointer value being NULL or not NULL, it is perfectly fine to set the pointer value to NULL after free. But as a general "good practice" rule (as in "always set your pointer to NULL after free") it is, once again, a well-known and pretty useless fake, often followed by some for purely religious, voodoo-like reasons.
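
For instance, a sketch of the situation described above, where the NULL-after-free rule does not help because the second free happens through a different pointer object (function and variable names are invented):

#include <stdlib.h>

static void process(char *data)
{
    /* ... work with data ... */
    free(data);
    data = NULL;          /* only clears the local copy of the pointer */
}

void caller(void)
{
    char *buf = malloc(128);
    if (buf == NULL)
        return;
    process(buf);         /* buf still holds the old address afterwards */
    free(buf);            /* double free, even though the "rule" was followed */
}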

AndreyT
Definitely. I don't remember ever causing a double-free that would be fixed by setting the pointer to NULL after freeing, but I've caused plenty that would not.
LnxPrgr3
+1 for `However, in reality situations like that normally indicate a problem with the general logical structure of the code, not a mere accidental "double free".`
Dacav