views: 687 · answers: 7

Let's say I have the following C code:

#include <stdlib.h>

int main(void) {
  int *p = malloc(10 * sizeof *p);
  *p = 42;
  return 0;  // Exiting without freeing the allocated memory
}

When I compile and execute that C program, i.e. after allocating some space in memory, will the memory I allocated still be allocated (i.e. still taking up space) after I exit the application and the process terminates?

+3  A: 

Yes. The OS cleans up resources. Well ... old versions of NetWare didn't.

Edit: As San Jacinto pointed out, there are certainly systems (aside from NetWare) that do not do that. Even in throw-away programs, I try to free all resources, just to keep up the habit.

Mark Wilkins
I'm not downvoting, but this is a pretty dangerous post for posterity. DOS is still used on many embedded platforms, and I SERIOUSLY doubt that it does the memory cleanup for you. The sweeping generalization is wrong.
San Jacinto
@San Jacinto: That is a good point. That is why I did make the NetWare reference, but it probably could use clarification. I'll edit it a bit.
Mark Wilkins
@San DOS is not a multi-tasking OS - when a DOS program (excluding TSRs) ends, all the memory is available for the next program to be loaded.
anon
@Neil thanks for the reminder, but I was referring to a TSR-like program that would launch when an event occurs, as is a common use for embedded systems. Nonetheless, thank you for your expertise and clarification where I failed :)
San Jacinto
A: 

That really depends on the operating system, but on any operating system you're likely to encounter, the allocation will disappear when the process exits.

Graham Lee
+2  A: 

Yes, the operating system releases all memory when the process ends.

Draemon
I don't see why this was downvoted. malloc'ed memory will be released when the process dies (the Wikipedia definition of malloc says so).
Arve
+19  A: 

It depends on the operating system. The majority of modern (and all major) operating systems will free memory not freed by the program when it ends.

Relying on this is bad practice and it is better to free memory explicitly. The issue isn't just that your code looks bad. You may decide you want to integrate your small program into a larger, long-running one, and then a while later have to spend hours tracking down memory leaks.
Relying on a feature of an operating system also makes the code less portable.
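
For the question's snippet, the explicit release is a one-line addition; a minimal sketch:

#include <stdlib.h>

int main(void) {
  int *p = malloc(10 * sizeof *p);
  if (p == NULL)
    return 1;  // allocation failed
  *p = 42;
  free(p);     // release the block explicitly before exiting
  return 0;
}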

Yacoby
I once encountered Win98 on an embedded platform, and based on that experience I can say that it does NOT free memory when programs close.
San Jacinto
"You may decide you want to ..." -- but compare: YAGNI.
Ken
@Ken: it really is no work to free resources.
Chaoz
@Ken It was an example. Also, there is a line between YAGNI and sloppy coding. Not freeing resources crosses it. The YAGNI principle was also meant to be applied to features, not code that makes the program work correctly. (And not freeing memory is a bug).
Yacoby
+1: the most important thing to consider is that memory management is, as Yacoby quite correctly states, _"a feature of the operating system"_. Unless I am mistaken, the programming language does not define what happens before or after program execution.
D.Shawley
anecdotal evidence of why explicitly freeing can be bad: I wrote a command-line utility which built a huge linked list; freeing the nodes on exit delayed program termination by several seconds(!), whereas not doing so allowed the OS to reclaim the memory in bulk...
Christoph
@Christoph I find it odd that the OS could free a large linked list so quickly, as the memory the list occupies wouldn't be contiguous like a large array would be, and it seems this would cause performance problems. Why is this not the case?
San Jacinto
Operating systems that support virtual memory don't go through the trouble of explicitly releasing memory. It is virtual; it just ceases to be.
Hans Passant
D.Shawley: The programming language doesn't define system calls or filesystems, either. A program that only uses exactly what the programming language spec defines is perfectly portable, and practically useless.
Ken
Freeing memory manually takes more time, takes more code, and introduces the possibility of bugs (tell me you've never seen a bug in deallocation code!). It's not "sloppy" to intentionally omit something which is worse in every way for your particular use case. Unless or until you mean to run it on some ancient/tiny system which can't free pages after process termination, or integrate it into a larger program (YAGNI), it looks like a net loss to me. I know it hurts a programmer's ego to think of not cleaning it up yourself, but in what practical way is it actually better?
Ken
@Ken: a lot of my experience is in an _unhosted_ environment so there is not an OS in the normal sense - there is a small kernel that provides some messaging and task control but little more in terms of services. If you do not release resources they are never released. I consider not releasing resources that you have acquired to be hubris. Assuming that _someone else will do it for you_ is not a habit that I readily espouse.
D.Shawley
@Ken: there is no better way to find out that code is corrupting the heap than by deallocating the memory. Flipping the ignore bit on heap corruption is unwise. Not cleaning up in the Release build is okay, I guess.
Hans Passant
D.Shawley: Without an OS, you have to do a lot of things by hand. It didn't sound like the poser of this question was in such a situation, though.
Ken
nobugz: Finally, a plausible argument in favor of manual deallocation! If anything, though, I'd put it *on* in release builds, since unknown environments with unknown data is where I want *more* information about what's going on.
Ken
+2  A: 

It depends: operating systems will usually clean it up for you, but if you're working on, for instance, embedded software, then it might not be released.

Just make sure you free it; it can save you a lot of time later when you might want to integrate it into a larger project.

Chaoz
A: 

What's happening here (in a modern OS) is that your program runs inside its own "process." This is an operating system entity that is endowed with its own address space, file descriptors, etc. Your malloc calls are allocating memory from the "heap", or unallocated memory pages that are assigned to your process.

When your program ends, as in this example, all of the resources assigned to your process are simply recycled/torn down by the operating system. In the case of memory, all of the memory pages that are assigned to you are simply marked as "free" and recycled for the use of other processes. Pages are a lower-level concept than what malloc handles; as a result, the specifics of malloc/free are all simply washed away as the whole thing gets cleaned up.

It's the moral equivalent of not bothering to delete each file individually when you're done using your laptop and want to give it to a friend: you just format the hard drive.

All this said, as all other answerers are noting, relying on this is not good practice:

  1. You should always be programming to take care of resources, and in C that means memory as well. You might end up embedding your code in a library, or it might end up running much longer than you expect (see the sketch after this list).
  2. Some OSs (older ones and maybe some modern embedded ones) may not maintain such hard process boundaries, and your allocations might affect others' address spaces.
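
For point 1, one common shape for this (the names buffer_create and buffer_destroy are made up for illustration) is to pair every allocation with a matching release function, so the same code can later be embedded in a library or a long-running program:

#include <stdlib.h>

typedef struct {
  int *values;
  size_t count;
} buffer_t;

// Allocate a buffer of `count` ints; returns NULL on failure.
buffer_t *buffer_create(size_t count) {
  buffer_t *b = malloc(sizeof *b);
  if (b == NULL)
    return NULL;
  b->values = malloc(count * sizeof *b->values);
  if (b->values == NULL) {
    free(b);
    return NULL;
  }
  b->count = count;
  return b;
}

// Release everything buffer_create allocated.
void buffer_destroy(buffer_t *b) {
  if (b != NULL) {
    free(b->values);
    free(b);
  }
}

A short-lived tool can get away with never calling buffer_destroy, but a host program that calls buffer_create in a loop cannot.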
quixoto
+1  A: 

In general, modern general-purpose operating systems do clean up after you, but explicitly freeing anyway can be good practice for various reasons that others have given.

However, here is a reason to skip freeing memory: efficient shutdown. For example, suppose your application contains a large cache in memory. If, when it exits, it goes through the entire cache structure and frees it one piece at a time, that serves no useful purpose and wastes resources. In particular, consider the case where the memory pages containing your cache have been swapped to disk by the operating system; by walking the structure and freeing it you're bringing all of those pages back into memory, wasting significant time and energy for no actual benefit, and possibly even causing other programs on the system to get swapped out!
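
One way to get the best of both (this is only a sketch, not code from any particular project) is to make the teardown conditional: walk and free the cache in debug builds, where leak checkers are useful, and skip it in release builds so the OS can reclaim the pages in bulk:

#include <stdlib.h>

struct cache_entry {
  struct cache_entry *next;
  char payload[4096];
};

static void cache_free(struct cache_entry *head) {
  while (head != NULL) {
    struct cache_entry *next = head->next;
    free(head);
    head = next;
  }
}

int main(void) {
  struct cache_entry *cache = NULL;
  // ... build up and use a large cache ...

#ifndef NDEBUG
  cache_free(cache);  // debug builds: free everything so leak checkers stay useful
#endif
  return 0;           // release builds: just exit and let the OS reclaim the pages
}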

As a related example, there are high-performance servers that work by creating a process for each request, then having it exit when done; by this means they don't even have to track memory allocation, and never do any freeing or garbage collection at all, since everything just vanishes back into the operating system's free memory at the end of the process.
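
A minimal POSIX sketch of that per-request-process pattern (the actual request handling is omitted and the names are only illustrative):

#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

// Runs in the child process: allocates freely and never calls free().
static void handle_request(void) {
  char *scratch = malloc(1024 * 1024);
  if (scratch == NULL)
    _exit(1);
  // ... do the work for this request ...
  _exit(0);  // process ends; the OS reclaims all of its memory at once
}

int main(void) {
  for (;;) {
    // ... wait for the next request to arrive ...
    pid_t pid = fork();
    if (pid == 0)
      handle_request();       // child: never returns
    else if (pid > 0)
      waitpid(pid, NULL, 0);  // parent: reap the finished child
    else
      return 1;               // fork failed
  }
}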

Kevin Reid