Is it ever acceptable to have a memory leak in your C or C++ application?

What if you allocate some memory and use it until the very last line of code in your application (for example, in a global object's destructor)? As long as the memory consumption doesn't grow over time, is it OK to trust the OS to free your memory for you when your application terminates (on Windows, Mac, and Linux)? Would you even consider this a real memory leak if the memory was being used continuously until it was freed by the OS?
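Roughly, the situation I mean looks like this (a minimal sketch; all the names are made up for illustration):

    #include <cstdlib>

    // One allocation, made at startup and deliberately never freed.
    static int* g_lookup_table = static_cast<int*>(std::malloc(1024 * sizeof(int)));

    struct Logger {
        ~Logger() {
            // Still uses g_lookup_table on the very last "line" of the program,
            // so there is no obvious point at which to call free().
            (void)g_lookup_table;
        }
    };

    static Logger g_logger;  // destroyed after main() returns

    int main() {
        // ... the whole program uses g_lookup_table ...
        return 0;            // the OS reclaims the memory at process exit
    }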

What if a third-party library forced this situation on you? Would you refuse to use that third-party library no matter how great it otherwise might be?

I only see one practical disadvantage: these benign leaks will show up as false positives in memory leak detection tools.

+26  A: 

In theory no, in practice it depends.

It really depends on how much data the program is working on, how often the program is run and whether or not it is running constantly.

If I have a quick program that reads a small amount of data, makes a calculation, and exits, a small memory leak will never be noticed. Because the program is not running for very long and only uses a small amount of memory, the leak will be small and freed when the program exits.

On the other hand if I have a program that processes millions of records and runs for a long time, a small memory leak might bring down the machine given enough time.

As for third party libraries that have leaks, if they cause a problem either fix the library or find a better alternative. If it doesn't cause a problem, does it really matter?

I don't know if you read my whole question or not. I'm saying that the memory is used until the very end of the application. It doesn't grow with time. The only no-no is that there isn't a call to free/delete.
Then it isn't really a memory leak. A memory leak is a small amount of unused but unfreed memory that gets larger over time. What you are talking about is a memory droplet. Do not concern yourself with this unless your droplet is very large.
"If it doesn't cause a problem, does it really matter?" Nope, it doesn't matter at all. I wish more people got that instead of getting religious.
@Imbue -- don't ask a question if you don't want it to be answered. If you're fine with the memory pool or leak, bully for you. But many of us have had to work long hours correcting bugs a lazy developer had decided "doesn't cause a problem."
@John: That is generally less a question of lazy developers and more a question of evolving software. We all make mistakes, bugs are our trade; we make them we fix them, that is what we do. It is always a balance between upfront cost and long-term maintenance, that balance is never straightforward.
My point is that "religion" is there for a reason. Could I imagine a circumstance where I would release software with a memory leak or pool? Yes. Do I want to write on a public board that this is ok? No.
> [T]he memory is used until the very end of the application.
If it's used, it's not a leak.
> It doesn't grow with time.
The growing is usually caused by the same code leaking multiple times.
> [T]here isn't a call to free/delete.
All modern OSes will free the memory on program exit.
Max Lybbert
If you're using MFC (I'll assume the OP is, since he mentions C and C++), memory leaks are pretty much unavoidable. I personally have tracked several right into MFC and had to just "let them go." In my experience, ATL is better but more difficult to work with.
@John, it is a balance between cost and quality. Actually it is really straightforward. Do I want to write perfect code? Yes. Can customers afford perfect code? Generally no, or at least it isn't a good choice for them. It is a question of practicality and realism, here mem leaks are acceptable.
Hmm, I think I just contradicted myself; I should have chosen a different word rather than 'straightforward' in my previous, previous comment to JohnMcG. It should probably read: the balance is never simple.
John, I 100% agree with you. @Imbue: the question is almost "how much do you accept?" Sloppy is sloppy. How about I leave a shrimp behind your monitor? Stink is stink. Every time we cave, our industry caves a bit. If you know there's a leak and you know you caused it, then you should fix it.
+48  A: 

I don't consider it to be a memory leak unless the amount of memory being "used" keeps growing. Having some unreleased memory, while not ideal, is not a big problem unless the amount of memory required keeps growing.

Jim C
Technically, it's still a leak because the rest of the system can't use that memory.
Bill the Lizard
Technically, a leak is memory that is allocated and all references to it are lost. Not deallocating the memory at the end is just lazy.
Martin York
If you have a 1-time memory leak of 4 GB, that's a problem.
John Dibling
Doesn't matter if it's growing or not. Other programs can't use the memory if you have it allocated.
Bill the Lizard
But his application is using that memory until it exits. I think he just means he didn't keep the initial pointer returned from the allocation. The object is still being useful "until the very last line of code in [the] application" so freeing it is not desired until the app exits.
@sk: Then that's perfectly okay. Whatever function uses the memory last should clean it up.
Bill the Lizard
I think the term for this situation is a memory "pool," capturing that there is some memory that has not been de-allocated, but it is not growing.
The application starts, it allocates memory, and the pointer is kept. From there a half dozen global objects use that pointer continuously and in their destructors. How can the last function free the memory?
> Other programs can't use the memory if you have it allocated.
Well, the OS can always swap your memory to disk, and allow other applications to use the RAM you weren't taking advantage of.
Max Lybbert
Paging is not a desirable state to be in. You can't just let your programs hang on to whatever memory they want and count on the OS to bail you out. If you deallocate memory you're not using the OS doesn't need to spend time paging, which leads to better performance for all applications running.
Bill the Lizard
@Imbue: If you're using the memory up until the program ends, then you're doing it right.
Bill the Lizard
The idea that you say let paging take care of it.. Bleck. What if it's a gig of unkempt memory, and the app starts and stops 10 times.. Now your page file is full and your OS crashes.. Please never apply for a job with the company I work for. BLECK!
If the program is very short-lived, then a leak might not be so bad. Also, while NOT ideal, paging isn't as expensive as some here make it out to be, because the program isn't interested in that memory (and thus won't be swapping all the time) - unless, of course, you have a GC...
Man, I read this and almost choked on my glass of water.
John Bellone
+1  A: 

It's really not a leak if it's intentional, and it's not a problem unless it's a significant amount of memory, or could grow to be a significant amount of memory. It's fairly common not to clean up global allocations during the lifetime of a program. If the leak is in a server or long-running app, or grows over time, then it's a problem.

Sanjaya R
+7  A: 

I'm sure that someone can come up with a reason to say Yes, but it won't be me. Instead of saying no, I'm going to say that this shouldn't be a yes/no question. There are ways to manage or contain memory leaks, and many systems have them.

There are NASA systems on devices that leave the earth that plan for this. The systems will automatically reboot every so often so that memory leaks will not become fatal to the overall operation. Just an example of containment.

That's actually an example of software aging. Fascinating subject of study.
Konrad Rudolph
+2  A: 

I think you've answered your own question. The biggest drawback is how they interfere with the memory leak detection tools, but I think that's a HUGE drawback for certain types of applications.

I work with legacy server applications that are supposed to be rock solid but they have leaks and the globals DO get in the way of the memory detection tools. It's a big deal.

In the book "Collapse" by Jared Diamond, the author wonders about what the guy was thinking who cut down the last tree on Easter Island, the tree he would have needed in order to build a canoe to get off the island. I wonder about the day many years ago when that first global was added to our codebase. THAT was the day it should have been caught.

Corey Trager
+17  A: 

There is nothing conceptually wrong with having the OS clean up after the application is run.

It really depends on the application and how it will be run. Continually occurring leaks in an application that needs to run for weeks has to be taken care of, but a small tool that calculates a result without too high of a memory need should not be a problem.

There is a reason why many scripting languages do not garbage collect cyclical references… for their usage patterns, it's not an actual problem and would thus be as much of a waste of resources as the wasted memory.

About scripting languages: Python uses refcounting but has a GC just to free cyclical references. In other languages, the programmer often avoids explicitly cyclical references altogether, which creates other problems.
The earlier versions of PHP didn't release memory; they just ran from start to end, growing in memory - after the typical 0.1 seconds of execution time, the script would exit, and all memory would be reclaimed.
+3  A: 

I see the same problem as all scenario questions like this: What happens when the program changes, and suddenly that little memory leak is being called ten million times and the end of your program is in a different place so it does matter? If it's in a library then log a bug with the library maintainers, don't put a leak into your own code.

In that case the impact of the memory leak changes, and you need to re-evaluate the priority of plugging the leak.
John Dibling
@John: You better at least document the leak then. Even then, I wouldn't trust someone to not ignore a big red flashing comment and copy-and-paste leaky code anyway. I prefer not to give someone the ability to do that in the first place.
+2  A: 

I'll answer no.

In theory, the operating system will clean up after you if you leave a mess (now that's just rude, but since computers don't have feelings it might be acceptable). But you can't anticipate every possible situation that might occur when your program is run. Therefore (unless you are able to conduct a formal proof of some behaviour), creating memory leaks is just irresponsible and sloppy from a professional point of view.

If a third-party component leaks memory, this is a very strong argument against using it, not only because of the imminent effect but also because it shows that the programmers work sloppily and that this might also impact other metrics. Now, when considering legacy systems this is difficult (consider web browsing components: to my knowledge, they all leak memory) but it should be the norm.

Konrad Rudolph
+4  A: 

This is so domain-specific that it's hardly worth answering. Use your freaking head.

  • space shuttle operating system: nope, no memory leaks allowed
  • rapid development proof-of-concept code: fixing all those memory leaks is a waste of time.

and there is a spectrum of intermediate situations.

The opportunity cost ($$$) of delaying a product release to fix all but the worst memory leaks usually dwarfs any feelings of being "sloppy or unprofessional". Your boss pays you to make him money, not to get a warm, fuzzy feeling.

Dustin Getz
Very short-sighted attitude. You're basically saying that there is no need to use fundamentally sound programming practices until a defect is found to be caused by those practices. Problem is that software that is written using sloppy methods tends to have more defects than software that isn't.
John Dibling
I don't believe that at all. And memory management is more complicated than writing clean methods.
Dustin Getz
Dustin obviously works in the real world like most of us, where we perpetually work against insane deadlines to keep up with the competition. So dealing with bugs should be done in a pragmatic way. By wasting too much time on unimportant bugs in unimportant programs, you won't get your stuff done.
Wouter van Nifterick
The problem with this attitude is: when do you start fixing the leaks? *"OK, it's a powerplant, but it's just coal, not Uranium. Why fix leaks here?"* - I learnt in the real world that if you don't do the right thing from the very beginning, all the time, it just never happens. That attitude breeds projects that are "99% complete" after two weeks and remain so for two months.
+7  A: 

If you allocate memory and use it until the last line of your program, that's not a leak. If you allocate memory and forget about it, even if the amount of memory isn't growing, that's a problem. That allocated but unused memory can cause other programs to run slower or not at all.

Bill the Lizard
Not really, since if it's unused, it will just get paged out. When the app exits, all the memory is released.
As long as it's allocated other programs won't be able to use it. It won't get paged out if you don't deallocate it.
Bill the Lizard
Of course it will - that's what virtual memory is all about. You can have 1 GB of actual RAM, and yet have 4 processes each fully allocating 2 GB of virtual memory (so long as your page file is big enough).
Of course, you'll get nasty paging problems if each of those processes are actively using all that memory.
Okay, I understand what you're talking about now. If you deallocate memory you're not using, you'll reduce the need for paging. If you keep it allocated, your application will still keep it when it's paged back in.
Bill the Lizard
+4  A: 

You have to first realize that there's a big difference between a perceived memory leak and an actual memory leak. Very frequently, analysis tools will report many red herrings, and label something as having been leaked (memory, or resources such as handles, etc.) when it actually isn't. Often this is due to the analysis tool's architecture. For example, certain analysis tools will report run-time objects as memory leaks because they never see those objects freed. But the deallocation occurs in the runtime's shutdown code, which the analysis tool might not be able to see.

With that said, there will still be times when you will have actual memory leaks that are either very difficult to find or very difficult to fix. So now the question becomes is it ever OK to leave them in the code?

The ideal answer is, "no, never." A more pragmatic answer may be "no, almost never." Very often in real life you have a limited amount of resources and time to resolve an endless list of tasks. When one of the tasks is eliminating memory leaks, the law of diminishing returns very often comes into play. You could eliminate, say, 98% of all memory leaks in an application in a week, but the remaining 2% might take months. In some cases it might even be impossible to eliminate certain leaks because of the application's architecture without a major refactoring of code. You have to weigh the costs and benefits of eliminating the remaining 2%.

John Dibling
+10  A: 

I think in your situation the answer may be that it's okay. But you definitely need to document that the memory leak is a conscious decision. You don't want a maintenance programmer to come along, slap your code inside a function, and call it a million times. So if you make the decision that a leak is okay you need to document it (IN BIG LETTERS) for whoever may have to work on the program in the future.

If this is a third party library you may be trapped. But definitely document that this leak occurs.

But basically, if the memory leak is a known quantity, like a 512 KB buffer or something, then it is a non-issue. If the memory leak keeps growing, like every time you call a library function your memory increases by 512 KB and is not freed, then you may have a problem. If you document it and control the number of times the call is executed, it may be manageable. But then you really need documentation, because while 512 KB isn't much, 512 KB over a million calls is a lot.
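To make the distinction concrete, here's a hypothetical sketch (invented names) of a bounded, documented allocation versus the same allocation placed inside a frequently called function:

    // INTENTIONAL ONE-TIME ALLOCATION -- NEVER FREED.
    // Bounded at 512 KB for the lifetime of the process; the OS reclaims it at exit.
    // Do NOT move this into a function that gets called repeatedly.
    static char* g_scratch_buffer = new char[512 * 1024];

    // By contrast, this version leaks 512 KB on *every* call and grows without bound:
    void process_record() {
        char* scratch = new char[512 * 1024];
        // ... use scratch ...
        // missing: delete[] scratch;
    }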

Also, you need to check your operating system documentation. If this is an embedded device, there may be operating systems that don't free all the memory when a program exits. I'm not sure, maybe this isn't true. But it is worth looking into.

"But you definitely need to document that the memory leak is a conscious decision." Thank heavens. The best point made so far.
que que
+2  A: 

I agree with vfilby – it depends. In Windows, we treat memory leaks as relatively serious bugs. But it very much depends on the component.

For example, memory leaks are not very serious for components that run rarely, and for limited periods of time. These components run, do their work, then exit. When they exit, all their memory is freed implicitly.

However, memory leaks in services or other long-running components (like the shell) are very serious. The reason is that these bugs 'steal' memory over time. The only way to recover this is to restart the components. Most people don't know how to restart a service or the shell – so if their system performance suffers, they just reboot.

So, if you have a leak – evaluate its impact in a few ways:

  1. To your software and your user's experience.
  2. To the system (and the user) in terms of being frugal with system resources.
  3. Impact of the fix on maintenance and reliability.
  4. Likelihood of causing a regression somewhere else.


+2  A: 

Historically, it did matter on some operating systems under some edge cases. These edge cases could exist in the future.

Here's an example, on SunOS in the Sun 3 era, there was an issue if a process used exec (or more traditionally fork and then exec), the subsequent new process would inherit the same memory footprint as the parent and it could not be shrunk. If a parent process allocated 1/2 gig of memory and didn't free it before calling exec, the child process would start using that same 1/2 gig (even though it wasn't allocated). This behavior was best exhibited by SunTools (their default windowing system), which was a memory hog. Every app that it spawned was created via fork/exec and inherited SunTools footprint, quickly filling up swap space.

+151  A: 


As professionals, the question we should be asking ourselves is not "Is it ever OK to do this?" but rather "Is there ever a good reason to do this?" And "hunting down that memory leak is a pain" isn't a good reason.

I like to keep things simple. And the simple rule is that my program should have no memory leaks.

That makes my life simple, too. If I detect a memory leak, I eliminate it, rather than run through some elaborate decision tree structure to determine whether it's an "acceptable" memory leak.

It's similar to compiler warnings – will the warning be fatal to my particular application? Maybe not.

But it's ultimately a matter of professional discipline. Tolerating compiler warnings and tolerating memory leaks is a bad habit that will ultimately bite me in the rear.

To take things to an extreme, would it ever be acceptable for a surgeon to leave some piece of operating equipment inside a patient?

Although it is possible that a circumstance could arise where the cost/risk of removing that piece of equipment exceeds the cost/risk of leaving it in, and there could be circumstances where it was harmless, if I saw this question posed and saw any answer other than "no," it would seriously undermine my confidence in the medical profession.

If a third party library forced this situation on me, it would lead me to seriously suspect the overall quality of the library in question. It would be as if I test drove a car and found a couple of loose washers and nuts in one of the cupholders – it may not be a big deal in itself, but it betrays a lack of commitment to quality, so I would consider alternatives.

True and not true at the same time. Ultimately, most of us are wage slaves, and any desire for craftsmanship must take a back seat to the requirements of the business. If that 3rd party library has a leak and saves 2 weeks of work, there may be a business case to use it, etc...
I would use the library anyway, if it was something I needed and there were no decent alternatives, but I would log a bug with the maintainers.
While I'd personally go with exactly the same answer, there are programs that hardly free memory at all. The reason is that they are a) intended to run on OSes that free memory, and b) designed not to run very long. Rare constraints for a program indeed, but I accept this as perfectly valid.
Basically all mainstream OSes free memory, except when you have shared InterProcess objects and reference counting is used for them (i.e. COM on Windows, for instance). Even DOS, I think, did free memory. I would be curious to know exceptions :-)
Blaisorblade haha that's a funny one!
Ray Hidayat
It's an ok answer, but the OP is describing a (possibly) memory-inefficient program, not a memory leak.
Robert Paulson
as an example, python was (is?) notorious for leaking memory. choosing not to use it out of principle would be foolish.
Dustin Getz
To add some reasons for early checking: when your debugging tools are flooded with "benign" leaks, how are you going to find the "real" one? What if you add a batch feature, and suddenly your 1K/hour leak becomes a 1K/second leak?
@Dustin: Like C++, it is difficult in python to manage memory in the presence of reference cycles, due to the reference counting scheme it uses.
Perfect is the enemy of good.
David Plumpton
Hmm is "not leaking memory" "perfect"?
@JohnMcG - pertaining to memory leaks; yes. Perfect: being complete of its kind, without defect.
And pertaining to deaths, having a live patient at the end of surgery is "perfect" as well.
+1  A: 

I totally agree with JohnMcG, and just want to add that I have myself had problems discovering real, potentially serious memory leaks in time, just because it has been accepted to have the benign ones. When these have grown to be so many over time, it becomes more and more difficult to detect the serious ones in the flood of benign ones.

So, at least for your fellow programmers' sake (and also for yourself in the future), please try to eliminate them as soon as possible.

Stefan Rådström
+2  A: 

In this sort of question context is everything. Personally I can't stand leaks, and in my code I go to great lengths to fix them if they crop up, but it is not always worth it to fix a leak, and when people are paying me by the hour I have on occasion told them it was not worth my fee for me to fix a leak in their code. Let me give you an example:

I was triaging a project, doing some perf work and fixing a lot of bugs. There was a leak during the application's initialization that I tracked down and fully understood. Fixing it properly would have required a day or so refactoring a piece of otherwise functional code. I could have done something hacky (like stuffing the value into a global and grabbing it at some point when I knew it was no longer in use, to free it), but that would have just caused more confusion for the next guy who had to touch the code.

Personally I would not have written the code that way in the first place, but most of us don't get to always work on pristine well designed codebases, and sometimes you have to look at these things pragmatically. The amount of time it would have taken me to fix that 150 byte leak could instead be spent making algorithmic improvements that shaved off megabytes of ram.

Ultimately, I decided that leaking 150 bytes for an app that used around a gig of RAM and ran on a dedicated machine was not worth fixing, so I wrote a comment saying that it was leaked, what needed to be changed in order to fix it, and why it was not worth it at the time.

Louis Gerbarg
+2  A: 

Even if you are sure that your 'known' memory leak will not cause havoc, don't do it. At best, it will pave a way for you to make a similar and probably more critical mistake at a different time and place.

For me, asking this is like questioning "Can I run the red light at 3 AM when no one is around?". Well, sure, it may not cause any trouble at that time, but it will provide a lever for you to do the same in rush hour!

+6  A: 

I can count on one hand the number of "benign" leaks that I've seen over time.

So the answer is a very qualified yes.

An example: if you have a singleton resource that needs a buffer to store a circular queue or deque, but doesn't know how big the buffer will need to be and can't afford the overhead of locking for every reader, then allocating an exponentially doubling buffer but not freeing the old ones will leak a bounded amount of memory per queue/deque. The benefit is that these speed up every access dramatically and can change the asymptotics of multiprocessor solutions by never risking contention for a lock.

I've seen this approach used to great benefit for things with very clearly fixed counts such as per-CPU work-stealing deques, and to a much lesser degree in the buffer used to hold the singleton /proc/self/maps state in Hans Boehm's conservative garbage collector for C/C++, which is used to detect the root sets, etc.

While technically a leak, both of these cases are bounded in size and in the growable circular work stealing deque case there is a huge performance win in exchange for a bounded factor of 2 increase in the memory usage for the queues.
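A rough sketch of the idea (simplified to a single writer; the type and names are invented): the writer publishes a larger buffer but never frees the old one, so a reader that loaded the old pointer can keep using it without any locking.

    #include <atomic>
    #include <cstddef>
    #include <cstring>

    // Grow-only buffer: old buffers are deliberately never freed, so readers
    // holding the old pointer stay valid. The total "leak" is bounded, since
    // the sum of all discarded buffers is less than the current capacity.
    struct GrowOnlyBuffer {
        std::atomic<int*> data;
        std::size_t capacity;

        explicit GrowOnlyBuffer(std::size_t cap) : data(new int[cap]), capacity(cap) {}

        void grow() {                       // called by the single writer only
            int* old_buf = data.load();
            int* new_buf = new int[capacity * 2];
            std::memcpy(new_buf, old_buf, capacity * sizeof(int));
            capacity *= 2;
            data.store(new_buf);            // old_buf is intentionally not deleted
        }
    };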

Edward Kmett
+6  A: 

If you allocate a bunch of heap at the beginning of your program, and you don't free it when you exit, that is not a memory leak per se. A memory leak is when your program loops over a section of code, and that code allocates heap and then "loses track" of it without freeing it.

In fact, there is no need to make calls to free() or delete right before you exit. When the process exits, all of its memory is reclaimed by the OS (this is certainly the case with POSIX. On other OSes – particularly embedded ones – YMMV).

The only caution I'd have with not freeing the memory at exit time is that if you ever refactor your program so that it, for example, becomes a service that waits for input, does whatever your program does, then loops around to wait for another service call, then what you've coded can turn into a memory leak.

I beg to differ. That *is* “a memory leak per se”.
Konrad Rudolph
It's not a leak until you "lose" the reference to the object. Presumably, if the memory is used for the lifetime of the program, then it's not leaked. If the reference is not lost until exit() is called, then it is absolutely *not* a leak.
Amiga DOS was the last O/S I looked at that didn't clean up behind processes. Be aware, though, that System V IPC shared memory can be left around even if no process is using it.
Jonathan Leffler
Palm doesn't free memory "leaked" until you hotsync. It came well after the Amiga. I've run apps on my Palm emulator that had leaks.. Never did they make their way to my actual Palm.
+3  A: 

This was already discussed ad nauseam. Bottom line is that a memory leak is a bug and must be fixed. If a third party library leaks memory, it makes one wonder what else is wrong with it, no? If you were building a car, would you use an engine that is occasionally leaking oil? After all, somebody else made the engine, so it's not your fault and you can't fix it, right?

But if you owned a car with an engine that occasionally leaks oil, do you spend money to fix it, or do you keep an eye on the oil levels and top it up from time to time. The answer depends on all kinds of factors.
This is not about owning a car. This is about building a car. If you get a third-party library with memory leaks and you absolutely have to use it, then you live with it. But if you are the one writing a system or a library, it is your responsibility to make sure it is bug-free.
+1 treat it like any other bug. (That doesn't mean "fix instantly" in my book, but "needs to be fixed" for sure)
+2  A: 

Generally a memory leak in a standalone application is not fatal, as it gets cleaned up when the program exits.

What do you do for Server programs that are designed so they don't exit?

If you are the kind of programmer that does not design and implement code where the resources are allocated and released correctly, then I don't want anything to do with you or your code. If you don't care to clean up your leaked memory, what about your locks? Do you leave them hanging out there too? Do you leave little turds of temporary files lying around in various directories?

Leak that memory and let the program clean it up? No. Absolutely not. It's a bad habit, that leads to bugs, bugs, and more bugs.

Clean up after yourself. Yo momma don't work here no more.

I have worked on server programs that deliberately use processes rather than threads, so that memory leaks and segmentation faults cause limited damage.
Interesting approach. I would be a bit concerned about processes that fail to exit and continue to gobble up memory.
+1  A: 

It looks like your definition of "memory leak" is "memory that I don't clean up myself." All modern OSes will free it on program exit. However, since this is a C++ question, you can simply wrap the memory in question inside an appropriate std::auto_ptr which will call delete when it goes out of scope.
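A sketch of what that might look like (std::auto_ptr was the standard smart pointer at the time; it has since been deprecated in favour of std::unique_ptr):

    #include <memory>

    struct Config { /* ... */ };

    // The auto_ptr has static storage duration, so its destructor runs at
    // program termination and deletes the owned object -- nothing is left
    // for a leak detector to complain about.
    static std::auto_ptr<Config> g_config(new Config);

    int main() {
        // use *g_config for the entire lifetime of the program
        return 0;
    }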

Max Lybbert
+24  A: 

Let's get our definitions correct first. A memory leak is when memory is dynamically allocated, e.g. with malloc(), and all references to the memory are lost without the corresponding free. An easy way to make one is like this:

    #define BLK ((size_t)1024)

    void * vp;
    while (1) {
        vp = malloc(BLK);   /* the address of the previous block is lost here */
    }

Note that every time around the while(1) loop, 1024 (+overhead) bytes are allocated, and the new address assigned to vp; there's no remaining pointer to the previous malloc'ed blocks. This program is guaranteed to run until the heap runs out, and there's no way to recover any of the malloc'ed memory. Memory is "leaking" out of the heap, never to be seen again.

What you're describing, though, sounds like

    int main(){
        void * vp = malloc(LOTS);
        // Go do something useful
        return 0;
    }

You allocate the memory, work with it until the program terminates. This is not a memory leak; it doesn't impair the program, and all the memory will be scavenged up automagically when the program terminates.

Generally, you should avoid memory leaks. First, because like altitude above you and fuel back at the hangar, memory that has leaked and can't be recovered is useless; second, it's a lot easier to code correctly, not leaking memory, at the start than it is to find a memory leak later.

Charlie Martin
Now consider a few dozen of these allocations. Now consider having to move the "main" body to a routine that gets called multiple times. Enjoy. - I agree with the sentiment that it's not such a big problem in this scenario, but scenarios change. As they say, always write code as if the guy who maintains it knows where you live.
Well, the point is that memory that is malloc'ed and held until the program calls _exit() isn't "leaked".
Charlie Martin
It is a memory leak and it can impair your program. Future allocations can fail in this process - because surely you are checking that malloc returned non-nil everywhere. By overusing memory, such as in an embedded situation where memory is scarce, this could be the difference between life and death.
Mike, that's just not true. In a compliant C environment, ending main frees all process resources. In an embedded environment like you describe, you might see that situation, but you wouldn't have a main. Now, I'll grant that there might be flawed embedded environments for which this wouldn't be true, but then I've seen flawed environments that couldn't cope with += correctly too.
Charlie Martin
+10  A: 

I believe the answer is no, never allow a memory leak, and I have a few reasons which I haven't seen explicitly stated. There are great technical answers here but I think the real answer hinges on more social/human reasons.

(First, note that as others mentioned, a true leak is when your program, at any point, loses track of memory resources that it has allocated. In C, this happens when you malloc() to a pointer and let that pointer leave scope without doing a free() first.)

The important crux of your decision here is habit. When you code in a language that uses pointers, you're going to use pointers a lot. And pointers are dangerous; they're the easiest way to add all manner of severe problems to your code.

When you're coding, sometimes you're going to be on the ball and sometimes you're going to be tired or mad or worried. During those somewhat distracted times, you're coding more on autopilot. The autopilot effect doesn't differentiate between one-off code and a module in a larger project. During those times, the habits you establish are what will end up in your code base.

So no, never allow memory leaks for the same reason that you should still check your blind spots when changing lanes even if you're the only car on the road at the moment. During times when your active brain is distracted, good habits are all that can save you from disastrous missteps.

Beyond the "habit" issue, pointers are complex and often require a lot of brain power to track mentally. It's best to not "muddy the water" when it comes to your usage of pointers, especially when you're new to programming.

There's a more social aspect too. By proper use of malloc() and free(), anyone who looks at your code will be at ease; you're managing your resources. If you don't, however, they'll immediately suspect a problem.

Maybe you've worked out that the memory leak doesn't hurt anything in this context, but every maintainer of your code will have to work that out in his head too when he reads that piece of code. By using free() you remove the need to even consider the issue.

Finally, programming is writing a mental model of a process to an unambiguous language so that a person and a computer can perfectly understand said process. A vital part of good programming practice is never introducing unnecessary ambiguity.

Smart programming is flexible and generic. Bad programming is ambiguous.

Jason L
I love the habit idea. I also agree. If I see a memory leak, I always wonder what other corners the coder cut. Especially if it's obvious.
+16  A: 

Many people seem to be under the impression that once you free memory, it's instantly returned to the operating system and can be used by other programs.

This isn't true. Operating systems commonly manage memory in 4KiB pages. malloc and other sorts of memory management get pages from the OS and sub-manage them as they see fit. It's quite likely that free() will not return pages to the operating system, under the assumption that your program will malloc more memory later.

I'm not saying that free() never returns memory to the operating system. It can happen, particularly if you are freeing large stretches of memory. But there's no guarantee.

The important fact: If you don't free memory that you no longer need, further mallocs are guaranteed to consume even more memory. But if you free first, malloc might re-use the freed memory instead.

What does this mean in practice? It means that if you know your program isn't going to require any more memory from now on (for instance it's in the cleanup phase), freeing memory is not so important. However if the program might allocate more memory later, you should avoid memory leaks - particularly ones that can occur repeatedly.

Also see this comment for more details about why freeing memory just before termination is bad.

A commenter didn't seem to understand that calling free() does not automatically allow other programs to use the freed memory. But that's the entire point of this answer!

So, to convince people, I will demonstrate an example where free() does very little good. To make the math easy to follow, I will pretend that the OS manages memory in 4000 byte pages.

Suppose you allocate ten thousand 100-byte blocks (for simplicity I'll ignore the extra memory that would be required to manage these allocations). This consumes 1MB, or 250 pages. If you then free 9000 of these blocks at random, you're left with just 1000 blocks - but they're scattered all over the place. Statistically, about 5 of the pages will be empty. The other 245 will each have at least one allocated block in them. That amounts to 980KB of memory, that cannot possibly be reclaimed by the operating system - even though you now only have 100KB allocated!

On the other hand, you can now malloc() 9000 more blocks without increasing the amount of memory your program is tying up.
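A small demonstration of the pattern (a sketch; the numbers mirror the example above, and the actual page behaviour depends on the malloc implementation):

    #include <cstdlib>

    int main() {
        const int kBlocks = 10000;
        static char* blocks[kBlocks];

        // Allocate ten thousand 100-byte blocks (~1 MB plus allocator overhead).
        for (int i = 0; i < kBlocks; ++i)
            blocks[i] = static_cast<char*>(std::malloc(100));

        // Free roughly 90% of them at random. The survivors are scattered across
        // almost every page the allocator obtained, so the process footprint
        // barely shrinks even though only ~100 KB is still live.
        for (int i = 0; i < kBlocks; ++i)
            if (std::rand() % 10 != 0) {
                std::free(blocks[i]);
                blocks[i] = 0;
            }

        // New allocations, however, can reuse the freed space without growing
        // the footprint any further.
        for (int i = 0; i < kBlocks; ++i)
            if (blocks[i] == 0)
                blocks[i] = static_cast<char*>(std::malloc(100));

        return 0;
    }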

Even when free() could technically return memory to the OS, it may not do so. free() needs to achieve a balance between operating quickly and saving memory. And besides, a program that has already allocated a lot of memory and then freed it is likely to do so again. A web server needs to handle request after request after request - it makes sense to keep some "slack" memory available so you don't need to ask the OS for memory all the time.

What if other programs require the memory that your program is holding onto unnecessarily? Even though you might not need any more mallocs, free() the unused memory spaces :)
Mohit Nanda
You've totally missed my point. When you free() memory, other programs cannot use it!! (Sometimes they can, particularly if you free large blocks of memory. But often, they can't!) I will edit my post to make this clearer.
+2  A: 

As a general rule, if you've got memory leaks that you feel you can't avoid, then you need to think harder about object ownership.

But to your question, my answer in a nutshell is In production code, yes. During development, no. This might seem backwards, but here's my reasoning:

In the situation you describe, where the memory is held until the end of the program, it's perfectly okay not to release it. Once your process exits, the OS will clean up anyway. In fact, it might make the user's experience better: in a game I've worked on, the programmers thought it would be cleaner to free all the memory before exiting, causing the shutdown of the program to take up to half a minute! A quick change that just called exit() instead made the process disappear immediately, and put the user back at the desktop where he wanted to be.

However, you're right about the debugging tools: They'll throw a fit, and all the false positives might make finding your real memory leaks a pain. And because of that, always write debugging code that frees the memory, and disable it when you ship.
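One way to get both behaviours (a sketch with an invented helper): free everything in debug builds so the leak tools stay quiet, and skip the cleanup in release builds for an instant exit.

    #include <cstdlib>

    void free_all_allocations();   // hypothetical: frees every allocation the program tracks

    void shutdown() {
    #ifndef NDEBUG
        // Debug builds: release everything, so Valgrind or the CRT leak checker
        // reports only the real leaks.
        free_all_allocations();
    #endif
        // Release builds: skip the cleanup and let the OS reclaim everything instantly.
        std::exit(0);
    }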

+1  A: 

It really depends upon the usage of the object that is creating the memory leak. If you create the object many times over the lifetime of the application, then leaking it is bad, because that much more memory will be leaked. On the other hand, if there is a single instance of the object leaking only a small amount of memory, then it is not a problem.

A memory leak is a problem when the leak grows while the application is running.


As long as your memory utilization doesn't increase over time, it depends. If you're doing lots of complex synchronization in server software, say starting background threads that block on system calls, doing clean shutdown may be too complex to justify. In this situation the alternatives may be:

  1. A library that doesn't clean up its memory until the process exits.
  2. You write an extra 500 lines of code and add another mutex and condition variable to your class so that it can shut down cleanly from your tests – but this code is never used in production, where the server only terminates by crashing.
+3  A: 

No, you should not have leaks that the OS will clean up for you. The reason (not mentioned in the answers above, as far as I could check) is that you never know when your main() will be re-used as a function/module in another program. If your main() becomes a frequently-called function in another person's software, that software will have a memory leak that eats memory over time.



When an application shuts down, it can be argued that it is best to not free memory.

In theory, the OS should release the resources used by the application, but there are always some resources that are exceptions to this rule. So beware.

The good with just exiting the application:

  1. The OS gets one chunk to free instead of many, many small chunks. This means shutdown is much, much faster, especially on Windows with its slow memory management.

The bad with just exiting is actually two points:

  1. It is easy to forget to release resources that the OS does not track, or that the OS might wait a bit before releasing. One example is TCP sockets.
  2. Memory tracking software will report everything not freed at exit as leaks.

Because of this, you might want to have two modes of shutdown, one quick and dirty for end users and one slow and thorough for developers. Just make sure to test both :)

Jørn Jensen

Only in one instance: The program is going to shoot itself due to an unrecoverable error.

Steve Lacey

The best practice is to always free what you allocate, especially if writing something that is designed to run during the entire uptime of a system, even when cleaning up prior to exiting.

It's a very simple rule: programming with the intention of having no leaks makes new leaks easy to spot. Would you sell someone a car that you made, knowing that it sputtered gas on the ground every time it was turned off? :)

A few if () free() calls in a cleanup function are cheap, why not use them?

Tim Post
+2  A: 

While most answers concentrate on real memory leaks (which are not OK ever, because they are a sign of sloppy coding), this part of the question appears more interesting to me:

What if you allocate some memory and use it until the very last line of code in your application (for example, in a global object's destructor)? As long as the memory consumption doesn't grow over time, is it OK to trust the OS to free your memory for you when your application terminates (on Windows, Mac, and Linux)? Would you even consider this a real memory leak if the memory was being used continuously until it was freed by the OS?

If the associated memory is used, you cannot free it before the program ends. Whether the freeing is done by the program at exit or by the OS does not matter, as long as this is documented, so that changes don't introduce real memory leaks, and as long as there is no C++ destructor or C cleanup function involved in the picture. A not-closed file might be revealed through a leaked FILE object, but a missing fclose() might also cause the buffer not to be flushed.

So, back to the original case: it is IMHO perfectly OK in itself, so much so that Valgrind, one of the most powerful leak detectors, will report such leaks only if requested. On Valgrind, when you overwrite a pointer without freeing it beforehand, it gets considered a memory leak, because that is more likely to happen again and to cause the heap to grow endlessly.

Then, there are unfreed memory blocks which are still reachable. One could make sure to free all of them at exit, but that is just a waste of time in itself. The point is whether they could have been freed before. Lowering memory consumption is useful in any case.
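As a sketch of the two cases (the variable names are invented):

    #include <cstdlib>

    char* g_still_reachable;   // live pointer at exit: "still reachable", hidden by default

    int main() {
        // "Still reachable": allocated once, the pointer is kept until exit.
        g_still_reachable = static_cast<char*>(std::malloc(1024));

        // "Definitely lost": the first block's address is overwritten, so no
        // pointer to it exists anywhere. This is the leak Valgrind reports.
        char* p = static_cast<char*>(std::malloc(1024));
        p = static_cast<char*>(std::malloc(1024));
        std::free(p);

        return 0;
    }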

Wow... someone who knows what a memory leak is.
Simon Buchan

If you are using it up until the tail of your main(), it is simply not a leak (assuming a protected memory system, of course!).

In fact, freeing objects at process shutdown is the absolute worst thing you could do... the OS has to page back in every page you have ever created. Close file handles, database connections, sure, but freeing memory is just dumb.

Simon Buchan

If your code has any memory leaks, even known "acceptable" leaks, then you will have an annoying time using any memory leak tools to find your "real" leaks. Just like leaving "acceptable" compiler warnings makes finding new, "real" warnings more difficult.

+1  A: 

Is it OK for you to go to a mate's house, have a hell of a party, and leave without cleaning up?

Ron Elliott
Yes, if you're about to burn the place down anyway, go for it!
Zan Lynx
What if you know that The Cleaners are coming, and that whatever you do has no consequences? Furthermore... Your mate will never even know.

No, they are not O.K., but I've implemented a few allocators, memory dumpers, and leak detectors, and have found that as a pragmatic matter it's convenient to allow one to mark such an allocation as "Not a Leak as far as the Leak Report is concerned"...

This helps make the leak report more useful... and not crowded with "dynamic allocation at static scope not free'd by program exit"


Splitting hairs perhaps: what if your app is running on UNIX and can become a zombie? In this case the memory does not get reclaimed by the OS. So I say you really should de-allocate the memory before the program exits.

Eric M

It's perfectly acceptable to omit freeing memory on the last line of the program, since freeing it would have no effect on anything; the program never needs the memory again.


Memory leaks are OK if you are an experienced professional fighting a war and literally developing under fire for 18 hours a day. Same for releasing a debug EXE.

Pavel Radzivilovsky
+1  A: 

I believe it is okay if you have a program that will run for a matter of seconds and then quit and it is just for personal use. Any memory leaks will be cleaned up as soon as your program ends.

The problem comes when you have a program that runs for a long time and users rely on it. Also, it is a bad coding habit to let memory leaks exist in your program, especially for work, if that code may be turned into something else someday.

All in all, it's better to remove memory leaks.

+1  A: 

I took one class in high school on C and the teacher said always make sure to free when you malloc.

But when I took another course in college, the professor said it was OK not to free for small programs that only run for a second. So I suppose it doesn't hurt your program, but it is good practice to free, for strong, healthy code.


I'm going to give the unpopular but practical answer that it's always wrong to free memory unless doing so will reduce the memory usage of your program. For instance, a program that makes a single allocation or series of allocations to load the dataset it will use for its entire lifetime has no need to free anything. In the more common case of a large program with very dynamic memory requirements (think of a web browser), you should obviously free memory you're no longer using as soon as you can (for instance when closing a tab/document/etc.), but there's no reason to free anything when the user clicks "exit", and doing so is actually harmful to the user experience.

Why? Freeing memory requires touching memory. Even if your system's malloc implementation happens not to store metadata adjacent to the allocated memory blocks, you're likely going to be walking recursive structures just to find all the pointers you need to free.

Now, suppose your program has worked with a large volume of data, but hasn't touched most of it for a while (again, web browser is a great example). If the user is running a lot of apps, a good portion of that data has likely been swapped to disk. If you just exit(0) or return from main, it exits instantly. Great user experience. If you go to the trouble of trying to free everything, you may spend 5 seconds or more swapping all the data back in, only to throw it away immediately after that. Waste of user's time. Waste of laptop's battery life. Waste of wear on the hard disk.

This is not just theoretical. Whenever I find myself with too many apps loaded and the disk starts thrashing, I don't even consider clicking "exit". I get to a terminal as fast as I can and type killall -9 ... because I know "exit" will just make it worse.


Some time ago I would have said yes, that it was sometimes acceptable to leave some memory leaks in your program (at least while rapid prototyping), but having now had the experience 5 or 6 times that tracking down even the smallest leak revealed some really severe functional errors, I've changed my mind. Leaving a leak in a program happens when the life cycle of a data entity is not really known, which shows a crass lack of analysis. So in conclusion, it is always a good idea to know what happens in a program.


Think of the case where the application is later used from another one, with the possibility of opening several instances in separate windows or one after the other to do something. If it is not run as a process, but as a library, then the calling program leaks memory because you thought you could skip the memory cleanup.

Use some sort of smart pointer that does it for you automatically (e.g. scoped_ptr from Boost libs)

Marius K

I guess it's fine if you're writing a program meant to leak memory (i.e. to test the impact of memory leaks on system performance).