views: 1433
answers: 11

I've just started experimenting with SDL in C++, and I thought checking for memory leaks regularly may be a good habit to form early on.

With this in mind, I've been running my 'Hello world' programs through Valgrind to catch any leaks, and although I've removed everything except the most basic SDL_Init() and SDL_Quit() statements, Valgrind still reports 120 bytes lost and 77k still reachable.
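
For reference, the programs I'm testing are essentially just the following (give or take the subsystem flags), run under valgrind --leak-check=full:

    #include <SDL/SDL.h>

    int main(int argc, char *argv[]) {
        // Initialise and immediately shut down SDL; no other work is done.
        if (SDL_Init(SDL_INIT_VIDEO) != 0) {
            return 1;
        }
        SDL_Quit();
        return 0;
    }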

My question is: Is there an acceptable limit for memory leaks, or should I strive to make all my code completely leak-free?

+2  A: 

It depends on your application. Some leaking may be unavoidable (given the time needed to find the leak vs. the deadline). As long as your application can run for as long as you need it to, and doesn't consume a crazy amount of memory in that time, it's probably fine.

GavinCattell
+7  A: 

For a desktop application, small memory leaks are not a real problem. For services (servers) no memory leaks are acceptable.

Gamecat
This is mostly true, but server leaks can be mitigated a bit via "application recycling" (periodic restarts of server processes). Still, it is far better to clean it up.
Kristopher Johnson
I think such periodic restarts are just a variant on the debugging approach that says "the data at this point in the code is wrong, so change it" instead of finding what's making it wrong in the first place. I would never use a server that had to be periodically restarted.
rmeador
Servers can leak memory at startup. As long as it's a fixed startup overhead, it won't bring down the server. E.g. globals without a proper dtor don't hurt too much.
MSalters
Your statement is wrong. From a practical point of view, memory leaks can be acceptable in servers as long as either (a) they aren't significant enough to impact users, or (b) a strategy such as regular restarts can be employed without impacting availability.
johnstok
+7  A: 

If you are really worried about memory leaks, you will need to do some calculations.

You need to test your application for, say, an hour and then calculate the leaked memory. This way, you get a leaked-bytes-per-minute figure.

Now you will need to estimate the average length of a session of your program. For example, for notepad.exe, 15 minutes sounds like a good estimate to me.

If (average session length) * (leaked bytes per minute) > 0.3 * (memory normally occupied by your process), then you should probably put more effort into reducing the leaks. I just made up the 0.3; use common sense to determine your own acceptable threshold.

Remember that an important aspect of being a programmer is being a software engineer, and very often engineering is about choosing the least bad of several bad options. Maths always comes in handy when you need to measure just how bad an option actually is.
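
For instance, plugging some made-up numbers into that rule (every figure below is purely illustrative):

    #include <cstdio>

    int main() {
        // Hypothetical measurements: a 15-minute average session, a leak
        // rate of 2 KiB per minute, and a 40 MiB normal memory footprint.
        const double session_minutes    = 15.0;
        const double leak_bytes_per_min = 2.0 * 1024.0;
        const double normal_footprint   = 40.0 * 1024.0 * 1024.0;
        const double threshold_factor   = 0.3;   // the made-up factor from above

        const double projected_leak = session_minutes * leak_bytes_per_min;
        if (projected_leak > threshold_factor * normal_footprint)
            std::printf("Significant: spend time fixing the leaks.\n");
        else
            std::printf("Negligible for a typical session.\n");
        return 0;
    }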

DrJokepu
+13  A: 

Be careful that Valgrind isn't picking up false positives in its measurements.

Many naive implementations of memory analyzers flag lost memory as a leak when it isn't really one.

Maybe have a read of some of the papers in the external links section of the Wikipedia article on Purify. I know that the documentation that comes with Purify describes several scenarios where you get false positives when trying to detect memory leaks and then goes on to describe the techniques Purify uses to get around the issues.

BTW I'm not affiliated with IBM in any way. I've just used Purify extensively and will vouch for its effectiveness.

Edit: Here's an excellent introductory article covering memory monitoring. It's Purify-specific, but the discussion of the types of memory errors is very interesting.

HTH.

cheers,

Rob

Rob Wells
+10  A: 

You have to be careful with the definition of "memory leak". Something which is allocated once on first use, and freed on program exit, will sometimes be flagged by a leak detector, because the detector started counting before that first use. But it's not a leak (although it may be bad design, since it is probably some kind of global).

To see whether a given chunk of code leaks, you might reasonably run it once, then clear the leak-detector, then run it again (this of course requires programmatic control of the leak detector). Things which "leak" once per run of the program usually don't matter. Things which "leak" every time they're executed usually do matter eventually.

I've rarely found it too difficult to hit zero on this metric, which is equivalent to observing creeping memory usage as opposed to lost blocks. I had one library where it got so fiddly, with caches and UI furniture and whatnot, that I just ran my test suite three times over, and ignored any "leaks" which didn't occur in multiples of three blocks. I still caught all or almost all the real leaks, and analysed the tricky reports once I'd got the low-hanging fruit out of the way. Of course the weaknesses of using the test suite for this purpose are (1) you can only use the parts of it that don't require a new process, and (2) most of the leaks you find are the fault of the test code, not the library code...
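
With Valgrind, the client-request macros in <valgrind/memcheck.h> give you that kind of programmatic control. A rough sketch of the run-it-twice idea (do_work() and its deliberate leak are just stand-ins):

    #include <cstdio>
    #include <valgrind/memcheck.h>

    // Hypothetical stand-in for "a given chunk of code"; it leaks on
    // purpose so the counters below actually move.
    static void do_work() {
        new int[16];
    }

    // Ask Memcheck for a leak check now and return the leaked byte count.
    static unsigned long leaked_bytes() {
        unsigned long leaked = 0, dubious = 0, reachable = 0, suppressed = 0;
        VALGRIND_DO_LEAK_CHECK;
        VALGRIND_COUNT_LEAKS(leaked, dubious, reachable, suppressed);
        return leaked;
    }

    int main() {
        do_work();                           // first run: one-off setup happens here
        const unsigned long first = leaked_bytes();

        do_work();                           // second run: anything new is a per-call leak
        const unsigned long second = leaked_bytes();

        std::printf("per-iteration leak: %lu bytes\n", second - first);
        return 0;
    }

Differencing the two counts isn't quite the same as clearing the detector, but it isolates the per-run growth in the same way.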

Steve Jessop
+1  A: 

As per Rob Wells' comments on Purify, download and try out some of the other tools out there. I use BoundsChecker and AQTime, and have seen different false positives in both over the years. Note that the memory leak might also be in a third-party component, which you may want to exclude from your analysis. For example, MFC had a number of memory leaks in its first few versions.

IMO, memory leaks should be tracked down for any code that is going into a code base that may have a long life. If you can't track them down, at least make a note that they exist for the next user of the same code.

Shane MacLaughlin
+1  A: 

First of all, memory leaks are only a serious problem when they grow over time; otherwise the app just looks a little bigger from the outside (obviously there's a limit here too, hence the 'serious'). When you have a leak that grows over time, you might be in trouble. How much trouble depends on the circumstances, though. If you know where the memory is going, and can make sure that you'll always have enough memory to run the program and everything else on that machine, you're still somewhat fine. If you don't know where the memory is going, however, I wouldn't ship the program and would keep digging.

TheMarko
+6  A: 

Most OSes (including Windows) will reclaim all of a program's allocated memory when the program exits. This includes any memory which the program itself may have lost track of.

Given that, my usual theory is that it's perfectly fine to leak memory during startup, but not OK to do it during runtime.

So really the question isn't whether you are leaking any memory, it's whether you are continually leaking it while your program runs. If you use your program for a while and, no matter what you do, it stays at 120 bytes lost rather than increasing, I'd say you have done great. Move on.
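
A contrived sketch of the difference (all names made up): the first allocation below is a fixed startup cost that a leak checker may report but that never grows, while the second leaks on every call and keeps growing for as long as the program runs.

    #include <string>

    // One-off: allocated on first use, never freed. Reported by leak
    // checkers, but the total never grows, so the program stays flat.
    static const std::string *startup_config() {
        static const std::string *config = new std::string("one-off");
        return config;
    }

    // Per-call: leaks every time it is called, so memory use keeps
    // climbing for the lifetime of the program. This is the kind that matters.
    static std::string process_request(const std::string &msg) {
        std::string *copy = new std::string(msg);   // the matching delete is missing
        return *copy;
    }

    int main() {
        startup_config();                  // one allocation, ever
        for (int i = 0; i < 100; ++i)
            process_request("hello");      // 100 separate leaked strings
        return 0;
    }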

T.E.D.
This is known as being in a 'steady state'. If a program reaches a steady state, it is not leaking.
QBziZ
+10  A: 

Living with memory leaks (and other careless issues) is, at best, very bad programming (in my opinion). At worst, it makes software unusable.

You should avoid introducing them in the first place and run the tools you and others have mentioned to try to detect them.

Avoid sloppy programming - there are enough bad programmers out there already - the world doesn't need another one.

EDIT

I agree - many tools can provide false positives.

Tim
But, as I mention below, you have to be sure that they're actual leaks and not just false positives from a naively implemented system.
Rob Wells
Also, if you have a program that is going to run for three years at a stretch, then you cannot really afford to leak anything, and certainly not anything that isn't a one-time leakage.
Jonathan Leffler
A: 

It does look like the SDL developers don't run Valgrind over their code, but of the two figures it reports, basically only the 120 bytes lost are worth caring about.

With Valgrind, "still reachable" memory is often not really leaked memory, especially in such a simple program. I can safely bet that there is basically no allocation happening in SDL_Quit(), so those "leaks" are just structures allocated once by SDL_Init().

Try adding useful work and seeing whether those amounts increase; try making a loop of useful work (like creating and destroying some SDL structure, as in the sketch at the end of this answer) and see whether the amount of leaked memory grows with the number of iterations. In the latter case, you should check the stack traces of the leaks and fix them.

Otherwise, those 77k of still-reachable memory count as "memory which should be freed at program end, but which SDL relies on the OS to free".

So, actually, what I'm more worried about right now are those 120 bytes, if they are not false positives (and false positives are usually few). False positives with Valgrind are mostly cases where the use of uninitialized memory is intentional (for instance, because it is actually padding).
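
Something like this is what I mean by a loop of useful work (sketched against SDL 1.2; the surface parameters are arbitrary). Bump ITERATIONS and compare Valgrind's totals between runs:

    #include <SDL/SDL.h>

    int main(int argc, char *argv[]) {
        if (SDL_Init(SDL_INIT_VIDEO) != 0)
            return 1;

        // Create and destroy a surface repeatedly. If Valgrind's leak
        // totals grow with ITERATIONS, the per-iteration code is leaking;
        // if they stay flat, the remaining reports are one-off allocations
        // made by SDL_Init() and the underlying libraries.
        const int ITERATIONS = 1000;
        for (int i = 0; i < ITERATIONS; ++i) {
            SDL_Surface *s = SDL_CreateRGBSurface(SDL_SWSURFACE, 64, 64, 32,
                                                  0x00FF0000, 0x0000FF00,
                                                  0x000000FF, 0xFF000000);
            if (s != NULL)
                SDL_FreeSurface(s);
        }

        SDL_Quit();
        return 0;
    }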

Blaisorblade
A: 

With SDL on Linux in particular, there seems to be some leakage in the underlying X Window System libraries. There's not much you can do about those (unless you want to try to fix the libraries themselves, which is probably not for the faint-hearted).

You can use valgrind's suppression mechanism (see --suppressions and --gen-suppressions in the valgrind man page) to tell it not to bother you with these errors.
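
For example (the suppression name and frames below are invented; generate real entries with --gen-suppressions so the call stacks match your system):

    # 1. Capture suppression entries for the reported leaks:
    #        valgrind --leak-check=full --gen-suppressions=all ./hello
    #
    # 2. Paste the generated blocks into a file, e.g. x11.supp:
    {
       ignore_libX11_startup_allocations
       Memcheck:Leak
       fun:malloc
       obj:*libX11*
    }
    #
    # 3. Run with the suppressions applied:
    #        valgrind --leak-check=full --suppressions=x11.supp ./hello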

In general, we do have to be a little more lenient with third-party libraries. While we should absolutely not accept memory leaks in our own code, and the presence of memory leaks should be a factor when choosing between alternative third-party libraries, sometimes there's no choice but to ignore them (though it may be a good idea to report them to the library maintainer).

Kieron