We have an application with hundreds of possible user actions, and we are thinking about how to enhance our memory leak testing.

Currently, here's how it happens: while manually testing the software, if the application appears to consume too much memory, we use a memory tool, find the cause, and fix it. It's a slow and inefficient process: the problems are discovered late, and it relies on the goodwill of a single developer.

How can we improve that?

  • Internally check that some actions (like "close file") actually recover some memory, and log it?
  • Assert on memory state inside our unit tests (though this seems like a tedious task)?
  • Check it manually at regular intervals?
  • Include such a check each time a new user story is implemented?
A: 

At my company we have programmed an endless action path for our application. The Java garbage collector should clean up all unused maps, lists, and the like, so we start the application on the endless action path and watch whether memory usage keeps growing.
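The endless-action-path idea is language-neutral (the answer's context is Java, but the sketch below is Python): drive random actions in a loop, return to a neutral state, and fail if memory has grown past a tolerance. The `open_file`/`close_file` actions are hypothetical stand-ins for real application actions, and `tracemalloc` plays the role of the memory profiler.

```python
import gc
import random
import tracemalloc

# Hypothetical user actions; a real harness would drive the application's UI or API.
_open_files = []

def open_file():
    _open_files.append(bytearray(10_000))   # simulate per-file memory

def close_file():
    if _open_files:
        _open_files.pop()

ACTIONS = [open_file, close_file]

def soak_test(iterations=10_000):
    """Run random actions in an endless-style loop; return memory growth in bytes."""
    tracemalloc.start()
    random.seed(0)
    # Warm up so caches and lazy initialisation don't count as leaks.
    for _ in range(100):
        random.choice(ACTIONS)()
    while _open_files:                       # return to a neutral state
        close_file()
    gc.collect()
    baseline, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        random.choice(ACTIONS)()
    while _open_files:                       # neutral state again before measuring
        close_file()
    gc.collect()
    current, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return current - baseline

growth = soak_test()
print(growth)   # should stay near zero if nothing leaks
```

A leak would show up as `growth` increasing with `iterations`; in a real harness you would fail the build when it exceeds a tolerance you choose.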

To check which objects are not being released, you can use JProfiler for Java.

Markus Lausberg
+3  A: 

Which language?

I'd use a tool such as Valgrind, try to fully exercise the program and see what it reports.

Draemon
+2  A: 

first line of defense:

  • a checklist of common memory-allocation errors for developers
  • coding guidelines

second line of defense:

  • code reviews
  • static code analysis (as part of the build process)
  • memory profiling tools

If you work with an unmanaged language (like C/C++), you can discover most memory leaks efficiently by hijacking the memory-management functions. For example, you can track all memory allocations and deallocations.
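A language-neutral sketch of the hijacking idea (shown in Python for brevity; in C/C++ you would interpose on `malloc`/`free` or override `operator new`/`operator delete`): wrap the allocation pair, keep a table of live allocations tagged with their call site, and whatever remains in the table at shutdown was leaked. `raw_alloc`/`raw_free` are hypothetical stand-ins for the real allocator.

```python
import itertools

_live = {}                    # allocation id -> (size, call-site tag)
_ids = itertools.count(1)

def raw_alloc(size):          # stand-in for the underlying allocator
    return next(_ids)

def raw_free(ptr):            # stand-in for the underlying deallocator
    pass

def tracked_alloc(size, tag="?"):
    ptr = raw_alloc(size)
    _live[ptr] = (size, tag)  # record every allocation with its call-site tag
    return ptr

def tracked_free(ptr):
    _live.pop(ptr, None)      # forget it on deallocation
    raw_free(ptr)

def leak_report():
    """Everything still in the table at shutdown was never freed."""
    return sorted(_live.items())

a = tracked_alloc(64, "parser")
b = tracked_alloc(128, "renderer")
tracked_free(a)
print(leak_report())          # only the 'renderer' block is still live
```

The call-site tag is what makes the report actionable: it tells you *where* the leaked block was allocated, not just that one exists.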

aku
+1  A: 

It seems to me that the core of the problem is not so much finding memory leaks as knowing when to test for them. You say you have lots of user actions, but you don't say what sequences of user actions are meaningful. If you can generate meaningful sequences at random, I'd argue hard for random testing. On random tests you would measure

  • Code coverage (with gcov or valgrind)
  • Memory usage (with valgrind)
  • Coverage of the user actions themselves

By "coverage of user actions" I mean statements like the following:

  • For every pair of actions A and B, if there is a meaningful sequence of actions in which A is immediately followed by B, then we have tested such a sequence.

If that's not true, then you can ask for what fraction of pairs A and B it is true.
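The pair-coverage fraction above can be measured mechanically. A minimal sketch, with hypothetical action names and plain random sequences standing in for a real generator of meaningful sequences:

```python
import itertools
import random

ACTIONS = ["open", "edit", "save", "close"]   # hypothetical action names

def random_sequence(length, rng):
    """Stand-in for a generator of meaningful action sequences."""
    return [rng.choice(ACTIONS) for _ in range(length)]

def pair_coverage(sequences):
    """Fraction of ordered pairs (A, B) seen with A immediately followed by B."""
    seen = {(s[i], s[i + 1]) for s in sequences for i in range(len(s) - 1)}
    all_pairs = set(itertools.product(ACTIONS, repeat=2))
    return len(seen & all_pairs) / len(all_pairs)

rng = random.Random(0)
tests = [random_sequence(20, rng) for _ in range(50)]
print(pair_coverage(tests))   # approaches 1.0 as random tests accumulate
```

In practice you would restrict `all_pairs` to the pairs that are actually meaningful for your application, and report the uncovered ones so new tests can target them.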

If you have the CPU cycles to afford it, you would probably also benefit from running valgrind or another memory-checking tool either before every commit to your source-code repository or during a nightly build.

Automate!

Norman Ramsey