I wrote an automated test to ensure that our application does not leak memory (managed or unmanaged), both now AND as development continues. The problem is that my test does NOT seem reliable, and I don't know whether that is inherent to .NET and the definition of a leak, or to the test itself.
The test works like this:
long start = PrivateBytes;
// here is the code for an action which is expected to be memory-constant
long difference = PrivateBytes - start;
Console.WriteLine(difference); // or Assert(difference < MyLimit);
with PrivateBytes defined as:
// Process and PrivateMemorySize64 come from System.Diagnostics
private static long PrivateBytes
{
    get
    {
        // Force a full collection and run finalizers so dead objects
        // are reclaimed before sampling the process's private bytes.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        return Process.GetCurrentProcess().PrivateMemorySize64;
    }
}
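For completeness, here is the whole pattern as a single compilable sketch. LeakTest is just an illustrative class name, DoAction is a hypothetical placeholder for the real action under test, and the PrivateBytes property is repeated so the sketch compiles on its own:

using System;
using System.Diagnostics;

static class LeakTest
{
    // Same property as above, repeated so this sketch is self-contained.
    private static long PrivateBytes
    {
        get
        {
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();
            return Process.GetCurrentProcess().PrivateMemorySize64;
        }
    }

    // Hypothetical stand-in for the memory-constant action under test.
    private static void DoAction()
    {
        // ... real action goes here ...
    }

    private static void Main()
    {
        // Repeating the measurement makes the run-to-run fluctuation visible.
        for (int run = 0; run < 5; run++)
        {
            long start = PrivateBytes;
            DoAction();
            long difference = PrivateBytes - start;
            Console.WriteLine("Run {0}: difference = {1} bytes", run, difference);
        }
    }
}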
My question is: why do I get such huge variations in difference? (For example, one run gives 11 MB, the next one 33 MB.) Are these variations normal, or can I get rid of them?
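As a point of comparison (this is only a sketch, reusing the hypothetical DoAction from above), the managed heap alone can be sampled next to the private-bytes reading; GC.GetTotalMemory(true) forces a full collection before it returns:

// Managed heap vs. private bytes around the same action.
long managedStart = GC.GetTotalMemory(true);
long privateStart = PrivateBytes;
DoAction();
long managedDiff = GC.GetTotalMemory(true) - managedStart;
long privateDiff = PrivateBytes - privateStart;
Console.WriteLine("Managed: {0} bytes, private bytes: {1} bytes", managedDiff, privateDiff);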
To be clear: I am NOT looking for a profiler tool! (I already use one!)