views:

153

answers:

6

At my work we're discussing different approaches to cleaning up a large amount of managed memory (~50-100 MB). There are two approaches on the table (read: two senior devs can't agree), and not having the experience, the rest of the team is unsure which approach is more desirable: performance or maintainability.

The data being collected consists of many small items (~30,000), which in turn contain other items; all objects are managed. There are a lot of references between these objects, including event handlers, but not to outside objects. We'll refer to this large group of objects and references as a single entity called a blob.

Approach #1: Make sure all references to objects in the blob are severed and let the GC handle the blob and all the connections.

Approach #2: Implement IDisposable on these objects, then call Dispose on them, set references to Nothing, and remove event handlers.

The theory behind the second approach is that large, longer-lived objects take longer for the GC to clean up. So, by cutting the large object graph into smaller bite-size morsels, the garbage collector will process them faster, thus yielding a performance gain.

So I think the basic question is this: does breaking apart large groups of interconnected objects optimize the data for garbage collection, or is it better to keep them together and rely on the garbage collector's algorithms to process the data for you?

I feel this is a case of premature optimization, but I do not know enough about the GC to know what helps or hinders it.

Edit: to add emphasis, the "blob" of memory is not a single large object; it is many small objects allocated separately.

A little more background in case it is helpful: we had 'leaks' in that objects were not getting GCed. Both approaches solve the leak issue, but at this point it is a debate over which is more appropriate.
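
To make the debate concrete, here is a rough C# sketch of the two approaches (the Item, Blob and Consumer types are hypothetical, just to show the shape of the code):

    using System;
    using System.Collections.Generic;

    // Hypothetical types: many small, interconnected managed objects.
    class Item
    {
        public event EventHandler Changed;
        public List<Item> Related = new List<Item>();
        public void RaiseChanged() { if (Changed != null) Changed(this, EventArgs.Empty); }
    }

    class Blob
    {
        public List<Item> Items = new List<Item>();
    }

    class Consumer
    {
        private Blob _blob;   // the only reference into the blob from the outside

        public Consumer(Blob blob) { _blob = blob; }

        // Approach #1: sever the outside reference and let the GC handle the rest.
        public void ReleaseBlobSimple()
        {
            _blob = null;
        }

        // Approach #2: walk the graph, remove handlers and null out internal references
        // before letting go (the approach being debated).
        public void ReleaseBlobManually()
        {
            foreach (Item item in _blob.Items)
            {
                item.Related.Clear();
                // event handlers would also be detached here, e.g. item.Changed -= someHandler;
            }
            _blob.Items.Clear();
            _blob = null;
        }
    }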

+8  A: 

The second approach is faulty - it assumes that implementing IDisposable will impact the garbage collector.

Unfortunately, IDisposable has nothing to do with garbage collection. It is purely about releasing unmanaged resources. It sounds like your 2nd senior dev is trying to be a bit "too clever" for their own good.

The first approach should be fine. As soon as you stop referencing the "blob", every object within the blob will become unrooted, and it should get cleaned up. This may happen at some indeterminate time after you release the reference (unless you explicitly tell the GC to collect, which I don't recommend). The interdependencies will be handled correctly for you.

Even supposing that implementing IDisposable and cleaning up the internal references could, theoretically, speed up the collection process, any (small) net gain would most likely be outweighed by the time spent processing all of that data - and it is really outside of your business concern.

However, I suspect it would actually slow down the garbage collector overall, not speed it up. Breaking up the data set into lots of objects will not help the GC run faster - it still has to track through the live references, which are no different in this situation.
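
A small self-contained sketch of what approach #1 relies on (the Item type here is just a stand-in): the GC traces reachability from the roots, so an internal cycle cannot keep the blob alive once the outside references are gone.

    using System;
    using System.Collections.Generic;

    class Item { public List<Item> Related = new List<Item>(); }

    class Demo
    {
        static void Main()
        {
            // Two items referencing each other: a tiny "blob" containing a cycle.
            Item a = new Item();
            Item b = new Item();
            a.Related.Add(b);
            b.Related.Add(a);

            WeakReference weak = new WeakReference(a);

            // Approach #1: sever only the outside references.
            a = null;
            b = null;

            // Collection forced here purely to demonstrate the point; not recommended in real code.
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            Console.WriteLine(weak.IsAlive);   // False: the internal cycle did not keep the blob alive
        }
    }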

Reed Copsey
The last thing you said is the most important -- GC cost is most affected by the NUMBER of live objects, and then by the SIZE of live objects. Cleaning up references actually only keeps the objects alive longer than necessary.
Ben Voigt
In other words, the second approach won't work. The first approach will; you can just call the garbage collector manually after removing all references to the blob.
Seun Osewa
Yeah - and the key is that it's the number of LIVE objects - tweaking the "dead" objects isn't going to change much.
Reed Copsey
Does the number of references affect performance at all, or just the number of objects? I think I understand: the garbage collector doesn't care about the collective size, but rather how many objects are in the blob, so severing the references between objects within the blob doesn't really break the blob into smaller parts, at least not as far as the GC is concerned.
Apeiron
The total number of references being used by ROOTED objects impacts the overall perf. of the GC - but the number of references (or even objects) that are unrooted really has no effect.
Reed Copsey
+2  A: 

The IDisposable interface has nothing to do with garbage collection.

It happens that some objects (like file streams) hold resources that can be precious (since the file descriptor limit for a process is usually much lower than the memory limit on modern operating systems). However, the garbage collector does not acknowledge them, and thus, if you're running out of file descriptors but still have plenty of memory, the garbage collector might not run.

The IDisposable interface sets a mechanism by which you can rest assured that all unmanaged resources associated with a managed object will be released once the object actually becomes useless, and not only when the garbage collector decides to run.

Consequently, making objects IDisposable will not impact how objects are garbage-collected. Even using the Dispose method to clear all references will have little to no impact on garbage collector runs; just clearing the references to your blob will let all your smaller objects become unrooted at once.
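
For example (ordinary C#, nothing specific to your blob), this is the kind of thing IDisposable exists for - the file handle is released at a well-defined point, independently of when the memory is reclaimed:

    using System;
    using System.IO;

    class Example
    {
        static void PrintFirstLine(string path)
        {
            // The using block guarantees the file handle is released right here,
            // via Dispose, regardless of when the GC eventually reclaims the
            // StreamReader object's memory.
            using (StreamReader reader = new StreamReader(path))
            {
                Console.WriteLine(reader.ReadLine());
            }
        }
    }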

zneak
+1  A: 

Microsoft implies that Dispose is faster than Finalize if you want performance for objects that hold unmanaged resources (file handles, GDI handles, etc). I don't think that is what you are trying to achieve (you haven't said anything about unmanaged resources).

Let the GC do its thing (as I type this, two other answers appear, saying the same thing, pretty much).
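
For reference, the usual shape of that pattern is sketched below; it is only worth the trouble if the objects really do own unmanaged handles, which doesn't seem to be the case here:

    using System;

    // Sketch of the usual Dispose/Finalize pattern; only worthwhile when the class
    // actually owns an unmanaged resource (a native handle, not plain managed memory).
    class HandleOwner : IDisposable
    {
        private IntPtr _handle;        // hypothetical unmanaged handle
        private bool _disposed;

        public void Dispose()
        {
            Dispose(true);
            GC.SuppressFinalize(this); // resource already released; skip the finalizer queue
        }

        protected virtual void Dispose(bool disposing)
        {
            if (_disposed) return;
            if (_handle != IntPtr.Zero)
            {
                // ReleaseNativeHandle(_handle);   // hypothetical native call
                _handle = IntPtr.Zero;
            }
            _disposed = true;
        }

        ~HandleOwner()
        {
            Dispose(false);            // safety net if Dispose was never called
        }
    }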

Stephen Kellett
"earlier" not "faster"
Ben Voigt
Earlier, yes. But also faster in some cases. Here is the quote from the Microsoft website: "In some cases, you might want to provide programmers using an object with the ability to explicitly release these external resources before the garbage collector frees the object. If an external resource is scarce or expensive, better performance can be achieved if the programmer explicitly releases resources when they are no longer being used."
Stephen Kellett
+1  A: 

Take a look at http://msdn.microsoft.com/en-us/magazine/cc534993.aspx

vittore
+1  A: 

Approach #2: Implement IDisposable on these objects, then call Dispose on them, set references to Nothing, and remove event handlers.

...

The theory behind the second approach is that large, longer-lived objects take longer for the GC to clean up. So, by cutting the large object graph into smaller bite-size morsels, the garbage collector will process them faster, thus yielding a performance gain.

I think this is not true; a garbage collector's costs typically depend on the number of living objects and their references, and on the number of dead objects (depending on the type of GC). Once you don't need an object (or objects) and cut the reference paths from the root objects to it/them, the number of references between the "garbage" objects doesn't matter. So, I'd say, just make sure there won't be dangling references from outside the "blobs" and you'll be OK.
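
As an illustration, a dangling reference from outside the blob usually looks something like this (the static cache is hypothetical; any GC root still pointing into the blob has the same effect):

    using System.Collections.Generic;

    class Item { /* a small managed object from the blob */ }

    static class ItemCache
    {
        // A static collection is reachable from a GC root: anything it references
        // stays alive no matter how thoroughly the blob's internals are torn apart.
        public static readonly List<Item> Recent = new List<Item>();
    }

    class Consumer
    {
        private List<Item> _blob = new List<Item>();

        public void ReleaseBlob()
        {
            _blob = null;   // not enough if items were also added to ItemCache.Recent;
                            // those remain reachable, and so does everything they reference
        }
    }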

jpalecek
GC performance in .NET does not depend on the number or size of unreachable objects *except* that 1) in general, the more objects are created, the more often the GC has to run, and 2) unreferenced objects *with a custom finalizer* have to be handled explicitly.
280Z28
@280Z28: Yeah, I considered garbage collectors in general, and in this case, the sweep phase of mark-and-sweep.
jpalecek
@280Z28: "GC performance in .NET does not depend on the number or size of unreachable objects". Sweeping more objects from gen2 will take longer.
Jon Harrop
+3  A: 

Neither approach makes sense. The GC has no trouble detecting circular references or complicated object graphs. There is no point in setting references to null, and IDisposable does nothing to improve GC performance.

If there's any lead in how you solved the problem, it is in setting events to null. Events have a knack for keeping objects referenced when they are wired up "backwards": the originator of the event stays alive while its clients are torn down. Unsubscribing then has to be done explicitly.
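
A sketch of that "backwards" situation, with hypothetical Publisher/Client names - the long-lived publisher's delegate list keeps every subscribed client reachable until it is explicitly unsubscribed:

    using System;

    class Publisher                          // long-lived originator of the event
    {
        public event EventHandler DataArrived;
        public void Raise() { if (DataArrived != null) DataArrived(this, EventArgs.Empty); }
    }

    class Client                             // shorter-lived subscriber inside the blob
    {
        private readonly Publisher _publisher;

        public Client(Publisher publisher)
        {
            _publisher = publisher;
            _publisher.DataArrived += OnDataArrived;   // the publisher now references this client
        }

        private void OnDataArrived(object sender, EventArgs e) { /* react to the event */ }

        // Without this explicit unsubscribe, dropping every other reference to the
        // client still leaves it reachable through the publisher's delegate list.
        public void Detach()
        {
            _publisher.DataArrived -= OnDataArrived;
        }
    }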

But trying to guess at this was the wrong approach to start with. Any decent memory profiler would have shown you which reference was keeping the graph alive.

Hans Passant