At my work we're discussing different approaches to cleaning up a large amount (~50-100 MB) of managed memory. There are two approaches on the table (read: two senior devs can't agree), and not having the experience, the rest of the team is unsure which approach is more desirable: the one favoring performance or the one favoring maintainability.
The data being collected consists of many small items, ~30,000, which in turn contain other items; all objects are managed. There are many references between these objects, including event handlers, but no references to outside objects. We'll refer to this large group of objects and references as a single entity: the blob.
Approach #1: Make sure all references to objects in the blob are severed and let the GC handle the blob and all the connections.
Approach #2: Implement IDisposable on these objects, then call Dispose on each of them, set references to Nothing, and remove the event handlers.
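To make the contrast concrete, here is a minimal sketch of the two approaches. The question is about .NET/VB, but the sketch is in Java, which has an analogous tracing collector; the `Item` class, its `children` list, and the `Consumer`-based handler list are all hypothetical stand-ins for the real objects, not the actual code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical stand-in for one of the ~30,000 small objects in the blob.
class Item {
    List<Item> children = new ArrayList<>();
    List<Consumer<Item>> changedHandlers = new ArrayList<>(); // event-handler analogue
    private boolean disposed = false;

    void addChangedHandler(Consumer<Item> h) { changedHandlers.add(h); }

    // Approach #2: explicit teardown, walking the graph and severing
    // every internal reference. The guard matters because the real blob
    // contains cycles, and an unguarded recursive dispose would loop forever.
    void dispose() {
        if (disposed) return;
        disposed = true;
        changedHandlers.clear();                  // remove event handlers
        for (Item child : children) child.dispose();
        children.clear();                         // set references to "Nothing"
    }
}

public class BlobDemo {
    public static void main(String[] args) {
        Item root = new Item();
        Item child = new Item();
        root.children.add(child);
        child.addChangedHandler(i -> System.out.println("changed"));

        // Approach #2: tear the graph apart by hand.
        root.dispose();
        System.out.println(root.children.isEmpty());          // true
        System.out.println(child.changedHandlers.isEmpty());  // true

        // Approach #1 would instead be just:
        //   root = null;   // drop the last external reference; the tracing GC
        //                  // reclaims the whole blob, cycles included.
    }
}
```

The key observation the sketch surfaces: a tracing collector does not "follow and break" each reference one by one at collection time; anything unreachable from the roots is reclaimed as a group, so Approach #1 is a one-line change while Approach #2 is a full graph walk maintained by hand.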
The theory behind the second approach is that large, longer-lived object graphs take longer for the GC to clean up. So, by cutting the large graph into smaller bite-size morsels, the garbage collector will process them faster, thus a performance gain.
So I think the basic question is this: does breaking apart large groups of interconnected objects optimize the data for garbage collection, or is it better to keep them together and rely on the garbage collection algorithms to process the data for you?
I feel this is a case of premature optimization, but I do not know enough about the GC to know what helps or hinders it.
Edit: to emphasize, the "blob" of memory is not a single large object; it is many small objects allocated separately.
A little more background in case it is helpful: we had 'leaks', in that objects were not getting GCed. Both approaches solve the leak issue, but at this point it is a debate over which is more appropriate.
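Since the original problem was objects not getting collected, one way to check that either approach actually fixes the leak is a weak-reference probe: hold a `WeakReference` to the blob's root, drop all strong references, and see whether the collector clears it. Again a Java sketch by analogy (the .NET equivalent would be `System.WeakReference` and `GC.Collect`); `buildBlob` is a hypothetical stand-in for constructing the real graph, and `System.gc()` is only a hint, so this is a diagnostic, not a guarantee.

```java
import java.lang.ref.WeakReference;

public class LeakCheck {
    // Hypothetical stand-in for building the real ~50-100 MB object graph.
    static Object buildBlob() {
        return new byte[1024 * 1024];
    }

    public static void main(String[] args) throws InterruptedException {
        Object blob = buildBlob();
        WeakReference<Object> probe = new WeakReference<>(blob);

        blob = null; // Approach #1: sever the only external reference

        // Nudge the collector a few times; System.gc() is advisory,
        // but in practice this is reliable enough for a leak check.
        for (int i = 0; i < 10 && probe.get() != null; i++) {
            System.gc();
            Thread.sleep(50);
        }
        System.out.println(probe.get() == null
                ? "blob collected"
                : "still reachable: something outside the blob holds a reference");
    }
}
```

If the probe never clears, some reference from outside the blob (a static event handler is a classic culprit) is still keeping it alive, and neither approach will help until that reference is found.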