I have a problem with memory leaks in my .NET Windows service application, so I've started reading articles about memory management in .NET. In one of Jeffrey Richter's articles I found an interesting practice called "object resurrection". It amounts to putting code in the finalizer that assigns a global or static variable to "this":

// C# does not let you override Finalize() directly; the finalizer is written with destructor syntax.
~SomeType() {
    Application.ObjHolder = this;      // resurrect: make the object reachable again via a static field
    GC.ReRegisterForFinalize(this);    // ask the GC to run this finalizer again the next time the object dies
}

I understand that this is a bad practice; however, I would like to know which patterns actually use it. If you know any, please describe them here.

+2  A: 

The only place I can think of using this, potentially, would be when you were trying to clean up a resource and the cleanup failed. If it were critical to retry the cleanup, you could, technically, "ReRegister" the object to be finalized, in the hope that the cleanup would succeed the second time.

That being said, I'd avoid this altogether in practice.
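
If you did go down that path, a minimal sketch might look like the following. TryRelease() is a hypothetical stand-in for whatever cleanup could fail; nothing here comes from the answer itself.

using System;

public class FragileResource
{
    private bool released;

    // Hypothetical cleanup that can fail transiently (e.g. a flaky native call).
    private bool TryRelease()
    {
        // ... real release logic would go here ...
        released = true;
        return released;
    }

    ~FragileResource()
    {
        if (!TryRelease())
        {
            // Cleanup failed: re-register so the GC runs this finalizer again
            // on a later collection, giving the cleanup a second chance.
            GC.ReRegisterForFinalize(this);
        }
    }
}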

Reed Copsey
+1  A: 

From the same article: "There are very few good uses of resurrection, and you really should avoid it if possible."

The best use I can think of is a "recycling" pattern. Consider a Factory that produces expensive, practically immutable objects; for instance, objects instantiated by parsing a data file, reflecting an assembly, or deep-copying a "master" object graph. The results are unlikely to change each time you perform this expensive process, so it is in your best interest to avoid instantiation from scratch. However, for design reasons the system must be able to create many instances (no singletons), and your consumers cannot know about the Factory, so they cannot "return" the object themselves; they may have the object injected, or be given a factory method delegate from which they obtain a reference. When the dependent class goes out of scope, normally the instance would as well.

A possible answer is to override Finalize(), clean up any mutable state in the instance, and then, as long as the Factory is still in scope, reattach the instance to some member of the Factory. This allows the garbage-collection process, in effect, to "recycle" the valuable portion of these objects when they would otherwise go out of scope and be totally destroyed. The Factory can check whether it has any recycled objects available in its "bin"; if so, it can polish one up and hand it out. The Factory only has to instantiate a new copy of the object if the total number of objects in use by the process increases.
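
A very loose sketch of what that could look like (all of the names here, ExpensiveObject, ExpensiveObjectFactory and Recycle, are invented for illustration):

using System;
using System.Collections.Concurrent;

public class ExpensiveObject
{
    // Expensive-to-build, practically immutable state lives here.
    public string CachedData { get; internal set; }

    // Mutable, per-consumer state that must be wiped before reuse.
    public object Tag { get; set; }

    ~ExpensiveObject()
    {
        Tag = null;                               // clean up the mutable portion
        ExpensiveObjectFactory.Recycle(this);     // reattach to the Factory's "bin" (resurrection)
        GC.ReRegisterForFinalize(this);           // allow the same instance to be recycled again later
    }
}

public static class ExpensiveObjectFactory
{
    private static readonly ConcurrentBag<ExpensiveObject> bin = new ConcurrentBag<ExpensiveObject>();

    public static ExpensiveObject Get()
    {
        if (bin.TryTake(out ExpensiveObject recycled))
            return recycled;                      // hand out a polished-up recycled instance

        // Only pay the expensive construction cost when no recycled instance is available.
        return new ExpensiveObject { CachedData = BuildExpensively() };
    }

    internal static void Recycle(ExpensiveObject instance) => bin.Add(instance);

    private static string BuildExpensively() => "result of parsing / reflection / deep copy";
}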

Other possible uses may include some highly specialized logger or audit implementation, where objects you wish to process after their death attach themselves to a work queue managed by that component. Once they have been handled, they can be totally destroyed.
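
Sketched very roughly (AuditedOperation, AuditQueue and Drain are invented names; here the finalizer does not re-register, so each object is audited once and then destroyed for good):

using System;
using System.Collections.Concurrent;

public class AuditedOperation
{
    public string Description { get; set; }

    ~AuditedOperation()
    {
        // Resurrect just long enough for the audit worker to record this object;
        // since the finalizer is not re-registered, the object dies for good
        // once the queue drops its reference.
        AuditQueue.Pending.Enqueue(this);
    }
}

public static class AuditQueue
{
    public static readonly ConcurrentQueue<AuditedOperation> Pending = new ConcurrentQueue<AuditedOperation>();

    public static void Drain()
    {
        while (Pending.TryDequeue(out AuditedOperation op))
            Console.WriteLine($"Audited after death: {op.Description}");
    }
}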

In general, if you want dependents to THINK they're getting rid of an object (or not have to bother), while you keep the instance around, resurrection may be a good tool. But you'll have to watch it VERY carefully to avoid situations in which the objects receiving resurrected references become "pack rats" and keep every instance ever created in memory for the lifetime of the process.

KeithS
+2  A: 

Speculative: In a Pool situation, like the ConnectionPool.

You might use it to reclaim objects that were not properly disposed but to which the application code no longer holds a reference. You can't track the checked-out objects in a List inside the Pool, because that strong reference would prevent the GC from ever collecting (and hence finalizing) them; the finalizer is how the Pool finds out the object was abandoned.
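
A speculative sketch of that, with invented names (PooledConnection, ConnectionPool): the pool only references idle objects, so an undisposed, unreferenced object eventually reaches its finalizer, which puts it back into the pool instead of letting it be lost.

using System;
using System.Collections.Concurrent;

public class PooledConnection : IDisposable
{
    public void Dispose() => ConnectionPool.Return(this);   // the well-behaved path

    ~PooledConnection()
    {
        // The consumer dropped its reference without disposing: reclaim the object.
        ConnectionPool.Return(this);
        GC.ReRegisterForFinalize(this);   // stay reclaimable if it is abandoned again later
    }
}

public static class ConnectionPool
{
    // Only idle objects are referenced here; checked-out objects are deliberately untracked.
    private static readonly ConcurrentBag<PooledConnection> idle = new ConcurrentBag<PooledConnection>();

    public static PooledConnection Acquire() =>
        idle.TryTake(out PooledConnection conn) ? conn : new PooledConnection();

    public static void Return(PooledConnection conn) => idle.Add(conn);
}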

Henk Holterman
Yes, my first idea was a pooled object with an internal counter of the number of times it can be resurrected, for example. When the counter is down to 0, the last finalization must be suppressed and the object dies. But I think that's not a good implementation of pooling anyway.
Vokinneberg
@Vokin, the counter strategy is not the only way to manage lifetime here. I think the main point is reclaiming a resource from the GC.
Henk Holterman
+1  A: 

A brother of mine once worked on a high-performance simulation platform. He told me that in that application, object construction was a demonstrable performance bottleneck. It seems the objects were large and required significant processing to initialize.

They implemented an object repository to contain "retired" object instances. Before constructing a new object they would first check to see if one already existed in the repository.
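
Something along these lines, perhaps (SimulationBody, BodyRepository and Reset() are placeholders I'm inventing; the actual design wasn't described in this much detail):

using System.Collections.Concurrent;

public class SimulationBody
{
    public SimulationBody()
    {
        // ... expensive initialization (large buffers, heavy preprocessing) ...
    }

    public void Reset()
    {
        // Return the instance to a clean initial state before it is handed out again.
    }
}

public static class BodyRepository
{
    private static readonly ConcurrentBag<SimulationBody> retired = new ConcurrentBag<SimulationBody>();

    // Check the repository before paying for a new construction.
    public static SimulationBody Take()
    {
        if (retired.TryTake(out SimulationBody body))
        {
            body.Reset();
            return body;
        }
        return new SimulationBody();
    }

    // Explicitly retire an instance once the simulation is done with it.
    public static void Retire(SimulationBody body) => retired.Add(body);
}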

The trade-off was increased memory consumption (as there might be many unused objects at a time) for increased performance (as the total number of object constructions was reduced).

Note that the decision to implement this pattern was based on the bottlenecks they observed through profiling in their specific scenario. I would expect this to be an exceptional circumstance.

kbrimington