Many years ago, I was admonished to, whenever possible, release resources in reverse order to how they were allocated. That is:

block1 = malloc( ... );
block2 = malloc( ... );

... do stuff ...

free( block2 );
free( block1 );

I imagine on a 640K MS-DOS machine, this could minimize heap fragmentation. Is there any practical advantage to doing this in a C# /.NET application, or is this a habit that has outlived its relevance?
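For the .NET case, the closest analogue I can picture is explicit disposal (the MemoryStream objects here are just stand-ins for any IDisposable resources):

// Stand-in IDisposable resources; any types implementing IDisposable would do.
var block1 = new MemoryStream();
var block2 = new MemoryStream();

// ... do stuff ...

block2.Dispose();
block1.Dispose();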

+3  A: 

Don't bother. The garbage collector reserves the right to defragment and move objects on the heap, so there's no telling what order things are in.

In addition, if you're disposing A and B, and A references B, it shouldn't matter whether A also disposes B when you dispose A, since the Dispose method should be callable more than once without an exception being thrown.
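A minimal sketch of what I mean (hypothetical Widget type): Dispose just short-circuits on repeat calls.

class Widget : IDisposable
{
    private bool _disposed;

    public void Dispose()
    {
        if (_disposed) return;   // second and later calls are harmless no-ops
        _disposed = true;
        // release whatever this object owns here
    }
}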

Will
True, provided you don't use a "disposed" reference by accident (through another object created from it), since you dispose in arbitrary order.
Reed Copsey
A: 

Nested 'usings' show that the 'outlived' part is not really on, and rarely is (not to go and say never, after 40 years of evidence). And that includes the stack-based VM that runs on, say, CMOS.

[ Despite some attempts by MSDN.com and Duffius to make it vanish ("you know, we manage it all for you"), the difference between the heap and the stack is still there. What a smart idea... in space ]

rama-jka toti
+3  A: 

If your resources are created well, this shouldn't matter (much).

However, many poorly written libraries don't do proper checking. Disposing of resources in the reverse of their allocation order typically means that you're disposing of resources that depend on other resources first, which can prevent poorly written libraries from causing problems. (You never dispose of a resource and then use one that depends on the first's existence.)

It is also good practice, since you're not going to accidentally dispose a resource required by some other object too early.

Here's an example: look at a database operation. You don't want to close/dispose your connection before closing/disposing your command (which uses the connection).
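For instance, with ADO.NET it might look roughly like this (the connection string is assumed to be defined elsewhere):

using (var connection = new SqlConnection(connectionString))   // created first
using (var command = new SqlCommand("SELECT 1", connection))   // created second, depends on the connection
{
    connection.Open();
    command.ExecuteScalar();
}   // command is disposed first, then the connection - the reverse of creation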

Reed Copsey
"Created well" is not good enough. Release has to be order-dependent, dependency-aware, and known in many circumstances. It couldn't matter more in implementations of databases, transactions, and anything that runs on a stack (most software out there). Locks are another example, and there are stacks of non-external, non-poor libraries using them. File operations and their locks are another. Event leakage another. Any unmanaged resource depending on yet another. Creation and destruction go hand in hand, and the idiom cannot be brokenly taken as Resource-Initialization-Is-"well-Creation".
rama-jka toti
So the WC in that RIIWC oxymoron is replaced with Acquisition, which implies a Release, by the way. And since memory and a large number of resources are mostly abstracted away, there goes the idea... and hacks of all kinds ensue. Put shortly, it's simply the nature of the problem, and it matters a lot.
rama-jka toti
And while I'm not defending order-dependence here, the correct observation is that it is hugely relevant but rarely desirable. It is also something even the VMs' official specifications are extremely restricted by: the Java implementation especially, and the CLR to a lesser but still significant extent. It is a hack not to break large bodies of working code and the assumptions made in them, a conscious decision by compiler and JIT back-end designers. Code that is capable of order-independent processing lends itself to a huge array of possibilities, but it can be infeasible for plenty of scenarios.
rama-jka toti
+1  A: 

If you are referring to the time the destructor on the objects gets called, then that's up to the garbage collector; the programmer can have very little influence over that, and it is explicitly non-deterministic according to the language definition.

If you are referring to calling IDisposable.Dispose(), then that depends on the behavior of the objects that implement the IDisposable interface.

In general, the order doesn't matter for most Framework objects, except to the extent that it matters to the calling code. But if object A maintains a dependency on object B, and object B is disposed, then it could very well be important not to do certain things with object A.

In most cases, Dispose() is not called directly; rather, it is called implicitly as part of a using or foreach statement, in which case the reverse-order pattern will naturally emerge from the statement nesting.

using(Foo foo = new Foo())
using(FooDoodler fooDoodler = new FooDoodler(foo))
{
  // do stuff
  // ...
  // fooDoodler automatically gets disposed before foo at the end of the using statement.
}
Jeffrey L Whitledge
A: 

"The runtime doesn't make any guarantees as to the order in which Finalize methods are called. For example, let's say there is an object that contains a pointer to an inner object. The garbage collector has detected that both objects are garbage. Furthermore, say that the inner object's Finalize method gets called first. Now, the outer object's Finalize method is allowed to access the inner object and call methods on it, but the inner object has been finalized and the results may be unpredictable. For this reason, it is strongly recommended that Finalize methods not access any inner, member objects."

http://msdn.microsoft.com/en-us/magazine/bb985010.aspx

So you can worry about your LIFO dispose semantics as much as you like, but if you leak one, cleanup falls to the finalizers, which will run in whatever order the CLR fancies.
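A rough sketch of the standard dispose pattern shows the consequence: on the finalizer path you deliberately leave managed members alone, because they may already have been finalized (Outer and Inner are hypothetical types):

class Inner : IDisposable
{
    public void Dispose() { /* release inner resources */ }
}

class Outer : IDisposable
{
    private readonly Inner _inner = new Inner();

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
            _inner.Dispose();   // safe: we got here from an explicit Dispose call
        // release unmanaged resources here on both paths
    }

    ~Outer()
    {
        Dispose(false);   // finalizer: do NOT touch _inner, its finalization order is not guaranteed
    }
}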

(This is more or less what Will said, above)

piers7