views: 268
answers: 4

I was making an RAII class that takes in a System.Windows.Forms control and sets its cursor, and in the destructor sets the cursor back to what it was.

But is this a bad idea? Can I safely rely that the destructor will be called when objects of this class go out of scope?

+10  A: 

This is a very, very bad idea.

Finalizers are not called when variables go out of scope. They're called at some point before the object is garbage collected, which could be a long time later.

Instead, you want to implement IDisposable, and then callers can use:

using (YourClass yc = new YourClass())
{
    // Use yc in here
}

That will call Dispose automatically.
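Under the hood, a `using` statement is just compiler shorthand for a try/finally that calls `Dispose`. A minimal sketch (the `Resource` class here is a made-up stand-in, not from the question):

```csharp
using System;

class Resource : IDisposable
{
    public bool Disposed { get; private set; }
    public void Dispose() => Disposed = true;
}

class UsingDemo
{
    static void Main()
    {
        // "using (Resource r = new Resource()) { ... }" expands to roughly:
        Resource r = new Resource();
        try
        {
            // use r in here
        }
        finally
        {
            if (r != null)
                r.Dispose(); // runs even if the block throws
        }
        Console.WriteLine(r.Disposed); // True
    }
}
```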

Finalizers are very rarely necessary in C# - they're only required when you directly own an unmanaged resource (e.g. a Windows handle). Otherwise, you typically have some managed wrapper class (e.g. FileStream) which will have a finalizer if it needs one.

Note that you only need any of this when you have resources to be cleaned up - most classes in .NET don't implement IDisposable.
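For classes that do need cleanup, the standard implementation pattern combines `Dispose` with an optional finalizer and suppresses the finalizer once `Dispose` has run. A sketch with placeholder comments (`ResourceHolder` is a made-up name):

```csharp
using System;

public class ResourceHolder : IDisposable
{
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        // Cleanup already happened; no need for the finalizer to run.
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed)
            return;
        if (disposing)
        {
            // release managed resources here
        }
        // release unmanaged resources here
        _disposed = true;
    }

    // Only needed if the class directly owns an unmanaged resource.
    ~ResourceHolder()
    {
        Dispose(false); // managed objects may already be finalized; skip them
    }
}
```

If the class is sealed and has no finalizer, the pattern collapses to a single public `Dispose` method.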

EDIT: just to respond to the comment about nesting, I agree it can be a bit ugly, but you shouldn't need using statements very often in my experience. You can also nest usings vertically like this, in the case where you've got two directly adjacent ones:

using (TextReader reader = File.OpenText("file1.txt"))
using (TextWriter writer = File.CreateText("file2.txt"))
{
    ...
}
Jon Skeet
Is using 'using' enough to guarantee that the destructor will be called as @JaredPar suggested below? If it does guarantee it, why would I implement IDispose instead?
Net Citizen
by IDispose I meant to say IDisposable
Net Citizen
@Net Citizen - no, it won't call the destructor/finalizer. It will just call IDisposable. It doesn't affect finalization (unless you call GC.SuppressFinalize in Dispose of course!)
Jon Skeet
Thanks, I wish there was a cleaner syntax for 'using'. It litters my code with extra nesting :(
Net Citizen
+1 for the adjacent using statement tip. Did not know about that.
Tim S. Van Haren
Re: the finalizer: usually the pattern is to have the disposer and the finalizer do the same thing, with the disposer suppressing the finalizer. You can find a standard implementation of this pattern to copy from in any number of places.
Eric Lippert
Eric's absolutely right (of course) but I would stress that it's *very* rare to require your own finalizer in .NET these days. Look at the `SafeHandle` class to work around most requirements. If you don't need a finalizer and your class is sealed, the pattern becomes a lot simpler :)
Jon Skeet
+1  A: 

Use the IDisposable pattern if you want to do RAII-like things in C#.

Paul Betts
+4  A: 

You know, a lot of smart people say "use IDisposable if you want to implement RAII in C#", and I just don't buy it. I know I'm in the minority here, but when I see "using(blah) { foo(blah); }" I automatically think "blah contains an unmanaged resource that needs to be cleaned up aggressively as soon as foo is done (or throws) so that someone else can use that resource". I do not think "blah contains nothing interesting but there's some semantically important mutation that needs to happen, and we're going to represent that semantically important operation by the character '}' " -- some mutation like some stack has to be popped or some flag has to be reset or whatever.

I say that if you have a semantically important operation that has to be done when something completes, we have a word for that and the word is "finally". If the operation is important, then it should be represented as a statement that you can see right there and put a breakpoint on, not an invisible side effect of a right-curly-brace.

So let's think about your particular operation. You want to represent:

var oldCursor = form.Cursor;
form.Cursor = newCursor;
blah;
blah;
blah;
form.Cursor = oldCursor;

That code is perfectly clear. All the side effects are right there, visible to the maintenance programmer who wants to understand what your code is doing.

Now you have a decision point. What if blah throws? If blah throws then something unexpected happened. You now have no idea whatsoever what state "form" is in; it might have been code in "form" that threw. It could have been halfway through some mutation and is now in some completely crazy state.

Given that situation, what do you want to do?

1) Punt the problem to someone else. Do nothing. Hope that someone else up the call stack knows how to deal with this situation. Reason that the form is already in a bad state; the fact that its cursor is not in the right place is the least of its worries. Don't poke at something that is already so fragile, it's reported an exception once.

2) Reset the cursor in a finally block and then report the problem to someone else. Hope -- with no evidence whatsoever that your hope will be realized -- that resetting the cursor on a form that you know is in a fragile state will not itself throw an exception. Because, what happens in that case? The original exception, which perhaps someone knew how to handle, is lost. You've destroyed information about the original cause of the problem, information which might have been useful. And you've replaced it with some other information about a cursor-moving failure which is of use to no one.

3) Write complex logic that handles the problems of (2) -- catch the exception, and then attempt to reset the cursor position in a separate try-catch-finally block which suppresses the new exception and re-throws the original exception. This can be tricky to get right, but you can do it.
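A sketch of what option (3) involves, using a plain string as a stand-in for the cursor state so the WinForms control isn't needed (the names here are illustrative, not from the question):

```csharp
using System;

class CursorDemo
{
    // Stand-in for form.Cursor; the real code would mutate the control.
    public static string Cursor = "Default";

    static void DoWork() => throw new InvalidOperationException("work failed");

    public static string Run()
    {
        try
        {
            string old = Cursor;
            Cursor = "Wait";
            try
            {
                DoWork();
            }
            finally
            {
                // An inner try/catch ensures a failure while restoring state
                // cannot replace the original exception in flight.
                try { Cursor = old; }
                catch { /* suppress the secondary failure */ }
            }
        }
        catch (InvalidOperationException ex)
        {
            return ex.Message; // the original exception survives
        }
        return "no exception";
    }

    static void Main()
    {
        Console.WriteLine(Run());  // work failed
        Console.WriteLine(Cursor); // Default
    }
}
```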

Frankly, my belief is that the correct answer is almost always (1). If something horrid went wrong, then you cannot safely reason about what further mutations to the fragile state are legal. If you don't know how to handle the exception then GIVE UP.

(2) is the solution that RAII with a using block gives you. Again, the reason we have using blocks in the first place is to aggressively clean up vital resources when they can no longer be used. Whether an exception happens or not, those resources need to be cleaned up rapidly, which is why "using" blocks have "finally" semantics. But "finally" semantics are not necessarily appropriate for operations which are not resource-cleanup operations; as we've seen, program-semantics-laden finally blocks implicitly make the assumption that it is always safe to do the cleanup; but the fact that we're in an exception handling situation indicates that it might not be safe.

(3) sounds like a lot of work.

So in short, I say stop trying to be so smart. You want to mutate the cursor, do some work, and unmutate the cursor. So write three lines of code that do that, done. No fancy-pants RAII is necessary; it just adds unnecessary indirection, it makes it harder to read the program, and it makes it potentially more brittle in exceptional situations, not less brittle.

Eric Lippert
`finally` is clutter. What people want is a concise way of expressing RAII. The design of C#/.NET simply didn’t factor in the enormous usefulness of RAII, instead trying to provide special constructs for special cases (`lock`, `using`). For that reason people try not to depend on RAII in .NET but sometimes it just is the most natural way of expressing a paired operation.
Konrad Rudolph
But RAII is frequently *wrong* when it is used to manage semantic operations rather than resource cleanup. The assumption that the semantic mutation in the destructor is always the right thing to do even in an exceptional situation is not warranted; in an exceptional situation *if you don't know what the right thing to do is, don't try*. You'll just make it worse.
Eric Lippert
I agree with @Konrad. Also, I find `finally` pretty unnatural. If you have a return statement inside your try that exits out of your function, most people expect the code to return right away. But it will instead jump to the finally and then return. Likewise if you are looking at a finally block, you expect the next line to execute outside of the finally, but in the above described scenario it can sometimes return as well. This is like goto but with hidden code.
Brian R. Bondy
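A small sketch of the behavior Brian describes: the return value is computed at the `return` statement, but the `finally` block still runs before control leaves the method.

```csharp
using System;

class FinallyDemo
{
    public static string Log = "";

    public static int M()
    {
        try
        {
            Log += "return-reached;";
            return 1; // the return value is computed here...
        }
        finally
        {
            Log += "finally-ran;"; // ...but this block still runs first
        }
    }

    static void Main()
    {
        Console.WriteLine(M());  // 1
        Console.WriteLine(Log);  // return-reached;finally-ran;
    }
}
```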
And of course we have this problem with "lock", as I've written about in the past. "lock" automatically releases the lock upon an exception. You're trading elimination of deadlocks for allowing access to now-unlocked inconsistent state! The exception that caused the unlock wasn't thrown because everything is awesome in the object you were mutating under the lock, believe me. But apparently most people would rather have crazy unpredictable logically inconsistent crashing programs than programs which deadlock when a mutation under a lock fails catastrophically.
Eric Lippert
@Konrad - check my post. The `CursorApplier` type is long, but so is any RAII type. The using statement on the other hand is *vastly* cleaner than RAII.
280Z28
@Eric: You say that RAII is wrong for everything but resource cleanup (with a very narrow-minded definition of “resource”) because you adhere to the label of “RAII” too literally. RAII is a horrible misnomer, and just provides a wonderfully reliable and terse way of executing operations in a specific order. You are of course right insofar as `using` and `IDisposable` have been given a specific purpose in .NET but I believe that this restrictive constraint is completely unnecessary.
Konrad Rudolph
"So write three lines of code that do that, done. No fancy-pants RAII is necessary;".... but with RAII you are guaranteeing that you can't forget to write this cleanup code. And it has less code duplication. If you want to do extra cleanup as well you can add it in one place and have it affect everywhere that uses your RAII resource. You don't need to modify 600 places in the code the way you are suggesting.
Brian R. Bondy
One of the purposes of RAII is to provide an automatic way to, effectively, change state upon completion or exception conditions. Resource allocation is, in effect a state as well as other kinds of states (cursor state, in the example). By not using RAII, you're forcing the method to handle state by itself, even though you may have created a nice way to manage state automatically.
Mystere Man
Resource allocation is not a SEMANTIC state of the program; the fact that a resource needs to be deallocated in a timely manner is an exogenous consideration; most of us do not write programs where that's the only thing the operating system is dealing with. Resource deallocation is just POLITENESS; telling the OS "I'm done, someone else can use this now". If you forget to do it in a timely manner the worst that happens is someone else can't get their work done. RAII for *semantic* mutations that are necessary to maintain *internal* program invariants is *completely* different.
Eric Lippert
I disagree. When you allocate a resource, you're changing the state of the program. You have entered a "resource allocated state". However, even if we accept your position, it's just a semantic argument. So if I create a new pattern called SCII (State change is initialization) that changes state on initialization and cleans it up on destruction, it's identical to RAII in all but name.
Mystere Man
Of course you're changing the state of the program; but that state change is a part of the *mechanism* of the program, not part of the *semantics*. The problem with non-GC'd languages is that they make you mix up the code which keeps the mechanisms happy with the code that implements the business logic of the program; it violates separation of concerns. The GC solves this problem by hiding the mechanism and letting you concentrate on writing your semantics.
Eric Lippert
The problem with state-change RAII is that it does the opposite: it takes a semantic concern and hides it in the resource allocation/deallocation mechanisms, exactly where it ought not to be. Those mechanisms have a purpose: to smoothly handle resource management so that you can get on with the business logic of your program. Use them for their intended purpose; don't mix the mechanism logic with the business logic, and don't hide the business logic in the allocation mechanisms.
Eric Lippert
+2  A: 

Edit: Clearly Eric and I are in disagreement on this usage. :o

You can use something like this:

public sealed class CursorApplier : IDisposable
{
    private Control _control;
    private Cursor _previous;

    public CursorApplier(Control control, Cursor cursor)
    {
        _control = control;
        _previous = _control.Cursor;
        ApplyCursor(cursor);
    }

    public void Dispose()
    {
        ApplyCursor(_previous);
    }

    private void ApplyCursor(Cursor cursor)
    {
        if (_control.Disposing || _control.IsDisposed)
            return;

        if (_control.InvokeRequired)
            _control.Invoke(new MethodInvoker(() => _control.Cursor = cursor));
        else
            _control.Cursor = cursor;
    }
}

// then...

using (new CursorApplier(control, Cursors.WaitCursor))
{
    // some work here
}
280Z28
From the comments on Eric’s reply: “The using statement on the other hand is vastly cleaner than RAII” – I disagree. While `using` certainly is more *explicit*, RAII is such an established idiom in C++ or (say) VB6 that its usage has become an expectation instead of an unexpected hidden action. Provided the object’s destructor doesn’t violate the principle of least surprise by doing something completely inappropriate, undocumented and unpredictable, its usage is every bit as clear and clean as `using`, and less verbose (since *every* block is an implicit `using` over its local stack variables).
Konrad Rudolph