There's a standard pattern for events in .NET - they use a delegate type that takes a plain object called sender and then the actual "payload" in a second parameter, which should be derived from EventArgs.

The rationale for the second parameter being derived from EventArgs seems pretty clear (see the .NET Framework Standard Library Annotated Reference). It is intended to ensure binary compatibility between event sinks and sources as the software evolves. For every event, even if it only has one argument, we derive a custom event arguments class that has a single property containing that argument, so that way we retain the ability to add more properties to the payload in future versions without breaking existing client code. Very important in an ecosystem of independently-developed components.
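A minimal sketch of that convention (all names here are illustrative, not from any real API): the payload class wraps the single argument, leaving room to grow later.

```csharp
using System;

// Version 1: the payload class wraps the event's single argument.
public class OrderShippedEventArgs : EventArgs
{
    public OrderShippedEventArgs(int orderId) { OrderId = orderId; }

    public int OrderId { get; private set; }

    // Version 2 can add more properties here (a timestamp, say) without
    // changing the delegate type, so existing client binaries keep working.
}

public class OrderService
{
    public event EventHandler<OrderShippedEventArgs> OrderShipped;

    public void Ship(int orderId) { OnOrderShipped(orderId); }

    protected virtual void OnOrderShipped(int orderId)
    {
        var handler = OrderShipped;
        if (handler != null)
            handler(this, new OrderShippedEventArgs(orderId));
    }
}
```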

But I find that the same goes for zero arguments. This means that if I have an event that has no arguments in my first version, and I write:

public event EventHandler Click;

... then I'm doing it wrong. If I change the delegate type in the future to a new class as its payload:

public class ClickEventArgs : EventArgs { ...

... I will break binary compatibility with my clients. The client's compiled code is bound to the compiler-generated accessor method add_Click(EventHandler); if I change the delegate type, that signature no longer exists, and the client fails at runtime with a MissingMethodException.

Okay, so what if I use the handy generic version?

public event EventHandler<EventArgs> Click;

No, still wrong, because an EventHandler<ClickEventArgs> is not an EventHandler<EventArgs>.

So to get the benefit of EventArgs, you have to derive from it, rather than using it directly as is. If you don't, you may as well not be using it (it seems to me).
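The invariance is easy to see in a sketch (ClickEventArgs and Button here are illustrative):

```csharp
using System;

public class ClickEventArgs : EventArgs { }

public class Button
{
    public event EventHandler<EventArgs> Click;

    public void RaiseClick()
    {
        var handler = Click;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}

class Demo
{
    static void Main()
    {
        var button = new Button();
        int calls = 0;

        // Fine: the handler's parameter type matches exactly.
        button.Click += (sender, e) => calls++;

        // Does not compile: an EventHandler<ClickEventArgs> is not an
        // EventHandler<EventArgs> -- delegate parameters never vary
        // covariantly, with or without C# 4 variance.
        // EventHandler<ClickEventArgs> h = (sender, e) => { };
        // button.Click += h;   // error CS0029

        button.RaiseClick();
        Console.WriteLine(calls);   // prints 1
    }
}
```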

Then there's the first argument, sender. It seems to me like a recipe for unholy coupling. An event firing is essentially a function call. Should the function, generally speaking, have the ability to dig back through the stack and find out who the caller was, and adjust its behaviour accordingly? Should we mandate that interfaces should look like this?

public interface IFoo
{
    void Bar(object caller, int actualArg1, ...);
}

After all, the implementor of Bar might want to know who the caller was, so they can query for additional information! I hope you're puking by now. Why should it be any different for events?

So even if I am prepared to take the pain of making a standalone EventArgs-derived class for every event I declare, just to make it worth my while using EventArgs at all, I definitely would prefer to drop the object sender argument.
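A sketch of what dropping sender might look like: a dedicated delegate type carrying only the payload (all names here are illustrative, not a framework convention).

```csharp
using System;

public class ClickEventArgs : EventArgs
{
    public ClickEventArgs(int x, int y) { X = x; Y = y; }
    public int X { get; private set; }
    public int Y { get; private set; }
}

// No sender parameter: subscribers get only the payload. Using a
// dedicated delegate type (rather than Action<T>) also keeps the
// event outside C# 4's delegate-variance rules.
public delegate void ClickHandler(ClickEventArgs e);

public class Button
{
    public event ClickHandler Click;

    public void SimulateClick(int x, int y)
    {
        var handler = Click;
        if (handler != null) handler(new ClickEventArgs(x, y));
    }
}
```

A subscriber that wants to know the source can close over it, which makes the coupling explicit instead of implicit in an object-typed argument.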

Visual Studio's autocompletion feature doesn't seem to care what delegate you use for an event - you can type += [hit Space, Return] and it writes a handler method for you that matches whatever delegate it happens to be.

So what value would I lose by deviating from the standard pattern?

As a bonus question, will C#/CLR 4.0 do anything to change this, perhaps via contravariance in delegates? I attempted to investigate this but hit another problem. I originally included this aspect of the question in that other question, but it caused confusion there. And it seems a bit much to split this up into a total of three questions...

Update:

Turns out I was right to wonder about the effects of contravariance on this whole issue!

As noted elsewhere, the new compiler rules leave a hole in the type system that blows up at runtime. The hole has effectively been plugged by defining EventHandler<TEventArgs> differently from Action<T>: Action<T> is declared contravariant (Action<in T>), while EventHandler<TEventArgs> is not.

So for events, to avoid that type hole you should not use Action<T>. That doesn't mean you have to use EventHandler<TEventArgs>; it just means that if you use a generic delegate type, don't pick one that is enabled for contravariance.
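The runtime hole is easy to reproduce under C# 4's contravariant Action<in T> (a minimal sketch; the failure mode is the one I hit, an ArgumentException when delegates of different runtime types are combined):

```csharp
using System;

class Program
{
    static void Main()
    {
        Action<object> general = o => Console.WriteLine("general");
        Action<string> specific = s => Console.WriteLine("specific");

        // Contravariance lets an Action<object> be treated as an
        // Action<string> -- the reference conversion is allowed...
        Action<string> converted = general;

        // ...but the underlying delegate objects still have different
        // runtime types (Action<object> vs Action<string>), so combining
        // them -- which is exactly what += on an event does -- throws
        // ArgumentException at runtime.
        try
        {
            Action<string> combined = converted + specific;
            combined("hello");
        }
        catch (ArgumentException)
        {
            Console.WriteLine("Combine failed");
        }
    }
}
```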

+5  A: 

Nothing, you lose nothing. I've been using Action<> since .NET 3.5 came out and it is far more natural and easier to program against.

I don't even deal with the EventHandler type for generated event handlers anymore, simply write the method signature you want and wire it up with a lambda:

btnCompleteOrder.OnClick += (o, e) => _presenter.CompleteOrder();
George Mauer
Don't you have a problem unsubscribing from the event with this anonymous function?
Matt
I suppose you would, but you shouldn't really be unsubscribing from an OnClick event, since it is merely meant to notify your presenter that something happened. If you're subscribing and unsubscribing, you probably care more about events raised by non-UI classes; in that case you have full control of the event type and can make it an Action<> anyway, so anonymous functions aren't necessary.
George Mauer
@Matt - I write the majority of my event enlisting that way. The need to explicitly unsubscribe is pretty unusual: essentially only when the event source outlives the sink (e.g. it's stored in a static, or it's the main window of the application). If the source and sink have the same lifetime, why bother? This is what the GC is for.
Daniel Earwicker
I agree, it is a very nice syntax, and I have used it on occasion myself. But there are occasions when you need to unsubscribe, so I felt compelled to point it out. It is very easy, especially for someone new to event handlers, to forget that delegates are references like anything else, and can thus keep objects alive longer than they should be.
Matt
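When you do need to unsubscribe, the usual workaround is to keep the lambda in a variable, so the exact same delegate instance can be removed later (a sketch; Publisher and Ping are illustrative names):

```csharp
using System;

public class Publisher
{
    public event EventHandler Ping;

    public void RaisePing()
    {
        var handler = Ping;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}

class Demo
{
    static void Main()
    {
        var pub = new Publisher();
        int count = 0;

        // Store the lambda so the same delegate instance can be removed;
        // writing "-= (s, e) => count++" would create a *different*
        // delegate and silently remove nothing.
        EventHandler handler = (s, e) => count++;

        pub.Ping += handler;
        pub.RaisePing();        // count is now 1

        pub.Ping -= handler;    // works: same delegate instance
        pub.RaisePing();        // count stays 1

        Console.WriteLine(count);   // prints 1
    }
}
```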
I agree Matt, and it is certainly a problem if someone's wiring their UI up haphazardly. But if you're doing it even halfway correctly you're going to have some sort of Presenter/Controller/Mediator class that has the same lifetime as your UI which 99% of the time should be the ONLY thing wired up to your UI.
George Mauer
Not seeing any advantage to the .NET standard I stopped bothering with the (object sender, EventArgs e) pattern a while ago and turned the warning off in FxCop.
Robert Davis