views: 470
answers: 6
Jon's Brain Teasers

Here Be Spoilers...

I'm looking at the answer to #1, and I must admit I never knew this was the case in overload resolution. But why is it the case? In my tiny mind, Derived.Foo(int) seems like the logical route to go down.

What is the logic behind this design decision?

BONUS TIME!

Is this behaviour a result of the C# specification, the CLR implementation, or the Compiler?
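For context, puzzle #1 boils down to something like the following sketch (class and member names are assumed here; the actual teaser may differ in detail):

```csharp
using System;

class Base
{
    public virtual void Foo(int x) => Console.WriteLine("Base.Foo(int)");
}

class Derived : Base
{
    public override void Foo(int x) => Console.WriteLine("Derived.Foo(int)");
    public void Foo(object o) => Console.WriteLine("Derived.Foo(object)");
}

class Program
{
    static void Main()
    {
        Derived d = new Derived();
        int i = 10;
        // Overload resolution skips the override (it counts as a Base member)
        // and binds to Foo(object), even though Foo(int) is an exact match.
        d.Foo(i); // prints "Derived.Foo(object)"
    }
}
```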

A: 

The reason is performance: calling a virtual method takes a bit more time, calling a delegate on a virtual method takes much more time, and so on...

see: The cost of method calls

Jack30lena
Really? Whilst the info on call costs is very interesting and no doubt influenced a number of decisions, I'm not sure I can see a direct link between the two problems. Yes, they opted for non-virtual as the default for method-call performance reasons, but does that really dictate overload resolution to such an extent as to make them opt for what seems to me, and a number of other people, "unintuitive"? I remain unconvinced that this is THE defining reason, but I'm grateful for an interesting answer nonetheless.
runrunraygun
This is absolutely not the reason.
Eric Lippert
+1  A: 

need to post the other link in another post... Versioning, Virtual, and Override

Jack30lena
Good article, I was surprised to find he was focused on developer intent. I liked the pragmatic approach to making methods non-virtual. Thanks for the link!
Audie
+1  A: 

Here is a possible explanation:

When the compiler binds the method call, the first place it looks is in the class that is lowest in the inheritance chain (in this case the Derived class). Its instance methods are checked and matched. The overridden method Foo is not treated as an instance method of Derived; it is an instance method of the Base class.

The reason why could be performance, as Jack30lena proposed, but it could also be how the compiler interprets the coder's intention. It's a safe assumption that the developer's intended code behavior lies in the code at the bottom of the inheritance chain.

Audie
This is an interesting point, see BONUS TIME! :)
runrunraygun
By "lowest" you mean the thing that is farthest from the base class? Normally I'd describe the thing that was farthest from the base of a thing as being the "highest" thing. (Then again, the root of a tree is the highest node in the tree, and that makes no sense either...) That said, your analysis is correct; the developer of the derived class knows more than the developer of the base class, so their methods get priority.
Eric Lippert
I was thinking in tree terms. What's really interesting is why this type of hiding doesn't result in at least a warning. Essentially, any instance method with a more general parameter will effectively hide the more specific override. I know this code isn't completely unreachable (per Foxfire), but it's hidden nonetheless. Seems like it should produce a warning. Also, thanks for verifying my answer, your articles were helpful.
Audie
Suppose we produced a warning. *How would you turn the warning off if the behaviour was desired?* We try to reserve warnings for behaviours which are *highly likely to be wrong*, and *if that's what you want, there's a way to write the code so that the warning goes away*. This meets neither criterion; the behaviour is highly likely to be correct, and if it is, then there is no way to write the code to say "no, REALLY, I meant it, stop warning me". The result is that on many function calls you'd have pragmas around them to suppress the warning, which is ugly.
Eric Lippert
Good point. However, outright hiding of a method yields a warning (0114). The hiding in question compiles quietly and results in an invisibly hidden member - and likely a runtime bug. Why not at least warn to that level for this kind of hiding (and allow the new keyword to work the same)? I'm sure this kind of hiding is harder for the compiler to find, since it's not just signature matching, but also parameter base type discovery; however, hiding in this way on purpose seems a bit too clever to be good design, and I think hiding in this way is more likely to be by mistake than by design.
Audie
The reason we give a warning if the "new" isn't there is because that is indicative that the hiding was accidental. We don't want accidental hiding to be an *error* because that then is once more a brittle base class failure; you upgrade your base class and your derived class doesn't compile because the base class author added a member which you are now hiding. You turn off the warning by saying "yes, I meant it, I said 'new'".
Eric Lippert
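The warning-and-'new' mechanism Eric describes can be sketched like this (hypothetical names; the commented-out line is the variant that warns):

```csharp
class Base
{
    public virtual void Frob() { }
}

class Hider : Base
{
    // Declaring Frob with no modifier would hide Base.Frob and trigger
    // warning CS0114 ("hides inherited member ... use the new keyword if
    // hiding was intended"):
    // public void Frob() { }

    // Adding 'new' says "yes, I meant to hide it" and the warning goes away.
    public new void Frob() { }
}
```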
A: 

The reason is that it is ambiguous; the compiler has to pick one, and somebody thought that the less indirect one would be better (performance might be a reason). If the developer just wrote:

((Base)d).Foo(i);

it's clear and gives the expected result.

Foxfire
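A minimal sketch of that cast workaround, using the same assumed class names as the teaser:

```csharp
using System;

class Base
{
    public virtual void Foo(int x) => Console.WriteLine("Base.Foo(int)");
}

class Derived : Base
{
    public override void Foo(int x) => Console.WriteLine("Derived.Foo(int)");
    public void Foo(object o) => Console.WriteLine("Derived.Foo(object)");
}

class Program
{
    static void Main()
    {
        Derived d = new Derived();
        int i = 10;
        // Casting to Base restricts the candidate set to Base's members,
        // so Foo(int) is chosen; virtual dispatch then runs the override.
        ((Base)d).Foo(i); // prints "Derived.Foo(int)"
    }
}
```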
+1  A: 

It's a result of the compiler; we examined the IL code.

SLC
Yes, but it's like that because of the specification.
configurator
+6  A: 

This behaviour is deliberate and carefully designed. The reason is that this choice mitigates the impact of one form of the Brittle Base Class Failure.

Read my article on the subject for more details.

http://blogs.msdn.com/ericlippert/archive/2007/09/04/future-breaking-changes-part-three.aspx

Eric Lippert
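The versioning scenario the article describes can be sketched like this (hypothetical names): Derived was written against a version of Base that had no Foo at all, and a later release of Base adds one.

```csharp
using System;

// Version 2 of the base class: Foo(int) was added after Derived shipped.
class Base
{
    public virtual void Foo(int x) => Console.WriteLine("Base.Foo(int)");
}

class Derived : Base
{
    // Written against version 1 of Base, which had no Foo at all.
    public void Foo(object o) => Console.WriteLine("Derived.Foo(object)");
}

class Program
{
    static void Main()
    {
        Derived d = new Derived();
        // Because applicable methods in more derived types win, this call
        // still binds to Derived.Foo(object), exactly as it did before Base
        // changed. The base class upgrade does not silently hijack the call.
        d.Foo(123); // prints "Derived.Foo(object)"
    }
}
```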
I knew there would be a solid reason out there somewhere! So (for Bonus Time!) is this a specification of C# or CLR?
runrunraygun
@runrunraygun: The CLR doesn't have an overload resolution algorithm; overload resolution is a language concept. The CLR IL just has instructions that invoke whatever method reference is in a particular location. So this cannot possibly be a CLR specification. This behaviour is specified in the C# specification section 7.6.5.1, the point which begins "The set of candidate methods is reduced to contain only methods from the most derived types..."
Eric Lippert