a) Compiles:
Func<string, bool> f1 = (Func<object, bool>)null;
b) Does not compile:
Func<int, bool> f2 = (Func<object, bool>)null;
Why are value types special here? Is contravariance broken for value types?
Generic variance only works with reference types, yes. This is so that the CLR knows that everything is still just a reference, which means the JITted code stays the same: the bits involved in a reference are the same whatever type you're talking about, whereas treating an int as an object requires a boxing conversion. Basically, you can keep representational identity with reference types.
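A minimal sketch of that difference (the delegate names are illustrative, not from the original question): the contravariant assignment works when only an implicit reference conversion is involved, but the value-type version is rejected at compile time because int-to-object is a boxing conversion.

```csharp
using System;

// Contravariance: a delegate taking the broader type (object) can stand in
// where a delegate taking the narrower reference type (string) is expected,
// because string -> object is an implicit reference conversion.
Func<object, bool> takesObject = o => o is string;
Func<string, bool> takesString = takesObject;   // compiles

Console.WriteLine(takesString("hello"));

// The same assignment is rejected for a value type parameter:
// Func<int, bool> takesInt = takesObject;      // compile-time error
// int -> object is a boxing conversion, not a reference conversion,
// so the argument's representation would have to change.
```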
From the C# 4 spec, section 13.1.3.2:
A type T<A1, …, An> is variance-convertible to a type T<B1, …, Bn> if T is either an interface or a delegate type declared with the variant type parameters T<X1, …, Xn>, and for each variant type parameter Xi one of the following holds:
- Xi is covariant and an implicit reference or identity conversion exists from Ai to Bi
- Xi is contravariant and an implicit reference or identity conversion exists from Bi to Ai
- Xi is invariant and an identity conversion exists from Ai to Bi
It's the "implicit reference conversion" rather than just "implicit conversion" bit which is a problem for value types.
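When a value type is involved you can still bridge the gap manually: instead of a variance conversion, create a new delegate whose lambda performs the boxing explicitly. A sketch (the names here are illustrative):

```csharp
using System;

Func<object, bool> takesObject = o => o is int;

// No variance conversion exists from Func<object, bool> to Func<int, bool>,
// but wrapping in a lambda works: i is boxed at each call site.
Func<int, bool> takesInt = i => takesObject(i);

Console.WriteLine(takesInt(42));
```

This compiles because it is an ordinary delegate creation, not a variance conversion; the boxing cost is paid on every invocation.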
For much more detail around generic variance, see Eric Lippert's blog series on the topic.