+14  A: 

No, your example wouldn't work for three reasons:

  • Classes (such as List<T>) are invariant; only delegates and interfaces are variant
  • For variance to work, the interface has to only use the type parameter in one direction (in for contravariance, out for covariance)
  • Value types aren't supported as type arguments for variance - so there's no conversion from IEnumerable<int> to IEnumerable<object> for example

(The code fails to compile in both C# 3.0 and 4.0 - there's no exception.)
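
To make the first and third bullets concrete, here's a rough sketch of the kinds of conversions that are ruled out (List<string> and IEnumerable<int> are stand-ins for illustration, not the exact code from the question):

List<string> strings = new List<string>();
List<object> objects = strings;      // compile error: List<T> is a class, so it's invariant

IEnumerable<int> ints = new List<int>();
IEnumerable<object> numbers = ints;  // compile error: int is a value type, so no variant conversion exists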

So this would work:

IEnumerable<string> strings = new List<string>();
IEnumerable<object> objects = strings;

The CLR just uses the reference, unchanged - no new objects are created. So if you called objects.GetType() you'd still get List<string>.
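
For instance, continuing the snippet above:

Console.WriteLine(objects.GetType()); // prints System.Collections.Generic.List`1[System.String]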

I believe it wasn't introduced earlier because the language designers still had to work out the details of how to expose it - it's been in the CLR since v2.

The benefits are the same as in other cases where you want to be able to use one type as another. To use the same example I used last Saturday, if you've got something that implements IComparer<Shape> to compare shapes by area, it's crazy that you can't use that to sort a List<Circle> - if it can compare any two shapes, it can certainly compare any two circles. As of C# 4, there'd be a contravariant conversion from IComparer<Shape> to IComparer<Circle> so you could call circles.Sort(areaComparer).
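
Here's a sketch of that comparer scenario (Shape, Circle, AreaComparer and GetCircles are hypothetical names invented for illustration):

class AreaComparer : IComparer<Shape>
{
    public int Compare(Shape x, Shape y)
    {
        // Can compare any two shapes by area - so it can certainly compare two circles
        return x.Area.CompareTo(y.Area);
    }
}

// elsewhere (e.g. in a method):
IComparer<Shape> areaComparer = new AreaComparer();
List<Circle> circles = GetCircles();
circles.Sort(areaComparer); // C# 4: contravariant conversion from IComparer<Shape> to IComparer<Circle>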

Jon Skeet
Ah, noted. I'll have to download and take a look at the examples from last Saturday and have a play around with it myself. The concept itself makes sense, for sure - just attempting to get my head around the ideology of using the concept in real-world situations. Many thanks for the response.
Daniel May
@Daniel: No problem - sorry that I clearly didn't explain it well enough on Saturday :) (There was a lot to cover, admittedly...)
Jon Skeet
@Jon: Oh it's not that at all Jon - it was all moving a little fast and I haven't been exposed to, well, any of the new features of C# 4 - I was scribbling notes like a madman. Looks like I'll have to order the second edition of C# in Depth :)
Daniel May
@Daniel: Well I'm not going to say no to that... but hopefully the video of the session should be up soon, which may help.
Jon Skeet
+5  A: 

Out of interest, why wasn't this introduced in previous versions

The first versions (1.x) of .NET didn't have generics at all, so generic variance was far off.

It should be noted that in all versions of .NET, there is array covariance. Unfortunately, it's unsafe covariance:

Apple[] apples = new [] { apple1, apple2 };
Fruit[] fruit = apples;
fruit[1] = new Orange(); // Oh snap! Runtime exception! Can't store an orange in an array of apples!

The co- and contra-variance in C# 4 is safe, and prevents this problem.
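
For contrast, here's a rough sketch using the same hypothetical fruit types, showing why the interface version is safe - the covariant view is read-only:

IEnumerable<Apple> apples = new List<Apple> { apple1, apple2 };
IEnumerable<Fruit> fruit = apples; // fine in C# 4: IEnumerable<out T> is covariant
// There's no equivalent of fruit[1] = new Orange() here - IEnumerable<T> gives you no way
// to put anything in, which is exactly why the compiler can allow the conversion.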

what's the main benefit - ie real world usage?

Many times in code, you are calling an API that expects an amplified type of Base (e.g. IEnumerable<Base>), but all you've got is an amplified type of Derived (e.g. IEnumerable<Derived>).

In C# 2 and C# 3, you'd need to manually convert to IEnumerable<Base>, even though it should "just work". Co- and contra-variance makes it "just work".
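
Roughly like this, assuming a hypothetical PrintAll method that takes IEnumerable<Base> (GetDeriveds is hypothetical too):

void PrintAll(IEnumerable<Base> items) { /* ... */ }

IEnumerable<Derived> deriveds = GetDeriveds();
PrintAll(deriveds.Cast<Base>()); // C# 2/3: manual conversion needed (Cast<T> itself needs LINQ / C# 3)
PrintAll(deriveds);              // C# 4: covariance makes it "just work"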

p.s. Totally sucks that Skeet's answer is eating all my rep points. Damn you, Skeet! :-) Looks like he's answered this before, though.

Judah Himango
Just to be clear: Co- and contravariance has *always* been supported in the CLI (where "always" means "at least since v2"). It just wasn't *exposed* in C# until C# 4.0. But, e.g. Eiffel.NET has always supported it, although AFAIK the libraries weren't properly annotated either. (Don't know why, actually. It wouldn't have been *too* hard to create an IL rewriter tool which just takes in a list of co- and contravariant interfaces and flips the right bits in the metadata, even though you couldn't express this in C# - which the BCL is written in - at the time.)
Jörg W Mittag
+9  A: 

A few additional thoughts.

What does the CLR see when this code is executed

As Jon and others have correctly noted, we are not doing variance on classes, only interfaces and delegates. So in your example, the CLR sees nothing; that code doesn't compile. If you force it to compile by inserting enough casts, it crashes at runtime with a bad cast exception.

Now, it's still a reasonable question to ask how variance works behind the scenes when it does work. The answer is: the reason we are restricting this to reference type arguments that parameterize interface and delegate types is so that nothing happens behind the scenes. When you say

object x = "hello";

what happens behind the scenes is the reference to the string is stuck into the variable of type object without modification. The bits that make up a reference to a string are legal bits to be a reference to an object, so nothing needs to happen here. The CLR simply stops thinking of those bits as referring to a string and starts thinking of them as referring to an object.

When you say:

IEnumerator<string> e1 = whatever;
IEnumerator<object> e2 = e1;

Same thing. Nothing happens. The bits that make a ref to a string enumerator are the same as the bits that make a reference to an object enumerator. There is somewhat more magic that comes into play when you do a cast, say:

IEnumerator<string> e1 = whatever;
IEnumerator<object> e2 = (IEnumerator<object>)(object)e1;

Now the CLR must generate a check that e1 actually does implement that interface, and that check has to be smart about recognizing variance.

But the reason we can get away with variant interfaces being just no-op conversions is because regular assignment compatibility is that way. What are you going to use e2 for?

object z = e2.Current;

That returns bits that are a reference to a string. We've already established that those are compatible with object without change.
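
A small sketch of that, assuming .NET 4 where IEnumerator<out T> is covariant:

string s = "hello";
IEnumerator<string> e1 = new List<string> { s }.GetEnumerator();
IEnumerator<object> e2 = e1;  // no-op: the same reference, viewed through a different interface
e2.MoveNext();
object z = e2.Current;        // the very same string reference - no copy, no wrapper
Console.WriteLine(ReferenceEquals(s, z)); // True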

Why wasn't this introduced earlier? We had other features to do and a limited budget.

What's the principal benefit? That conversions from sequence of string to sequence of object "just work".
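
For reference, the variance is declared with out/in on the interfaces themselves; simplified from the .NET 4 declarations:

public interface IEnumerable<out T> : IEnumerable  // T only comes "out", so covariant
{
    IEnumerator<T> GetEnumerator();
}

public interface IComparer<in T>                   // T only goes "in", so contravariant
{
    int Compare(T x, T y);
}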

Eric Lippert
