Thanks for all the shout-outs, guys.
Jon's and Rasmus's answers are fine; I would just add a quick technical note.
When speaking casually and informally, yes, people use "covariance" and "contravariance" to refer to a specific kind of polymorphism. That is, the kind of polymorphism where you treat a sequence of spiders as though it were a sequence of animals.
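To make that concrete, here's a minimal sketch. The Animal and Spider classes are hypothetical stand-ins; the interesting line is the last one, which is legal because IEnumerable&lt;T&gt; was made covariant in T in C# 4:

    using System.Collections.Generic;

    class Animal { }
    class Spider : Animal { }

    class Program
    {
        static void Main()
        {
            IEnumerable<Spider> spiders = new List<Spider> { new Spider() };
            // A sequence of spiders may be treated as a sequence of animals:
            IEnumerable<Animal> animals = spiders;
        }
    }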
Were we to get all computer-sciency and try to make more precise technical definitions, I probably would not say that covariance and contravariance are "a kind of polymorphism". I would approach a more technical definition like this:
First, I'd note that there are two possible kinds of polymorphism in C# that you might be talking about, and it is important not to confuse them.
The first kind is traditionally called "subtype polymorphism" (sometimes "inclusion polymorphism"), and that's the polymorphism where you have a method M(Animal x), and you pass spiders and giraffes and wallabies to it, and the method treats every passed-in argument uniformly by using the commonalities guaranteed by the Animal base class.
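A sketch of what I mean, with a hypothetical Describe method standing in for whatever commonality Animal guarantees:

    using System;

    class Animal
    {
        public virtual string Describe() => "some animal";
    }
    class Spider  : Animal { public override string Describe() => "a spider"; }
    class Giraffe : Animal { public override string Describe() => "a giraffe"; }
    class Wallaby : Animal { public override string Describe() => "a wallaby"; }

    class Program
    {
        // M treats every argument uniformly via the Animal base class.
        static void M(Animal x) => Console.WriteLine(x.Describe());

        static void Main()
        {
            M(new Spider());
            M(new Giraffe());
            M(new Wallaby());
        }
    }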
The second kind is traditionally called "parametric polymorphism", or "generic polymorphism". That's the ability to make a generic method M&lt;T&gt;(T t) and then have a bunch of code in the method that, again, treats the argument uniformly based on commonalities guaranteed by the constraints on T.
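Again, a rough sketch; the IAnimal interface and its Name property are made up for illustration, but they show how the constraint on T supplies the commonality the method body relies on:

    using System;

    // Hypothetical interface used as a constraint.
    interface IAnimal { string Name { get; } }

    class Spider  : IAnimal { public string Name => "spider"; }
    class Giraffe : IAnimal { public string Name => "giraffe"; }

    class Program
    {
        // The where clause guarantees that every T has a Name.
        static void M<T>(T t) where T : IAnimal
            => Console.WriteLine(t.Name);

        static void Main()
        {
            M(new Spider());
            M(new Giraffe());
        }
    }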
I think you're talking about the first kind of polymorphism. But my point is just that we can define polymorphism as the ability of a programming language to treat different things uniformly based on a known commonality (for example, a known base type or a known implemented interface).
Covariance and contravariance are the ability of a programming language to take advantage of commonalities between generic types that are deduced from known commonalities of their type arguments.
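For instance, here's a sketch using the covariant IEnumerable&lt;out T&gt; and the contravariant Action&lt;in T&gt; from the framework; Animal and Spider are again hypothetical placeholder classes:

    using System;
    using System.Collections.Generic;

    class Animal { }
    class Spider : Animal { }

    class Program
    {
        static void Main()
        {
            // Covariance: because Spider is convertible to Animal,
            // IEnumerable<Spider> is convertible to IEnumerable<Animal>.
            IEnumerable<Spider> spiders = new List<Spider>();
            IEnumerable<Animal> animals = spiders;

            // Contravariance: the conversion runs the other way for inputs.
            // A delegate that can act on any animal can stand in for one
            // that only needs to act on spiders.
            Action<Animal> feedAnimal = a => Console.WriteLine("Fed an animal.");
            Action<Spider> feedSpider = feedAnimal;
            feedSpider(new Spider());
        }
    }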