From MSDN:

The null keyword is a literal that represents a null reference, one that does not refer to any object.
To try and make it a little clearer: in the older versions of C# (before 2.0), a value type couldn't be null at all, i.e. it HAD to have a value. And if you never assign that value, the compiler refuses to let you use the variable, so something like:
int i;
i++;
Console.WriteLine(i);
won't even compile ("Use of unassigned local variable 'i'"). Reference types, on the other hand, could always be null: an object that was never initialized to anything is null, which means it holds no reference to any object.
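As a minimal sketch of that reference-type behaviour (the Person class here is purely hypothetical, for illustration), an unassigned field defaults to null, and dereferencing it blows up at runtime:

using System;

class Person
{
    public string Name;
}

class Demo
{
    static Person person; // never assigned: defaults to null

    static void Main()
    {
        Console.WriteLine(person == null); // True: no object is referenced
        Console.WriteLine(person.Name);    // throws NullReferenceException
    }
}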
Now with nullable value types in C# 2.0+ you can have a nullable int (int?), which means that if you have this code:
int? i = null;
i++;
Console.WriteLine(i);
i stays null after i++, because the arithmetic operators are "lifted" over nullable types: null plus anything is still null, and the Console.WriteLine prints an empty line. (What does throw is reading i.Value while i is null, which raises an InvalidOperationException.) If null were 0, this code would instead evaluate 0 + 1 and print 1, and that would be incorrect behaviour: "no value yet" and "the value zero" are not the same thing.
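A small sketch of that lifted behaviour, together with the HasValue and GetValueOrDefault members that let you tell the two states apart:

using System;

class NullableArithmeticDemo
{
    static void Main()
    {
        int? i = null;

        i++; // lifted operator: null + 1 is still null, no exception here

        Console.WriteLine(i.HasValue);            // False
        Console.WriteLine(i.GetValueOrDefault()); // 0, a fallback, not a stored value

        // The line below is the one that would actually throw
        // (InvalidOperationException: "Nullable object must have a value"):
        // Console.WriteLine(i.Value);
    }
}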
If null were always 0 and you had a nullable int, and you wrote some code like:
int? i = null;
if (i == 0)
{
    //do something
}
there would be a very real possibility of unexpected behaviour IF null were the same as 0, because nothing could differentiate between the int being null (where the branch should be skipped, as it is above) and the int being explicitly set to 0 (where the branch should run).
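Here is a quick sketch of how that comparison actually behaves, with null and 0 kept distinct:

using System;

class NullVersusZeroDemo
{
    static void Main()
    {
        int? neverSet = null;
        int? setToZero = 0;

        Console.WriteLine(neverSet == 0);         // False: null is not 0
        Console.WriteLine(setToZero == 0);        // True: an explicit 0 is 0
        Console.WriteLine(neverSet == setToZero); // False: the states differ
    }
}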
Another example that clarifies things in my mind:
public int? AddNumbers(int? x, int? y)
{
    if (x == null || y == null)
        return null;

    if (x == 0)
        return y;

    if (y == 0)
        return x;

    return x + y;
}
In this example, it's clear that null and 0 are very different: if null were equal to 0, passing 0 in for x or y would never get past the null check to reach the x == 0 or y == 0 tests. Run the code as it stands and pass in 0, though, and those checks do get executed.
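A few sample calls make the difference visible (sketched assuming AddNumbers is made static, or called on an instance, for the demo):

Console.WriteLine(AddNumbers(0, 5));    // 5: the x == 0 check fires and returns y
Console.WriteLine(AddNumbers(null, 5)); // empty line: the null check returns null
Console.WriteLine(AddNumbers(2, 3));    // 5: ordinary lifted addition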