I'm seeing some strange behavior in a .NET program:
Console.WriteLine(Int64.MaxValue.ToString());
// displays 9223372036854775807, which is 2^63-1, as expected
Int64 a = 256*256*256*127; // ok
Int64 b = 256*256*256*128; // compile-time error:
// "The operation overflows at compile time in checked mode"
// If I compute this at runtime instead, I get negative values, so the overflow really does happen.
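Here's a minimal sketch of what I mean by "at runtime" (the variable x is just my way of stopping the compiler from folding the constants):

int x = 256;
Int64 c = x * x * x * 128;   // compiles fine: the multiplication is still done in Int32
Console.WriteLine(c);        // prints -2147483648 on my machine, not 2147483648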
Why do my Int64s behave as if they were Int32s, even though Int64.MaxValue seems to confirm they're 64 bits wide?
In case it's relevant: I'm on a 32-bit OS, and the target platform is set to "Any CPU".