views:

306

answers:

2

I'm witnessing strange behavior in a .NET program:

Console.WriteLine(Int64.MaxValue.ToString());
// displays 9223372036854775807, which is 2^63-1, as expected

Int64 a = 256*256*256*127; // OK

Int64 a = 256*256*256*128; // compile-time error:
// "The operation overflows at compile time in checked mode"
// If I do this at runtime, I get negative values, so the overflow indeed happens.

Why do my Int64s behave as if they were Int32s, even though Int64.MaxValue seems to confirm they're 64 bits wide?

If it's relevant, I'm using a 32-bit OS, and the target platform is set to "Any CPU".

+19  A: 

Your right-hand side uses only Int32 values, so the whole expression is evaluated with Int32 arithmetic; the Int32 result is then implicitly converted to Int64.

Change it to this:

Int64 a = 256*256*256*128L;

and all will be well.
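To see the same pitfall at runtime (a minimal sketch; the variable names are mine, not from the question), note that non-constant Int32 arithmetic is unchecked by default and wraps silently before the conversion to Int64:

```csharp
using System;

class Demo
{
    static void Main()
    {
        int n = 256; // non-constant, so the compiler can't catch the overflow

        Int64 wrapped = n * n * n * 128;        // evaluated in Int32, wraps around
        Int64 correct = (Int64)n * n * n * 128; // first operand is Int64, so the
                                                // whole product uses 64-bit arithmetic
        Console.WriteLine(wrapped); // -2147483648
        Console.WriteLine(correct); // 2147483648
    }
}
```

Casting (or suffixing) any one operand early in the expression is enough, because the products are evaluated left to right and each step is promoted to Int64 once one operand is Int64.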

Jon Skeet
Dang you Jon Skeet! You beat me by about 10 seconds! :)
John Kraft
I feel stupid :) Thanks !
Brann
You're not stupid, it's just a bit unintuitive at first. I was bitten by this when I expected that "double res = someInt / otherInt" does a floating-point division (which it doesn't) and learned that the left-hand side does not matter.
Michael Stum
@Michael: no, I feel stupid because I knew it. I'm pretty sure I would have spotted the problem immediately had it happened to someone else! It's strange how a fresh look can change everything. In fact, the code I posted is a simplification of the real situation I encountered, in which the problem was harder to spot. Simplifying it as I did made it obvious to everyone's eyes but mine, because my mind was biased by the "real" problem I was facing.
Brann
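A minimal sketch of the integer-division pitfall Michael mentions (the values are made up for illustration): the division is performed entirely in Int32 before the assignment, so the type of the left-hand side never enters into it.

```csharp
using System;

class Demo
{
    static void Main()
    {
        int someInt = 7, otherInt = 2;

        double truncated  = someInt / otherInt;          // Int32 division first: 3, then converted to 3.0
        double fractional = (double)someInt / otherInt;  // cast one operand to get floating-point division

        Console.WriteLine(truncated);  // 3
        Console.WriteLine(fractional); // 3.5
    }
}
```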
+4  A: 

Use:

Int64 a = 256L * 256L * 256L * 128L;

The L suffix marks an Int64 literal; with no suffix, an integer literal is Int32. (Prefer uppercase L over lowercase l, which is easily mistaken for the digit 1.)

What you wrote:

Int64 a = 256*256*256*128;

means:

Int64 a = (Int32)256*(Int32)256*(Int32)256*(Int32)128;
Pop Catalin
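As a side note (a sketch, not from the thread): the compiler only reports the overflow because constant expressions are evaluated in checked mode. Wrapping an equivalent non-constant expression in a checked block makes the Int32 overflow throw at runtime instead of silently wrapping:

```csharp
using System;

class Demo
{
    static void Main()
    {
        int n = 256; // non-constant, so the overflow is only detectable at runtime
        try
        {
            checked
            {
                Int64 a = n * n * n * 128; // Int32 overflow: throws OverflowException
                Console.WriteLine(a);
            }
        }
        catch (OverflowException)
        {
            Console.WriteLine("Int32 multiplication overflowed");
        }
    }
}
```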