I'm trying to construct a large Int64 from nibble information stored in bytes.
The following lines of code work as expected:
Console.WriteLine("{0:X12}", (Int64)(0x0d * 0x100000000));
Console.WriteLine("{0:X12}", (Int64)(0x0d * 0x1000000));
Console.WriteLine("{0:X12}", (Int64)(0x0d * 0x100000));
Why does the following line produce compile error CS0220 ("The operation overflows at compile time in checked mode") while the others do not?
Console.WriteLine("{0:X12}", (Int64)(0x0d * 0x10000000));
If the overflow were allowed, the result would be:
FFFFFFFFD0000000
instead of the intended:
0000D0000000
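For reference, that overflowed value can be reproduced by permitting the compile-time overflow with unchecked (a minimal sketch):
// unchecked lets the constant expression overflow at compile time; the
// 32-bit result 0xD0000000 is negative as an Int32, so the cast to Int64
// sign-extends it.
Console.WriteLine("{0:X12}", (Int64)unchecked(0x0d * 0x10000000));  // FFFFFFFFD0000000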
Can anyone explain this? I will switch to shift operators for the conversion (see the sketch after the update below), but I'm still curious why this approach doesn't work!
Update: The error also occurs when using (Int64)(0x0d << 28).
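For completeness, here is a minimal sketch of the conversion I'm switching to, assuming the goal is the 64-bit value 0xD0000000: force Int64 arithmetic before the multiply or shift, so no 32-bit intermediate can overflow.
// Force Int64 arithmetic *before* the operation that would overflow Int32,
// either with the L suffix on the literal or by casting the operand first.
Console.WriteLine("{0:X12}", 0x0dL * 0x10000000);  // 0000D0000000
Console.WriteLine("{0:X12}", (Int64)0x0d << 28);   // 0000D0000000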