I was working with bit shift operators (see my question Bit Array Equality) and an SO user pointed out a bug in my calculation of the shift operand: I was computing a range of [1, 32] instead of [0, 31] for an int. (Hurrah for the SO community!)
In fixing the problem, I was surprised to find the following behavior:
-1 << 32 == -1
In fact, it would seem that n << s
is compiled (or JIT-compiled by the CLR--I didn't check the IL) as n << (s % bs(n)),
where bs(n) is the size, in bits, of n. (More precisely, the shift count appears to be masked to its low-order bits, i.e. s & (bs(n) - 1), which agrees with s % bs(n) for non-negative s.)
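Since I can't easily post a runnable C# snippet here, the sketch below is in Java, which defines the same rule (for a 32-bit int only the low five bits of the shift count are used, and for a 64-bit long only the low six); the behavior shown should mirror what I observed:

```java
public class ShiftMaskDemo {
    public static void main(String[] args) {
        // Only the low five bits of the shift count are used for 32-bit ints,
        // so shifting by 32 is shifting by 32 & 31 == 0: the value is unchanged.
        System.out.println((-1 << 32) == -1);        // true
        System.out.println((-1 << 33) == (-1 << 1)); // true: 33 & 31 == 1
        // For 64-bit longs the mask is six bits (0x3F) instead of five.
        System.out.println((-1L << 64) == -1L);      // true
    }
}
```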
I would have expected:
-1 << 32 == 0
In other words, it is as if the compiler notices that you are shifting beyond the width of the target type and silently wraps the shift count rather than letting the bits fall off the end.
This is purely an academic question, but does anyone know whether this is defined in the spec (I could not find anything under 7.8 Shift operators), whether it is just a fortuitous artifact of undefined behavior, or whether there is a case where this might produce a bug?
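On the "could this produce a bug" part: one place the masking can bite is a helper that builds a mask of the n low bits via shifting, which silently fails at the full word width. The names below are hypothetical, and the sketch is again in Java (which masks the shift count the same way) rather than C#:

```java
public class MaskBug {
    // Hypothetical helper: intended to return a mask with the n low bits set.
    static int lowBitsNaive(int n) {
        return ~(-1 << n); // breaks for n == 32: the shift count wraps to 0
    }

    // Hypothetical fix: special-case the full word width.
    static int lowBitsSafe(int n) {
        return n >= 32 ? -1 : ~(-1 << n);
    }

    public static void main(String[] args) {
        System.out.println(lowBitsNaive(4));  // 15, as expected
        System.out.println(lowBitsNaive(32)); // 0, not the expected -1 (all bits set)
        System.out.println(lowBitsSafe(32));  // -1: all 32 bits set
    }
}
```

So callers that pass n == 32 expecting an all-ones mask get zero instead, which is exactly the [1, 32]-versus-[0, 31] off-by-one that started this question.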