int a = 1 << 32;
int b = 1 << 31 << 1;
Why does a == 1? b is 0 as I expected.
All shifts are done mod 32 for ints and mod 64 for longs.
From section 15.19 of the spec:
If the promoted type of the left-hand operand is int, only the five lowest-order bits of the right-hand operand are used as the shift distance. It is as if the right-hand operand were subjected to a bitwise logical AND operator & (§15.22.1) with the mask value 0x1f. The shift distance actually used is therefore always in the range 0 to 31, inclusive.

If the promoted type of the left-hand operand is long, then only the six lowest-order bits of the right-hand operand are used as the shift distance. It is as if the right-hand operand were subjected to a bitwise logical AND operator & (§15.22.1) with the mask value 0x3f. The shift distance actually used is therefore always in the range 0 to 63, inclusive.
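You can see the masking directly: a shift distance of 32 is masked to 0 (so the value is unchanged), 33 is masked to 1, and for longs the mask is 0x3f instead:

```java
public class ShiftDemo {
    public static void main(String[] args) {
        int a = 1 << 32;      // 32 & 0x1f == 0, so this is 1 << 0, i.e. 1
        int b = 1 << 31 << 1; // each shift distance is in range; the bit is shifted out
        System.out.println(a);        // 1
        System.out.println(b);        // 0
        System.out.println(1 << 33);  // 33 & 0x1f == 1, so 2
        System.out.println(1L << 64); // long: 64 & 0x3f == 0, so 1
    }
}
```

That also explains why the two-step shift in the question behaves "as expected": each individual distance (31, then 1) is within range, so no masking kicks in.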
As for why the language was designed that way - I don't know, but C# made the same design decision. Here's what the annotated ECMA C# spec says:
C# deliberately keeps implementation-defined behaviors to a minimum. They are accepted only when the performance impact of forcing uniform behavior would be excessive (such as for some floating-point precision issues). Hence, the size of each integral type is precisely specified, and the character set is fixed to Unicode.
For shift operations, too, uniform behavior is specified. It can be achieved using a single extra instruction (& 0x1F or & 0x3F) that incurs only a tiny cost on modern processors, especially since it does not reference memory. Unlike for floating-point operations, the difference in shift behavior would be dramatic if left to the whim of the processors; rather than a small difference in precision, completely different integral results would be produced.
In making this decision the committee studied reference materials for a number of different processor architectures. There is little consistency in the behavior for shift counts outside the range -32..+32 for 32-bit operands, and respectively -64..+64 for 64-bit operands.
(There's then a list of some examples.)
This seems an entirely reasonable explanation to me. Consistency is definitely important, and if it would be impossible to implement different consistent behaviour in a performant way on some systems, I think this is a reasonable solution.
There is some difference in how processors implement shift instructions.
For instance, IIRC, ARM processors (32-bit ISA) take the least significant byte of the shift-count register. (Shifts are not actually standalone instructions on ARM; they are modifiers applied to an operand of another instruction.)
So long as the underlying processor has a vaguely sensible way to shift, it's easier to clear all but the least significant bits of the count (usually one instruction) than to check whether the shift is large and branch (on ARM the check typically adds only one instruction, because all instructions are conditional).
BTW: This and other potential gotchas are covered in Java Puzzlers: Traps, Pitfalls, and Corner Cases.