I was looking at the F# documentation on bitwise operators:

Bitwise right-shift operator. The result is the first operand with bits shifted right by the number of bits in the second operand. Bits shifted off the least significant position are not rotated into the most significant position. For unsigned types, the most significant bits are padded with zeros. For signed types, the most significant bits are padded with ones. The type of the second argument is int32.

What was the motivation behind this design choice compared to C++ (and probably C too), where the most significant bits are padded with zeros? E.g.:

int mask = -2147483648 >> 1; // C++ code

where -2147483648 =

10000000 00000000 00000000 00000000

and mask is equal to 1073741824

where 1073741824 =

01000000 00000000 00000000 00000000

Now if you write the same code in F# (using `>>>`) or C#, it will indeed pad the most significant bits with ones and you'll get -1073741824.

where -1073741824 =

11000000 00000000 00000000 00000000
+2  A: 

To answer the reformed question (in the comments):

The C and C++ standards do not define the result of right-shifting a negative value (it's implementation-defined).

This is because the standard was defined to reflect the lowest common denominator in terms of underlying instruction set. Enforcing a true arithmetic shift, for instance, takes several instructions if the instruction set doesn't contain an asr primitive. This is further complicated by the fact that the standard permits either one's or two's complement representation.

Oli Charlesworth
+4  A: 

The signed shift has the nice property that shifting x right by n corresponds to floor(x / 2^n), i.e. division rounding toward negative infinity.

On .NET, there are CIL opcodes for both types of operations (shr for a signed shift and shr.un for an unsigned shift). F# and C# choose which opcode to use based on the signedness of the type being shifted. This means that if you want the other behavior, you just need to perform a numeric conversion before and after shifting (which has no runtime cost, given how numbers are stored on the CLR: an int32 on the stack is indistinguishable from a uint32).

kvb
I don't know what the semantics are on .NET, but in a typical C implementation, there's a sequence of pathological cases where the shift-divide equivalence doesn't hold. For instance, typically: `(-1 >> 1) != (-1 / 2)`.
Oli Charlesworth
@Oli - Yes, that's true in .NET as well... I'll edit to clarify.
kvb
@Oli, strictly both `-1 >> 1` and `-1 / 2` correctly divide by zero. They do round differently though, the shift doing statistical rounding and the integer division doing primary-school rounding. Still leads to errors where equivalence is assumed, but it is still a shift-divide equivalence.
Jon Hanna
Jon, I really doubt you get any operation in any language to "correctly divide by zero"... :)
Alexander Rautenberg
Heh. Yes, that was an amusing typo.
Jon Hanna