I'd have to check the spec to see precisely what is implementation dependent.
However, every implementation I've used in (mumble) years of embedded systems projects has been sensible:
Left shifts always shift in a 0 at the low bit. No other value makes sense.
Right shifts depend on the data type. A right shift of a signed integer duplicates the high bit as it shifts the rest to the right. This is called an "arithmetic shift", and has the nice property (in two's complement arithmetic, at least) that it divides the value by two while preserving the sign of the original number.
A right shift of an unsigned integer shifts a 0 into the high bit, and is usually known as a "logical shift".
It makes sense for an implementation to provide both kinds of shifts because both are useful, and using signed/unsigned to select which is meant is a sensible choice.
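To make the distinction concrete, here is a small sketch. It assumes a typical two's complement platform where right-shifting a negative signed value happens to be an arithmetic shift; that behavior is implementation-defined, so treat the negative-value output as "what most machines do" rather than a guarantee:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int32_t  s = -8;          /* signed: typically shifted arithmetically */
    uint32_t u = 0xFFFFFFF8u; /* same bit pattern, but unsigned           */

    /* The sign bit is usually duplicated for s, so -8 >> 1 gives -4. */
    printf("signed   -8 >> 1         = %ld\n", (long)(s >> 1));

    /* For u a 0 is shifted into the high bit (a logical shift),
       so 0xFFFFFFF8 >> 1 gives 0x7FFFFFFC. */
    printf("unsigned 0xFFFFFFF8 >> 1 = 0x%08lX\n", (unsigned long)(u >> 1));

    /* Left shifts always bring a 0 in at the low bit. */
    printf("5 << 1 = %d\n", 5 << 1);
    return 0;
}
```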
Edit: At least one thing that absolutely is implementation dependent is that the C standard does not (completely) specify the underlying implementation of integers and their storage. For instance, it is possible to build a compliant C compiler for a machine that uses one's complement arithmetic. It would also be possible (I think) to build a compliant compiler for a machine whose native storage was signed magnitude BCD. (Nope, I was wrong, see below.)
In practice, the world is pretty much settled on two's complement binary for the CPU and some of the pedantry is mooted.
So part of the question really is: how do you define the meaning of the << and >> operators in a way that is stable regardless of the underlying arithmetic system used.
IIRC, the definition of n<<1 is effectively n*2, and n>>1 is effectively n/2, with a natural extension to shifts by more than 1 (but not more than 31... there be undefined dragons there...) and with the notion that the >> operator will preserve the sign if operating on a signed value.
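A quick sketch of that equivalence (again assuming the common two's complement, arithmetic-shift behavior for the signed negative case):

```c
#include <stdio.h>

int main(void)
{
    int n = 6;

    /* For non-negative values the shift/arithmetic equivalence holds exactly. */
    printf("%d << 1 = %d, %d * 2 = %d\n", n, n << 1, n, n * 2);   /* 12, 12 */
    printf("%d >> 1 = %d, %d / 2 = %d\n", n, n >> 1, n, n / 2);   /*  3,  3 */

    /* Right-shifting a negative signed value is implementation-defined;
       on the usual two's complement machines it is an arithmetic shift,
       so the sign is preserved. Note it rounds toward negative infinity,
       while / truncates toward zero, so -7 >> 1 is typically -4
       but -7 / 2 is -3. */
    int m = -7;
    printf("%d >> 1 = %d, %d / 2 = %d\n", m, m >> 1, m, m / 2);

    /* Shifting by the width of the type or more (e.g. n << 32 for a
       32-bit int) is undefined -- there be dragons. */
    return 0;
}
```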
Edit 2: Pete Kirkham points out in his fine answer that the C standard does specifically disallow the scary case of a BCD representation of integers, whether it is signed magnitude or ten's complement. I'm sure that is a good thing, even if Knuth did use an (optionally) BCD machine for his sample code in early editions of The Art of Computer Programming.
In those rare use cases where BCD is the right answer, storing the values in an unsigned long (8 digits ten's complement) or an unsigned 64-bit integer (room for 16 digits ten's complement, or 15 digits plus sign and flags) and using a carefully crafted arithmetic library to manipulate them makes sense.
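If it helps to see what "carefully crafted" might look like, here is a minimal, hypothetical sketch of unsigned packed BCD in a 64-bit integer, one decimal digit per nibble, with sign and ten's complement handling omitted for brevity; the function names are made up for illustration, not taken from any particular library:

```c
#include <stdint.h>
#include <stdio.h>

/* Pack an ordinary binary value into BCD, one decimal digit per nibble. */
static uint64_t to_bcd(uint64_t n)
{
    uint64_t bcd = 0;
    int shift = 0;
    while (n != 0) {
        bcd |= (n % 10) << shift;
        n /= 10;
        shift += 4;
    }
    return bcd;
}

/* Unpack a packed BCD value back into ordinary binary. */
static uint64_t from_bcd(uint64_t bcd)
{
    uint64_t n = 0;
    uint64_t place = 1;
    while (bcd != 0) {
        n += (bcd & 0xF) * place;
        bcd >>= 4;
        place *= 10;
    }
    return n;
}

/* Add two packed BCD values digit by digit, propagating a decimal carry. */
static uint64_t bcd_add(uint64_t a, uint64_t b)
{
    uint64_t result = 0;
    unsigned carry = 0;
    for (int shift = 0; shift < 64; shift += 4) {
        unsigned digit = (unsigned)((a >> shift) & 0xF)
                       + (unsigned)((b >> shift) & 0xF) + carry;
        carry = digit >= 10;
        if (carry)
            digit -= 10;
        result |= (uint64_t)digit << shift;
    }
    return result;
}

int main(void)
{
    uint64_t x = to_bcd(1234);   /* 0x1234 */
    uint64_t y = to_bcd(8766);   /* 0x8766 */
    printf("%llu + %llu = %llu\n",
           (unsigned long long)from_bcd(x),
           (unsigned long long)from_bcd(y),
           (unsigned long long)from_bcd(bcd_add(x, y)));  /* 10000 */
    return 0;
}
```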
In practice, of course, C implementations map the operators as directly as allowed by the standard to the native machine instructions of the CPU. The folk who wrote the standard were very mindful of the existence of many ways to implement even simple things like the representation of an integral value, and the C standard reflects that by allowing just enough implementation-defined behavior to let operators be efficiently implemented on each machine.
The alternative leads swiftly to a world where all math operations are completely specified, and cannot be efficiently implemented on any machine.