Consider these definitions:
int x=5;
int y=-5;
unsigned int z=5;
How are they stored in memory? Can anybody explain the bit representation of these in memory?
Can int x=5 and int y=-5 have the same bit representation in memory?
Here is a very nice link that explains the storage of signed and unsigned int in C:
http://answers.yahoo.com/question/index?qid=20090516032239AAzcX1O
Taken from the above article:
"[A] process called two's complement is used to transform positive numbers into negative numbers. The side effect of this is that the most significant bit is used to tell the computer if the number is positive or negative. If the most significant bit is a 1, then the number is negative. If it's 0, the number is positive."
Assuming int is a 16-bit integer (its width depends on the C implementation; most are 32 bits nowadays), the bit representations differ as follows:
5 = 0000000000000101
-5 = 1111111111111011
If the binary pattern 1111111111111011 were read as an unsigned int, it would be decimal 65531.
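To see this on your own machine, here is a minimal sketch (assuming a two's complement platform, which is what virtually all current hardware uses) that prints the bit pattern of each variable:

#include <stdio.h>
#include <limits.h>

/* Print the bits of an unsigned int, most significant bit first. */
static void print_bits(unsigned int v)
{
    for (int i = (int)(sizeof v * CHAR_BIT) - 1; i >= 0; i--)
        putchar(((v >> i) & 1u) ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    int x = 5;
    int y = -5;
    unsigned int z = 5;

    /* Converting int to unsigned int is well defined in C (it wraps
       modulo UINT_MAX + 1), so on a two's complement machine the result
       has the same bit pattern as the original int. */
    print_bits((unsigned int)x); /* 000...0101 */
    print_bits((unsigned int)y); /* 111...1011 */
    print_bits(z);               /* 000...0101 */
    return 0;
}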
The C standard specifies that unsigned numbers are stored in pure binary (with optional padding bits). Signed numbers can be stored in one of three formats: sign and magnitude, two's complement, or one's complement. Interestingly, that rules out certain other representations like excess-n or base −2.
However, most machines and compilers store signed numbers in two's complement.
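One consequence worth knowing: the conversion from a negative int to unsigned int is defined by the standard in terms of values (adding UINT_MAX + 1), not bits, so this small sketch prints the same number on any conforming implementation, whatever the internal representation:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    int y = -5;
    /* The conversion computes -5 + (UINT_MAX + 1), independent of how
       the machine encodes negative numbers internally. */
    unsigned int u = (unsigned int)y;
    printf("%u\n", u);             /* 4294967291 with a 32-bit unsigned int */
    printf("%u\n", UINT_MAX - 4u); /* same value */
    return 0;
}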
int is normally 16 or 32 bits. The standard says that int should be whatever is most efficient for the underlying processor; anything is allowed as long as it is at least as wide as short and no wider than long. On some machines and OSs, however, history has caused int not to be the best size for the current iteration of hardware.
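You can check the sizes your own implementation chose with a trivial sketch (the %zu format specifier requires C99):

#include <stdio.h>

int main(void)
{
    /* The standard only guarantees sizeof(short) <= sizeof(int) <= sizeof(long). */
    printf("short: %zu bytes\n", sizeof(short));
    printf("int:   %zu bytes\n", sizeof(int));
    printf("long:  %zu bytes\n", sizeof(long));
    return 0;
}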
ISO C (C99) states the difference. The int data type is signed and has a minimum range of at least -32767 through 32767 inclusive. The actual values are given in limits.h as INT_MIN and INT_MAX respectively.
An unsigned int has a minimum range of 0 through 65535 inclusive, with the actual maximum value being UINT_MAX from that same header file.
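A quick sketch to see what your own implementation defines (the values in the comment assume a typical 32-bit int):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* Typical output with a 32-bit int:
       INT_MIN = -2147483648, INT_MAX = 2147483647, UINT_MAX = 4294967295 */
    printf("INT_MIN  = %d\n", INT_MIN);
    printf("INT_MAX  = %d\n", INT_MAX);
    printf("UINT_MAX = %u\n", UINT_MAX);
    return 0;
}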
Beyond that, the standard does not mandate two's complement notation for encoding the values; that's just one of the possibilities. The three allowed representations would encode 5 and -5 as follows (using 16-bit data types):

     two's complement     | one's complement     | sign/magnitude
 5   0000 0000 0000 0101  | 0000 0000 0000 0101  | 0000 0000 0000 0101
-5   1111 1111 1111 1011  | 1111 1111 1111 1010  | 1000 0000 0000 0101
Note that positive values have the same encoding in all three representations; only the negative values differ.
Note further that, for unsigned values, you do not need to use one of the bits for a sign. That means you get more range on the positive side (at the cost of no negative encodings, of course).
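For example, here is a small sketch using unsigned short (which the standard guarantees to be at least 16 bits): the pattern 1111 1111 1111 1011 from the table reads as 65531 when no bit is reserved for a sign:

#include <stdio.h>

int main(void)
{
    /* 0xFFFB is binary 1111 1111 1111 1011, i.e. 65531 decimal. */
    unsigned short u = 0xFFFB;
    /* With a sign bit this pattern would mean -5 (two's complement);
       without one, the full positive range is available. */
    printf("%u\n", (unsigned int)u); /* 65531 */
    return 0;
}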
And no, 5 and -5 cannot have the same encoding, regardless of which representation you use. Otherwise, there'd be no way to tell the difference.