I heard that C, C++, and Java use two's complement for the binary representation of signed integers. Why not use one's complement? Is there any advantage to using two's complement over one's complement?
Is this a homework question? If so, think of how you would represent 0 in a 1's complement system.
The internal representation of numbers is not part of any of those languages; it's a feature of the machine's architecture itself. Most implementations use 2's complement because it lets signed and unsigned addition and subtraction share the same binary operations (the bit-level operations are identical).
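To make that concrete, here is a minimal C sketch (assuming an ordinary two's complement machine, which is what essentially all current hardware is): adding two values produces the same result bits whether the operands are interpreted as signed or unsigned.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* -3 + 5 as signed, and the same bit patterns added as unsigned,
           produce identical result bits: the adder doesn't care about signedness. */
        int8_t  sa = -3, sb = 5;
        uint8_t ua = (uint8_t)sa, ub = (uint8_t)sb;      /* 0xFD and 0x05 */

        uint8_t signed_bits   = (uint8_t)(sa + sb);      /* bits of the signed sum */
        uint8_t unsigned_bits = (uint8_t)(ua + ub);      /* bits of the unsigned sum */

        printf("signed sum bits:   0x%02X\n", signed_bits);    /* 0x02 */
        printf("unsigned sum bits: 0x%02X\n", unsigned_bits);  /* 0x02 */
        return 0;
    }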
At least C and C++ offer 1's complement negation (which is the same as bitwise negation) via the language's ~ operator. Most processors - and all modern ones - use 2's complement representation for a couple of reasons (illustrated in the sketch after this list):
- Adding positive and negative numbers is the same as adding unsigned integers
- No "wasted" values due to two representations of 0 (+0 and -0)
Edit: The draft of C++0x does not specify whether signed integer types are 1's complement or 2's complement, which makes it highly unlikely that earlier versions of C and C++ specified it either. What you have observed is implementation-defined behavior, which on modern processors is 2's complement for performance reasons.
Working with two's complement signed integers is a lot cleaner. You can basically add signed values as if they were unsigned and have things work as you might expect, rather than having to explicitly deal with an additional carry addition. It is also easier to check whether a value is 0, because two's complement has only one representation of 0, whereas one's complement has both a positive and a negative zero.
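A short C sketch of the single-zero point (again assuming a two's complement machine): negating zero gives back the same all-zero pattern, whereas the one's complement bitwise negation of zero yields a second, "negative zero" pattern.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint8_t zero = 0x00;

        /* Two's complement negation of 0 yields the same all-zero pattern... */
        uint8_t twos_neg_zero = (uint8_t)(~zero + 1);    /* 0x00 */

        /* ...whereas one's complement negation of 0 produces a distinct "negative zero". */
        uint8_t ones_neg_zero = (uint8_t)~zero;          /* 0xFF */

        printf("two's complement -0: 0x%02X\n", twos_neg_zero);  /* 0x00 */
        printf("one's complement -0: 0x%02X\n", ones_neg_zero);  /* 0xFF */
        return 0;
    }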
As for the additional carry addition, think about adding a positive number to a smallish negative number. Because of the one's complement representation, the smallish negative number will actually be fairly large when viewed as an unsigned quantity. Adding the two together might overflow with a carry bit. Unlike unsigned addition, this doesn't necessarily mean that the value is too large to represent in one's complement; it just means that the representation temporarily exceeded the number of bits available. To compensate, you add the carry bit back in after adding the two one's complement numbers together (the so-called end-around carry).
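To see the end-around carry in action, here is a rough C sketch that simulates 8-bit one's complement addition in software (the helper name ones_complement_add is made up for this example):

    #include <stdio.h>
    #include <stdint.h>

    /* Add two 8-bit one's complement values, folding the carry-out
       back into the low bit (the end-around carry). */
    static uint8_t ones_complement_add(uint8_t a, uint8_t b) {
        uint16_t sum = (uint16_t)a + (uint16_t)b;
        if (sum > 0xFF) {
            sum = (sum & 0xFF) + 1;   /* add the carry bit back in */
        }
        return (uint8_t)sum;
    }

    int main(void) {
        /* 5 + (-2): in 8-bit one's complement, -2 is ~2 = 0xFD. */
        uint8_t five      = 0x05;
        uint8_t minus_two = (uint8_t)~0x02;   /* 0xFD */

        printf("5 + (-2) = 0x%02X\n", ones_complement_add(five, minus_two));  /* 0x03, i.e. +3 */
        return 0;
    }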
The answer is different for different languages.
In the case of C, you could in theory implement the language on a 1's complement machine ... if you could still find a working 1's complement machine to run your programs! Using 1's complement would introduce portability issues, but that's the norm for C. I'm not sure what the deal is for C++, but I wouldn't be surprised if it is the same.
In the case of Java, the language specification sets out precise sizes and representations for the primitive types, and precise behaviour for the arithmetic operators. This is done to eliminate the portability issues that arise when these things are implementation-specific. The Java designers specified 2's complement arithmetic because all modern CPU architectures implement 2's complement and not 1's complement integers.
For reasons why modern hardware implements 2's complement and not 1's complement, take a look at (for example) the Wikipedia pages on the subject. See if you can figure out the implications of the alternatives.
Almost all existing CPU hardware uses two's complement, so it makes sense that most programming languages do, too.
C and C++ support one's complement, if the hardware provides it.