Any integer is converted to a given unsigned type by finding the smallest non-negative value that is congruent to that integer, modulo one more than the largest value that can be represented in the unsigned type.
Let's take this bit by bit, working backwards:
What is the largest value that can be represented in the unsigned type of width n bits?
2^(n) - 1.
What is one more than this value?
2^n.
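As a concrete check, assuming an 8-bit unsigned char (CHAR_BIT == 8), the largest representable value is 2^8 - 1 = 255 and one more than it is 256:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        printf("largest:  %d\n", UCHAR_MAX);       /* 255, i.e. 2^8 - 1 */
        printf("one more: %d\n", UCHAR_MAX + 1);   /* 256, i.e. 2^8 */
        return 0;
    }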
How does the conversion take place?
unsigned_val = signed_val mod 2^n

Here, mod is the mathematical modulo operation, which always yields a non-negative result; it is not C's % operator, which can yield a negative result when its left operand is negative.
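For example, converting -1 to a 16-bit unsigned type gives -1 mod 2^16 = 65535, since 65535 is the smallest non-negative value congruent to -1 modulo 65536. A minimal sketch in C, assuming unsigned short is 16 bits wide (USHRT_MAX == 65535):

    #include <stdio.h>

    int main(void)
    {
        int signed_val = -1;
        /* The conversion adds 2^16 once: -1 + 65536 == 65535 */
        unsigned short unsigned_val = (unsigned short)signed_val;
        printf("%u\n", (unsigned)unsigned_val);   /* prints 65535 */
        return 0;
    }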
Now, the why part: the Standard does not mandate what bit representation is used, so it has to describe the conversion in terms of values rather than bits; hence the jargon. In a two's complement representation -- which is by far the most commonly used -- this conversion does not change the bit pattern (unless there is a truncation, of course).
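A quick way to see the bit pattern surviving -- a sketch assuming a two's complement machine where int and unsigned int have the same width:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int s = -1;
        unsigned int u = (unsigned int)s;   /* value becomes UINT_MAX; bits stay all ones */
        /* On a two's complement machine both objects hold identical bit patterns */
        printf("same bits: %d\n", memcmp(&s, &u, sizeof s) == 0);   /* prints 1 */
        return 0;
    }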
Refer to the Integral Conversions section of the Standard for further details.