I'm wondering about this: when I try to assign an integer value to an int variable (16-bit compiler, 2 bytes for integers), let's say:
int a;
a=40000;
Since that value can't be represented within the range of the type, it gets truncated. But what I'm seeing is that the resulting value in a is the bit pattern for -25000 (or some number close to that), which means that the binary representation the compiler chose for decimal 40000 was the unsigned integer representation. And that raises my question: how does the compiler choose the type for these literal expressions?
I'm guessing it uses the type capable of handling the value that needs the least storage space.
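
For reference, here is a small sketch of roughly what I'm seeing, emulated with int16_t on a modern compiler since I don't have the 16-bit toolchain in front of me (so treating int16_t as the 2-byte int, and the exact output, are assumptions on my part):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* 40000 is 0x9C40; reading that bit pattern back as a signed 16-bit
       two's-complement value gives -25536. The conversion of an out-of-range
       value to a signed type is implementation-defined in C, but this wrap
       is what common compilers do. */
    int16_t a = (int16_t)40000;
    printf("a = %d\n", a);   /* prints a = -25536 here */
    return 0;
}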