If I do the following:
char c0 = CHAR_MAX; //8-bit
char c1 = CHAR_MAX; //8-bit
int i = c0*c1; //store in 32-bit variable
printf("%d\n", i); //prints 16129
We can see that there is no problem with two 8-bit numbers being multiplied together and producing a 32-bit output.
However, if I do
int i0 = INT_MAX; //32-bit
int i1 = INT_MAX; //32-bit
long long int ll = i0*i1; //store in 64-bit variable
printf("%lld\n", ll); //prints 1 -- overflow!
In this case, two 32-bit variables were multiplied together, overflowed, and then were assigned to the 64-bit variable.
So why did this overflow happen when multiplying the ints, but not the chars? Is it dependent on the default word size of my machine (32 bits)?