
If I do the following:

int c0 = CHAR_MAX; //8 bit
int c1 = CHAR_MAX; //8-bit
int i = c0*c1; //store in 32-bit variable
printf("%d\n", i); //prints 16129

We can see that there is no problem with two 8-bit numbers being multiplied together and producing a 32-bit output.

However, if I do

int i0 = INT_MAX; //32-bit
int i1 = INT_MAX; //32 bit variable
long long int ll = i0*i1; //store in 64-bit variable
printf("%lld\n", ll); //prints 1..overflow!!

In this case, two 32-bit variables were multiplied together, overflowed, and then were assigned to the 64-bit variable.

So why did this overflow happen when multiplying the ints, but not the chars? Is it dependent on the default word size of my machine (32 bits)?

+3  A: 

You should change your second code sample like this:

int i0 = INT_MAX; //32-bit
int i1 = INT_MAX; //32 bit variable
long long ll = ((long long)i0)*i1; //compute and store in 64-bit variable
printf("%lld\n", ll);

that is, cast (at least) one of the ints to 64 bits before multiplying them. Otherwise the overflow happens because the multiplication is carried out in a temporary of type int before the result is assigned to the long long variable. The result of an expression is computed in the type of its widest (highest-precision) operand.

In the first example, an int is large enough to hold the result of multiplying chars, so there is no overflow.
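
For reference, here is a complete program putting both cases side by side (a minimal sketch; the value shown for the overflowing line assumes a typical two's-complement machine with 32-bit int, and strictly speaking signed overflow is undefined behaviour):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* chars: the values are small, and int easily holds 127 * 127 */
    int c0 = CHAR_MAX;
    int c1 = CHAR_MAX;
    printf("%d\n", c0 * c1);               /* 16129 */

    /* ints: without a cast the product is computed in int and overflows */
    int i0 = INT_MAX;
    int i1 = INT_MAX;
    long long bad  = i0 * i1;              /* undefined behaviour; typically 1 */
    long long good = (long long)i0 * i1;   /* 4611686014132420609 */
    printf("%lld\n%lld\n", bad, good);

    return 0;
}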

As a side note, naming your variable ll is not recommended as it is very difficult to differentiate between the digit '1' and the lowercase letter 'l'.

Péter Török
No need to cast both. One is enough. Also "long long int" is redundant, "long long" is enough.
iconiK
@iconiK, just fixed the first issue while you typed in your comment :-) I will fix the second too, thanks.
Péter Török
A: 

How Typecast works...


Unless an explicit typecast is specified, an expression is evaluated at the precision of the highest-precision variable/constant involved.

As Péter pointed out, using an explicit typecast in the expression forces higher precision.
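
A small sketch of this rule, using sizeof to inspect the type an expression is evaluated in (the sizes printed assume a common ABI with 32-bit int and 64-bit long long and double):

#include <stdio.h>

int main(void)
{
    int i = 1;
    long long ll = 1;
    double d = 1.0;

    /* sizeof reports the type each expression is evaluated in */
    printf("%zu\n", sizeof(i * i));    /* int * int       -> int, typically 4       */
    printf("%zu\n", sizeof(i * ll));   /* int * long long -> long long, typically 8 */
    printf("%zu\n", sizeof(i * d));    /* int * double    -> double, typically 8    */
    return 0;
}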

NOTE: I didn't get the "long long int" part. Maybe it's been a long time since I saw one... ;-)

  • Does long long int really declare a 64-bit int??

Which compiler are you using?

CVS-2600Hertz
Yes, long long is at least 64 bits.
iconiK
+1  A: 

There's a logic fault in your explanation of what is going on.

On at least Linux systems, CHAR_MAX certainly isn't an 8-bit number. It's a (more or less) plain preprocessor define, like so:

#  define SCHAR_MAX     127

/* Maximum value an `unsigned char' can hold.  (Minimum is 0.)  */
#  define UCHAR_MAX     255

/* Minimum and maximum values a `char' can hold.  */
#  ifdef __CHAR_UNSIGNED__
#   define CHAR_MIN     0
#   define CHAR_MAX     UCHAR_MAX
#  else
#   define CHAR_MIN     SCHAR_MIN
#   define CHAR_MAX     SCHAR_MAX
#  endif

So, for a system with signed chars, the last two lines are in effect, which means that when you write CHAR_MAX in your code, the compiler sees a plain 127, which has type int.

This means that the multiplication CHAR_MAX * CHAR_MAX happens at int precision.
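
One quick way to convince yourself of this (a small sketch; it uses C11 _Generic selection, so it needs a C11 compiler):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* _Generic dispatches on the type of CHAR_MAX: it matches int, not char */
    printf("%s\n", _Generic(CHAR_MAX, char: "char", int: "int", default: "other"));
    printf("%zu %zu\n", sizeof(CHAR_MAX), sizeof(int));   /* same size */
    return 0;
}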

unwind
**ALL** arithmetic happens at at-least-int precision. In fact, it is impossible to have an expression of type smaller than `int` without an explicit cast.
R..
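
To illustrate R..'s comment, a minimal sketch (the printed size assumes a typical 4-byte int):

#include <stdio.h>

int main(void)
{
    char a = 100, b = 100;
    /* both operands are promoted to int before the multiplication,
       so the expression a * b has type int, not char */
    printf("%zu %zu\n", sizeof(a * b), sizeof(int));   /* same value, e.g. 4 4 */
    printf("%d\n", a * b);                             /* 10000, no char-range overflow */
    return 0;
}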