#include "stdio.h"

int main()
{
    int x = -13701;
    unsigned int y = 3;
    signed short z = x / y;

    printf("z = %d\n", z);

    return 0;
}

I would expect the answer to be -4567. I am getting "z = 17278". Why does a promotion of these numbers result in 17278?

I executed this in Code Pad.

+4  A: 

Short answer: the division first converts x to unsigned int. Only then is the result converted back to a signed short.

Long answer: read this SO thread.
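
A minimal sketch (assuming a typical platform with 32-bit int) that writes both conversions out explicitly and shows one way to get the expected quotient, by casting y back to int before dividing:

#include <stdio.h>

int main(void)
{
    int x = -13701;
    unsigned int y = 3;

    /* What the compiler effectively evaluates: x is converted to
       unsigned int before the division takes place. */
    signed short z_actual = (signed short) ((unsigned int) x / y);

    /* Forcing the division to happen in signed arithmetic gives the
       quotient you expected. */
    signed short z_expected = (signed short) (x / (int) y);

    printf("z_actual   = %d\n", z_actual);   /* 17278 on a typical 32-bit-int platform */
    printf("z_expected = %d\n", z_expected); /* -4567 */

    return 0;
}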

Eli Bendersky
+3  A: 

The problem comes from the unsigned int y. Because of it, x / y is evaluated as unsigned. It works with:

#include "stdio.h"

int main()
{
    int x = -13701;
    signed int y = 3;
    signed short z = x / y;

    printf("z = %d\n", z);

    return 0;
}
Elenaher
+8  A: 

The hidden type conversions are:

signed short z = (signed short) (((unsigned int) x) / y);

When you mix signed and unsigned types, the unsigned ones win. x is converted to unsigned int, divided by 3, and then that result is down-converted to (signed) short. With 32-bit integers:

(unsigned) -13701         == (unsigned) 0xFFFFCA7B // Bit pattern
(unsigned) 0xFFFFCA7B     == (unsigned) 4294953595 // Re-interpret as unsigned
(unsigned) 4294953595 / 3 == (unsigned) 1431651198 // Divide by 3
(unsigned) 1431651198     == (unsigned) 0x5555437E // Bit pattern of that result
(short) 0x5555437E        == (short) 0x437E        // Strip high 16 bits
(short) 0x437E            == (short) 17278         // Re-interpret as short
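
A small check program (a sketch assuming 32-bit unsigned int and 16-bit short) prints those intermediate values:

#include <stdio.h>

int main(void)
{
    int x = -13701;
    unsigned int u = (unsigned int) x;   /* 0xFFFFCA7B == 4294953595 */
    unsigned int q = u / 3;              /* 0x5555437E == 1431651198 */
    short z = (short) q;                 /* low 16 bits: 0x437E == 17278 */

    printf("u = %u (0x%X)\n", u, u);
    printf("q = %u (0x%X)\n", q, q);
    printf("z = %d\n", z);

    return 0;
}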

By the way, the signed keyword is unnecessary. signed short is a longer way of saying short. The only type that needs an explicit signed is char. char can be signed or unsigned depending on the platform; all other types are always signed by default.
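
A quick way to check which one a given platform uses is to look at CHAR_MIN from <limits.h> (a minimal sketch):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* CHAR_MIN is 0 if plain char is unsigned, negative if it is signed. */
    printf("plain char is %s\n", CHAR_MIN < 0 ? "signed" : "unsigned");
    return 0;
}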

John Kugelman
It might be worth noting that in the general case the signed-to-unsigned *conversion* is not based on re-interpretation. In fact, conversion and re-interpretation are very, very different things, and what we have in this case is actually a *conversion*, not a re-interpretation.
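
A minimal sketch of the distinction, assuming a two's-complement platform (where the two happen to produce the same bits): conversion is defined in terms of the value, adding UINT_MAX + 1 to a negative number, while re-interpretation merely relabels the stored bytes, here done via memcpy:

#include <stdio.h>
#include <string.h>

int main(void)
{
    int x = -13701;

    /* Conversion: value-based, yields x + UINT_MAX + 1 for negative x. */
    unsigned int converted = (unsigned int) x;

    /* Re-interpretation: copy the raw bits unchanged. */
    unsigned int reinterpreted;
    memcpy(&reinterpreted, &x, sizeof x);

    /* On a two's-complement machine both lines print 4294953595,
       but only the first is guaranteed by the conversion rules. */
    printf("converted     = %u\n", converted);
    printf("reinterpreted = %u\n", reinterpreted);

    return 0;
}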
AndreyT
++, nice answer
Eli Bendersky
+1  A: 

Every time you mix "large" signed and unsigned values in additive and multiplicative arithmetic operations, the unsigned type "wins" and the evaluation is performed in the domain of the unsigned type ("large" meaning int and larger). If your original signed value was negative, it will first be converted to a positive unsigned value in accordance with the rules of signed-to-unsigned conversion. In your case -13701 turns into UINT_MAX + 1 - 13701, and that result is used as the dividend.
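
For instance, computing UINT_MAX + 1 - 13701 directly matches what the cast produces (a sketch assuming 32-bit unsigned int):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    int x = -13701;

    /* Signed-to-unsigned conversion adds UINT_MAX + 1 to a negative value. */
    unsigned int dividend = (unsigned int) x;

    printf("UINT_MAX + 1 - 13701 = %u\n", UINT_MAX - 13701u + 1u);
    printf("(unsigned int) x     = %u\n", dividend);      /* 4294953595 on 32-bit */
    printf("dividend / 3         = %u\n", dividend / 3u); /* 1431651198 */

    return 0;
}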

Note that on a typical platform with 32-bit int, the signed-to-unsigned conversion yields the unsigned value 4294953595. After division by 3 you get 1431651198. This value is too large to be forced into a short object on a platform with a 16-bit short type. An attempt to do so results in implementation-defined behavior. So, if the properties of your platform match these assumptions, your code produces implementation-defined behavior. Formally speaking, the "meaningless" 17278 value you are getting is nothing more than a specific manifestation of that implementation-defined behavior. It is possible that, if you compiled your code with overflow checking enabled (if your compiler supports it), it would trap on the assignment.

AndreyT
Actually, it's not undefined behaviour: the standard says that either the result is implementation-defined or an implementation-defined signal is raised (conversion to a narrower signed type is distinct from overflow during a calculation).
caf
@caf: You are quite right. Thanks for the correction. Hmm... I remember making that same correction myself quite a few times, but in this case I somehow forgot all about it :)
AndreyT