Hi,

I have a small sample function:

#define VALUE 0

int test(unsigned char x) {
  if (x>=VALUE)
    return 0;
  else
    return 1;
}

My compiler warns me that the comparison (x>=VALUE) is always true, which is correct: x is an unsigned char and VALUE is defined as 0, so x>=0 can never be false. So I changed my code to:

if ( ((signed int) x ) >= ((signed int) VALUE ))

But the warning comes back. I tested it with three GCC versions (all > 4.0; sometimes you have to enable -Wextra to see it).

With the change there is an explicit cast, so it should be a signed int comparison. Why does the compiler still claim that the comparison is always true?

+1  A: 

The #define of VALUE to 0 means that your function is reduced to this:

int test(unsigned char x) {
  if (x>=0)
    return 0;
  else
    return 1;
}

Since x is always passed in as an unsigned char, then it will always have a value between 0 and 255 inclusive, regardless of whether you cast x or 0 to a signed int in the if statement. The compiler therefore warns you that x will always be greater than or equal to 0, and that the else clause can never be reached.

Paul Stephenson
+7  A: 

Even with the cast, the comparison is still true in all cases of defined behavior. The compiler still determines that (signed int)0 has the value 0, and still determines that (signed int)x is non-negative if your program has defined behavior (converting from unsigned to signed is implementation-defined if the value is out of range for the signed type).

So the compiler continues warning because it continues to eliminate the else case altogether.

Edit: To silence the warning, write your code so that the degenerate comparison is never emitted:

#define VALUE 0

int test(unsigned char x) {
#if VALUE==0
  return 0;                 /* x>=VALUE is always true when VALUE is 0 */
#else
  return x>=VALUE ? 0 : 1;
#endif
}
Martin v. Löwis
An `unsigned char` value can *never* be out of range of a `signed int`. Even if the sign bit is set, the value is well within the range of a `signed int`, and the operation is always well-defined (and the result is always positive).
Konrad Rudolph
Oops, I missed that it was from char to int. Still, this could happen on a (theoretical) C implementation where sizeof(int)==1: since int must be at least 16 bits wide, that would require a char that is also 16 bits. I agree that on realistic platforms, no overflow can happen here.
Martin v. Löwis
+9  A: 

I think GCC is smarter than you in this case.

Eimantas
+3  A: 

x is an unsigned char, meaning its value is between 0 and 255 inclusive. Since an int is wider than a char, casting the unsigned char to a signed int retains the char's original value. Since that value is always >= 0, your if condition is always true.

Corey D
+1  A: 

All the values of an unsigned char fit perfectly in your int, so even with the cast you will never get a negative value. The cast you need is to signed char; however, in that case you should declare x as signed in the function signature. There is no point telling callers that you need an unsigned value when in fact you need a signed one.

Bojan Resnik