views: 665
answers: 16

I always use unsigned int for values that should never be negative. But today I noticed this situation in my code:

void CreateRequestHeader( unsigned bitsAvailable, unsigned mandatoryDataSize, 
    unsigned optionalDataSize )
{
    if ( bitsAvailable - mandatoryDataSize >= optionalDataSize ) {
        // Optional data fits, so add it to the header.
    }

    // BUG! The above includes the optional part even if
    // mandatoryDataSize > bitsAvailable.
}

Should I start using int instead of unsigned int for numbers, even if they can't be negative?

A: 

You'll need to look at the results of the operations you perform on the variables to check if you can get over/underflows - in your case, the result being potentially negative. In that case you are better off using the signed equivalents.

Timo Geusch
A: 

I don't know if it's possible in C, but in this case I would just cast the X-Y expression to an int.

InsertNickHere
The effect of signed overflow (which is what you're relying upon for this solution) is U.B. in Standard C.
Pavel Minaev
@Pavel Minaev I have to admit I don't know the abbreviation U.B.
InsertNickHere
"Undefined behavior".
Pavel Minaev
@Pavel Minaev Thanks.
InsertNickHere
+4  A: 
if (bitsAvailable >= optionalDataSize + mandatoryDataSize) {
    // Optional data fits, so add it to the header.
}

Bug-free, so long as mandatoryDataSize + optionalDataSize can't overflow the unsigned integer type -- the naming of these variables leads me to believe this is likely to be the case.

Stephen Canon
+1  A: 

The situation where (bitsAvailable – mandatoryDataSize) produces an 'unexpected' result when the types are unsigned and bitsAvailable < mandatoryDataSize is a reason that sometimes signed types are used even when the data is expected to never be negative.

I think there's no hard and fast rule - I typically 'default' to using unsigned types for data that has no reason to be negative, but then you have to take care to ensure that arithmetic wrapping doesn't expose bugs.

Then again, if you use signed types, you still have to sometimes consider overflow:

INT_MAX + 1

The key is that you have to take care when performing arithmetic for these kinds of bugs.

Michael Burr
The "wrapping" is the only interesting feature of unsigned ints (with regular ints you only get undefined behavior). If the wrapping is going to be a problem (or if you have to be careful to avoid it), then that's a clear sign that "unsigned" was the wrong choice. Using unsigned and then having problems with the wrapping - its most distinctive feature - is nonsense: when you use unsigned you WANT the wrapping; you should choose unsigned BECAUSE of the wrapping behaviour.
6502
@6502: you make a really good point, and I honestly think that I sometimes use unsigned types when signed types might be a better choice. But I think there are also exceptions; for example, when dealing with file sizes you may need to be able to deal with the full range of `size_t` (or even some larger unsigned type), but you might still need to handle wrapping errors.
Michael Burr
+10  A: 

Should I always ...

The answer to "Should I always ..." is almost certainly 'no'; there are a lot of factors that dictate which datatype you should use, and consistency is important.

But this is a highly subjective question, and it's really easy to make mistakes with unsigned types:

for (unsigned int i = 10; i >= 0; i--);

results in an infinite loop.

This is why some style guides (Google's included) discourage unsigned data types.

In my personal opinion, I haven't run into many bugs caused by these problems with unsigned data types - I'd say use assertions to check your code and use them judiciously (and less when you're performing arithmetic).

Stephen
IMHO, `unsigned` helps catch errors during compilation phase rather than run-time. Ordinal values, such as quantities, should be `unsigned int` rather than `signed int`.
Thomas Matthews
Undetected underflow and overflow are basic C-family gotchas - using signed vs. unsigned changes the error cases, but doesn't get rid of any. Of course having an error case right adjacent to zero can be a *particularly* bad thing, but as you say, it depends what you're doing. In the loop above, you could check for `!= ~0` as your end condition - it's a useful unsigned invalid/end value. It's a slight cheat (`0` is an int, so `~0` is `-1`), but on sane machines the implicit cast just works, and visually it's less weird than having an unsigned `-1`.
Steve314
@Thomas : Thanks for the feedback, but I'm not entirely sure I agree. C (and C++) provides implicit conversions between `signed` and `unsigned` types, which can yield silent and surprising results. There aren't many syntactic constraints between the two that can trigger a compilation failure (unless you pass additional compiler warning flags). The benefit of an `unsigned` type is mostly semantic, unless you're specifically using the unsigned type to avoid manipulating the sign bit (e.g. in a bitmask).
Stephen
@Steve314 : Yep, there are certainly ways to avoid this - but they aren't as intuitive to read as `>=0`... which is why it became a 'gotcha' :)
Stephen
`for (unsigned i = 10; i != -1; --i)` is perfectly fine.
Alexandre C.
Bad Things™ happen when you use signed numbers to represent size-parameters. See my post.
BlueRaja - Danny Pflughoeft
+6  A: 

You can't fully avoid unsigned types in portable code, because many typedefs in the standard library are unsigned (most notably size_t), and many functions return those (e.g. std::vector<>::size()).

That said, I generally prefer to stick to signed types wherever possible for the reasons you've outlined. It's not just the case you bring up - in case of mixed signed/unsigned arithmetic, the signed argument is quietly promoted to unsigned.

Pavel Minaev
A: 

If your numbers should never be less than zero, but have a chance of ending up < 0 anyway, by all means use signed integers and sprinkle assertions or other runtime checks around. If you're actually working with 32-bit (or 64-, or 16-bit, depending on your target architecture) values where the most significant bit means something other than "-", you should only use unsigned variables to hold them. It's easier to detect integer overflows where a number that should always be positive is very negative than when it's zero, so if you don't need that bit, go with the signed ones.

Nathon
A: 

Suppose you need to count from 1 to 50000. You can do that with a two-byte unsigned integer, but not with a two-byte signed integer (if space matters that much).

John at CashCommons
Why can't you? Do you mean 2 byte (16 bit) values instead?
Sam
What I can't do is count. Fixed. ;)
John at CashCommons
+2  A: 

From the comments on one of Eric Lippert's blog posts (see here):

Jeffrey L. Whitledge

I once developed a system in which negative values made no sense as a parameter, so rather than validating that the parameter values were non-negative, I thought it would be a great idea to just use uint instead. I quickly discovered that whenever I used those values for anything (like calling BCL methods), they had to be converted to signed integers. This meant that I had to validate that the values didn't exceed the signed integer range on the top end, so I gained nothing. Also, every time the code was called, the ints that were being used (often received from BCL functions) had to be converted to uints. It didn't take long before I changed all those uints back to ints and took all that unnecessary casting out. I still have to validate that the numbers are not negative, but the code is much cleaner!

Eric Lippert

Couldn't have said it better myself. You almost never need the range of a uint, and they are not CLS-compliant. The standard way to represent a small integer is with "int", even if there are values in there that are out of range. A good rule of thumb: only use "uint" for situations where you are interoperating with unmanaged code that expects uints, or where the integer in question is clearly used as a set of bits, not a number. Always try to avoid it in public interfaces. - Eric

Brian
That's in regards to C#, not C
BlueRaja - Danny Pflughoeft
@BlueRaja: The specific examples are C#-specific, but the general points the comments make are still quite true.
Brian
As I mention in my post, you **should** be using unsigned data types for APIs which require a size-parameter (use `size_t`). This is not the case in .Net, where buffer overflows are a non-issue.
BlueRaja - Danny Pflughoeft
@BlueRaja: The quote explicitly states that you should use unsigned data types when calling code that expects unsigned int.
Brian
I meant you should be using unsigned data types for your own APIs which require a size-parameter (in C), regardless of what you're calling.
BlueRaja - Danny Pflughoeft
@BlueRaja: I fail to see how that is not something which would be naturally concluded from the quote. Although you should try to avoid it in public interfaces, if the code your public interface is interacting with is expecting an unsigned data type, then you will need to use an unsigned value to interoperate with the code you are calling through that interface.
Brian
A: 

If there is a possibility of overflow, then widen the values to the next larger data type for the calculation, i.e.:

#include <stdint.h>

void CreateRequestHeader( unsigned int bitsAvailable, unsigned int mandatoryDataSize, unsigned int optionalDataSize ) 
{ 
    int64_t available = bitsAvailable;
    int64_t mandatory = mandatoryDataSize;
    int64_t optional = optionalDataSize;

    if ( (mandatory + optional) <= available ) { 
        // Optional data fits, so add it to the header. 
    } 
} 

Otherwise, just check the values individually instead of calculating:

void CreateRequestHeader( unsigned int bitsAvailable, unsigned int mandatoryDataSize, unsigned int optionalDataSize ) 
{ 
    if ( bitsAvailable < mandatoryDataSize ) { 
        return;
    } 
    bitsAvailable -= mandatoryDataSize;

    if ( bitsAvailable < optionalDataSize ) { 
        return;
    } 
    bitsAvailable -= optionalDataSize;

    // Optional data fits, so add it to the header. 
} 
Remy Lebeau - TeamB
+1  A: 

No, you should use the type that is right for your application. There is no golden rule. Sometimes on small microcontrollers it is, for example, faster and more memory-efficient to use 8- or 16-bit variables wherever possible, as that is often the native datapath size, but that is a very special case. I also recommend using stdint.h wherever possible. If you are using Visual Studio you can find BSD-licensed versions.

+6  A: 

Bjarne Stroustrup, creator of C++, warns about using unsigned types in his book The C++ programming language:

The unsigned integer types are ideal for uses that treat storage as a bit array. Using an unsigned instead of an int to gain one more bit to represent positive integers is almost never a good idea. Attempts to ensure that some values are positive by declaring variables unsigned will typically be defeated by the implicit conversion rules.

5ound
Yet the standard library uses unsigned types for container sizes (a major source of bugs in C++ programs)...
6502
@6502 I would interface with the standard containers using iterators for almost every task except the most trivial or throw-away snippets.
AraK
+6  A: 

The answer is Yes. The "unsigned" int type of C and C++ is not an "always positive integer", no matter what the name of the type suggests. The behavior of C/C++ unsigned ints makes no sense if you try to read the type as "non-negative"... for example:

  • The difference of two unsigned values is an unsigned number (which makes no sense if you read it as "the difference of two non-negative numbers is non-negative")
  • The addition of an int and an unsigned int is unsigned
  • There is an implicit conversion between int and unsigned int (if you read unsigned as "non-negative", it's the opposite conversion that would make sense)
  • If you declare a function accepting an unsigned parameter and someone passes a negative int, it is simply converted implicitly to a huge positive value; in other words, an unsigned parameter type doesn't help you find errors at either compile time or runtime.

Indeed unsigned numbers are very useful for certain cases because they are elements of the ring "integers-modulo-N" with N being a power of two. Unsigned ints are useful when you want to use that modulo-n arithmetic, or as bitmasks; they are NOT useful as quantities.

Unfortunately in C and C++ unsigned was also used to represent non-negative quantities, to be able to use all 16 bits back when integers were that small... at that time, being able to count to 32k or 64k was considered a big difference. I'd classify it basically as a historical accident... you shouldn't try to read a logic into it, because there was no logic.

By the way, in my opinion that was a mistake... if 32k is not enough then quite soon 64k won't be enough either; abusing the modulo integer just for one extra bit was, in my opinion, too high a cost to pay. Of course it would have been reasonable if a proper non-negative type had been present or defined... but the unsigned semantics are just wrong for use as a non-negative type.

Sometimes you may find someone who says that unsigned is good because it "documents" that you only want non-negative values... however, that documentation is of value only to people who don't actually know how unsigned works in C or C++. For me, seeing an unsigned type used for non-negative values simply means that whoever wrote the code didn't understand that part of the language.

If you really understand and want the "wrapping" behavior of unsigned ints then they're the right choice (for example, I almost always use "unsigned char" when I'm handling bytes); if you're not going to use the wrapping behavior (and that behavior is just going to be a problem for you, as in the case of the difference you showed), then that's a clear indicator that the unsigned type is a poor choice and you should stick with plain ints.

Does this mean that the return type of C++ std::vector<>::size() is a bad choice? Yes... it's a mistake. But be prepared to be called names by those who don't understand that "unsigned" is just a name... what counts is the behavior, and that is "modulo-n" behavior (and no one would consider a "modulo-n" type a sensible choice for the size of a container).

6502
-1. Er, I mean +4294967295 :) `unsigned`'s semantics are illogical.
dan04
+7  A: 

Some cases where you should use unsigned integer types are:

  • You need to treat a datum as a pure binary representation.
  • You need the semantics of modulo arithmetic you get with unsigned numbers.
  • You have to interface with code that uses unsigned types (e.g. standard library routines that accept/return size_t values).

But for general arithmetic, the thing is, when you say that something "can't be negative," that does not necessarily mean you should use an unsigned type. Because you can put a negative value into an unsigned; it just becomes a really large value when you go to get it out. So, if you mean that negative values are forbidden, such as for a basic square root function, then you are stating a precondition of the function, and you should assert. And you can't assert that what cannot be, is; you need a way to hold out-of-band values so you can test for them (this is the same sort of logic behind getchar() returning an int rather than a char).

Additionally, the choice of signed-vs.-unsigned can have practical repercussions on performance, as well. Take a look at the (contrived) code below:

#include <stdbool.h>

bool foo_i(int a) {
    return (a + 69) > a;
}

bool foo_u(unsigned int a)
{
    return (a + 69u) > a;
}

Both foo's are the same except for the type of their parameter. But, when compiled with c99 -fomit-frame-pointer -O2 -S, you get:

        .file   "try.c"
        .text
        .p2align 4,,15
.globl foo_i
        .type   foo_i, @function
foo_i:
        movl    $1, %eax
        ret
        .size   foo_i, .-foo_i
        .p2align 4,,15
.globl foo_u
        .type   foo_u, @function
foo_u:
        movl    4(%esp), %eax
        leal    69(%eax), %edx
        cmpl    %eax, %edx
        seta    %al
        ret
        .size   foo_u, .-foo_u
        .ident  "GCC: (Debian 4.4.4-7) 4.4.4"
        .section        .note.GNU-stack,"",@progbits

You can see that foo_i() is more efficient than foo_u(). This is because unsigned arithmetic overflow is defined by the standard to "wrap around," so (a + 69u) may very well be smaller than a if a is very large, and thus there must be code for this case. On the other hand, signed arithmetic overflow is undefined, so GCC will go ahead and assume signed arithmetic doesn't overflow, and so (a + 69) can't ever be less than a. Choosing unsigned types indiscriminately can therefore unnecessarily impact performance.

Cirno de Bergerac
+2  A: 

I seem to be in disagreement with most people here, but I find unsigned types quite useful, but not in their raw historic form.

If you consistently stick to the semantics a type represents for you, then there should be no problem: use size_t (unsigned) for array indices, data offsets etc., and off_t (signed) for file offsets. Use ptrdiff_t (signed) for differences of pointers. Use uint8_t for small unsigned integers and int8_t for signed ones. And you avoid at least 80% of portability problems.

And don't use int, long, unsigned, or char if you don't have to. They belong in the history books. (Sometimes you must, e.g. for error returns or bit fields.)

And to come back to your example:

bitsAvailable - mandatoryDataSize >= optionalDataSize

can be easily rewritten as

bitsAvailable >= optionalDataSize + mandatoryDataSize

which doesn't avoid the problem of a potential overflow (assert is your friend) but gets you a bit nearer to the idea of what you want to test, I think.

Jens Gustedt
I like this: It's a good idea to avoid subtraction if you are using unsigned types.
Steve Hanov
+1  A: 

One thing that hasn't been mentioned is that mixing signed and unsigned numbers can lead to security bugs. This is a big issue, since many of the functions in the standard C library take/return unsigned numbers (fread, memcpy, malloc, etc. all take size_t parameters).

For instance, take the following innocuous example (from real code):

//Copy a user-defined structure into a buffer and process it
char* processNext(char* data, short length)
{
    char buffer[512];
    if (length <= 512) {
        memcpy(buffer, data, length);
        process(buffer);
        return data + length;
    } else {
        return NULL;
    }
}

Looks harmless, right? The problem is that length is signed, but is converted to unsigned when passed to memcpy. Thus setting length to SHRT_MIN will pass the <= 512 test, but cause memcpy to copy far more than 512 bytes into the buffer - this allows an attacker to overwrite the function's return address on the stack and (after a bit of work) take over your computer!

You may naively be saying, "It's so obvious that length needs to be size_t or checked to be >= 0, I could never make that mistake". Except, I guarantee that if you've ever written anything non-trivial, you have. So have the authors of Windows, Linux, BSD, Solaris, Firefox, OpenSSL, Safari, MS Paint, Internet Explorer, Google Picasa, Opera, Flash, Open Office, Subversion, Apache, Python, PHP, Pidgin, Gimp, ... on and on and on ... - and these are all bright people whose job is knowing security.

In short, always use size_t for sizes.

Man, programming is hard.

BlueRaja - Danny Pflughoeft
No, **forgetting bounds checking** results in security bugs. If you got it wrong in the other direction, `unsigned` wouldn't help you, your function would happily write to `myArray[0xFFFFFFFF]`.
dan04
@dan04: No, **the root cause is using signed ints when you should be using unsigned ints** like `size_t` *(or, more precisely, it's the implicit conversion between signed/unsigned numbers)*. Of course, forgetting to check bounds is also a problem. I've changed the example to make this more clear - thanks.
BlueRaja - Danny Pflughoeft