views: 629

answers: 8
When programming in a C-like language, should one's "default" integer type be int or uint/unsigned int? By "default", I mean cases where you don't need negative numbers and either type is easily big enough for the data you're holding. I can think of good arguments for both:

signed: Better-behaved mathematically; less possibility of weird behavior if you try to go below zero in some boundary case you didn't think of (see the sketch after this list); generally avoids odd corner cases.

unsigned: Provides a little extra assurance against overflow, just in case your assumptions about the values are wrong. Serves as documentation that the value represented by the variable should never be negative.
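
(For concreteness, a minimal sketch of the kind of below-zero corner case the first bullet has in mind; the values here are made up:)

    #include <iostream>

    int main() {
        unsigned int items = 0;

        // "Going below zero" doesn't go negative; it wraps around to a huge
        // value (4294967295 for a typical 32-bit unsigned int).
        items -= 1;
        std::cout << items << '\n';

        // Mixed comparisons are another common surprise: -1 is converted to
        // unsigned before the comparison, so this prints "no".
        int delta = -1;
        unsigned int limit = 1;
        std::cout << (delta < limit ? "yes" : "no") << '\n';
        return 0;
    }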

+2  A: 

I tend to go with signed unless I know I need unsigned: plain int is signed anyway, unsigned int takes more effort to type, and uint may give another programmer a slight pause to think about what the values can be.

So I don't see any benefit in defaulting to unsigned, since the normal int is signed.

James Black
+1 I would state that more strongly: most programmers' expectations (especially if coming from, say, Java) are that `int` is signed by default. It would be weird to see `int` in the code and for it to show unsigned behaviour.
Chris Jester-Young
+4  A: 
MadKeithV
+1, thanks, was just going to ask the same question. This bit is the clincher for me: "...that you need the additional value space."
James Morris
+6  A: 

The Google C++ Style Guide has an interesting opinion on unsigned integers:

(quote follows:)

On Unsigned Integers

Some people, including some textbook authors, recommend using unsigned types to represent numbers that are never negative. This is intended as a form of self-documentation. However, in C, the advantages of such documentation are outweighed by the real bugs it can introduce. Consider:

for (unsigned int i = foo.Length()-1; i >= 0; --i) ...

This code will never terminate! Sometimes gcc will notice this bug and warn you, but often it will not. Equally bad bugs can occur when comparing signed and unsigned variables. Basically, C's type-promotion scheme causes unsigned types to behave differently than one might expect.

So, document that a variable is non-negative using assertions. Don't use an unsigned type.

(end quote)
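
(A minimal sketch of what the quote's advice looks like in practice, using std::string as a stand-in for foo: a signed index, with the non-negative assumption stated as an assertion rather than an unsigned type.)

    #include <cassert>
    #include <string>

    int main() {
        std::string foo = "hello";

        // Use a signed index and document the non-negative invariant with an
        // assertion instead of encoding it in the type.
        int length = static_cast<int>(foo.size());
        assert(length >= 0);
        for (int i = length - 1; i >= 0; --i) {
            char c = foo[i];    // visits 'o', 'l', 'l', 'e', 'h' and terminates
            (void)c;
        }
        return 0;
    }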

Kinopiko
Actually, I think that C's type-promotion scheme causes *signed* ints to behave differently than one might expect. Unsigned ints behave fine, if you know modulo arithmetic, whereas signed ints have implementation-dependent behaviour and funny conversions all over the place. But if you're only going to use one, to avoid mixing them, it's going to have to be signed. So I agree with the conclusion that unsigned int shouldn't be used willy-nilly, but actually I think downward loops shouldn't be used willy-nilly either...
Steve Jessop
A: 

I doubt there is a really good language-agnostic answer to this. There are enough differences between languages and how they handle mixed types that no one answer is going to make sense for all (or even most).

In the languages I use most often, I use signed unless I have a specific reason to do otherwise. That's mostly C and C++ though. In another language, I might well give a different answer.

Jerry Coffin
+1  A: 

You don't get much 'assurance against overflow' with unsigned. You're just as likely to get strange behaviour as with signed, only different and slightly later... Better to get those assumptions right beforehand, maybe?
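
(A hypothetical illustration: the unsigned sum below overflows and silently wraps, so the wrong value only shows up one step later, in the average:)

    #include <iostream>
    #include <limits>

    int main() {
        unsigned int a = std::numeric_limits<unsigned int>::max() - 1;
        unsigned int b = 4;

        // No error, no trap: the sum wraps around to 2, and the bogus result
        // only becomes visible later, when the average comes out near zero.
        unsigned int average = (a + b) / 2;
        std::cout << average << '\n';   // prints 1, not the true average 2147483649
        return 0;
    }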

martin clayton
+1  A: 

Giving a variable a more specific type (like unsigned int) conveys more information about how it is used, and can help the compiler flag any place where you assign an "incorrect" value. For instance, if a variable tracks the database ID of an object/element, there should (likely) never be a time when the ID is less than zero (or one). In that sort of case, rather than merely asserting that invariant, using an unsigned integer type states it to other developers as well as to the compiler.
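
(A small sketch of that idea; the DatabaseId alias and lookup function are made-up names. With a warning such as GCC/Clang's -Wsign-conversion enabled, the negative call site below gets flagged at compile time:)

    // The alias documents "never negative" and lets the compiler help,
    // e.g. via -Wsign-conversion on GCC/Clang.
    using DatabaseId = unsigned int;

    struct Record { DatabaseId id; };

    Record lookup(DatabaseId id) {
        return Record{id};
    }

    int main() {
        lookup(42u);    // fine
        int bad = -1;
        lookup(bad);    // implicit int -> unsigned conversion; -Wsign-conversion warns here
        return 0;
    }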

John Reese
+1  A: 

As a rough rule of thumb, I use unsigned ints for counting things and signed ints for measuring things.

If you find yourself decrementing or subtracting from an unsigned int, then you should be in a context where you already expect to be taking great care not to underflow (for instance, because you're in some low-level code stepping back from the end of a string, so of course you have first ensured that the string is long enough to support this). If you aren't in a context like that, where it's absolutely critical that you don't go below zero, then you should have used a signed value.

In my usage, unsigned ints are for values which absolutely cannot go negative (or for that one in a million situation where you actually want modulo 2^N arithmetic), not for values which just so happen not to be negative, in the current implementation, probably.
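
(A minimal sketch of the "stepping back from the end of a string" case described above, with a made-up helper name; the i > 0 test before each decrement is what keeps the unsigned index from ever wrapping below zero:)

    #include <cstddef>
    #include <string>

    // Hypothetical helper: count trailing spaces by stepping back from the
    // end of the string with an unsigned index.
    std::size_t count_trailing_spaces(const std::string& s) {
        std::size_t count = 0;
        for (std::size_t i = s.size(); i > 0 && s[i - 1] == ' '; --i) {
            ++count;
        }
        return count;
    }

    int main() {
        return count_trailing_spaces("hello   ") == 3 ? 0 : 1;
    }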

Steve Jessop
A: 

In C-like languages int is signed by default; a variable only gets unsigned (non-negative-only) behaviour if the programmer explicitly writes unsigned...

Anjaiah