views: 304
answers: 7
+8  Q: 

uint vs int in C#

I have observed for a while that C# programmers tend to use int everywhere, and rarely resort to uint. But I have never discovered a satisfactory answer as to why.

If interoperability is your goal, uint shouldn't appear in public APIs because not all CLI languages support unsigned integers. I suspect this is why uint is used sparingly in the BCL, but it doesn't explain why int is so prevalent even in internal classes.

In C++, if you have an integer for which negative values make no sense, you choose an unsigned integer.

This clearly signals that negative numbers are not allowed or expected, and the compiler will do some checking for you. I also suspect that, in the case of array indices, the JIT can easily drop the lower-bound check.

However, when mixing int and uint types, extra care and casts will be needed.
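
For illustration, a minimal sketch (hypothetical values) of the casts that mixing the two types forces on you:

    int offset = -2;
    uint length = 10;

    // uint sum = length + offset;         // error CS0266: uint + int is evaluated as long
    uint sum = (uint)(length + offset);    // explicit cast required (and it wraps if the result is negative)
    int signedSum = (int)length + offset;  // or cast the uint side and stay signed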

Should uint be used more? Why?

+14  A: 

int is shorter to type than uint.

jjnguy
LOL! That's a great one.
Reed Copsey
I suspect this is pretty close to the truth. Why use `uint` when 99% of the time (in my experience) `int` will suffice?
Matthew Jones
Yeah, I've never had a reason to use a `uint`. And, if I need to store a larger number I just use a `long`. It is nice to be able to use -1 as an error or default value.
jjnguy
@Justin: "Magic numbers" like -1 are not a good idea, in general. Switching to long also means using 2x the memory for no reason... `uint` definitely is valuable, provided you don't need to interact with other APIs.
Reed Copsey
I never feel comfortable using an `int` to index an array, because I'm never going to have a negative index. It seems blindingly obvious that a `uint` should be used in this case.
Mark H
Not to mention, more readable. If you ever pass your code/algorithms to someone less experienced than you to read, using lots of `uint` can hang them up a bit. `int` is perfectly acceptable in all situations where you control the range it will take.
drharris
I rather enjoyed the "old days" of Delphi's range types. That really tripped people up when an array could be indexed from 5..6.
Jesse C. Slicer
+10  A: 

Your observation of why uint isn't used in the BCL is the main reason, I suspect.

UInt32 is not CLS-compliant, which means that it is wholly inappropriate for use in public APIs. If you're going to be using uint in your private API, this will mean doing conversions to other types - and it's typically easier and safer to just keep the type the same.
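
For example, a minimal sketch (hypothetical type and member names) of what the compiler reports when a public uint member meets a CLS-compliant assembly:

    [assembly: System.CLSCompliant(true)]

    public class Counter
    {
        // warning CS3001: Argument type 'uint' is not CLS-compliant
        public void Add(uint amount) { }

        // Internal members are not part of the public surface, so uint is fine here.
        internal void AddInternal(uint amount) { }
    }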

I also suspect that this is not as common in C# development, even when C# is the only language being used, primarily because it is not common in the BCL. Developers, in general, try to (thankfully) mimic the style of the framework on which they are building - in C#'s case, this means trying to make your APIs, public and internal, look as much like the .NET Framework BCL as possible. This would mean using uint sparingly.

Reed Copsey
http://stackoverflow.com/questions/2013116/should-i-use-uint-in-c-for-values-that-cant-be-negative is a question that deals with a similar topic
Stephan
+1  A: 

I prefer uint to int unless a negative number is actually in the range of acceptable values. In particular, accepting an int param but throwing an ArgumentException if the number is less than zero is just silly--use a uint!
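
As a sketch of that contrast (hypothetical method and class names), the two designs look like this:

    using System;

    public class RetryPolicy
    {
        // Signed parameter: the invalid half of the range has to be rejected at run time.
        public void SetRetryCount(int count)
        {
            if (count < 0)
                throw new ArgumentException("count cannot be negative", "count");
            // ...
        }

        // Unsigned parameter: a negative argument simply does not compile.
        public void SetRetryCount(uint count)
        {
            // ...
        }
    }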

I agree that uint is underused, and I encourage everyone else to use it more.

JSBangs
It is very dangerous to accept only uints and not check the bounds. If someone passes a negative value, the CLR will interpret it as a large uint, meaning for -1 you get uint.MaxValue. This is not desired behaviour.
Henri
@Henri: C# doesn't have an implicit conversion from int to uint, so there is no "If someone passes a negative value". Of course a bounds check on the upper limit is still appropriate (but now you only need one check instead of two).
Ben Voigt
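
To make that exchange concrete, a small sketch (hypothetical Process method) of what the compiler does and does not allow:

    class Demo
    {
        static void Process(uint value) { }

        static void Main()
        {
            int negative = -1;
            // Process(negative);    // error CS1503: cannot convert from 'int' to 'uint'
            // Process(-1);          // same error: there is no implicit int-to-uint conversion
            Process((uint)negative); // an explicit cast compiles, and -1 becomes uint.MaxValue
        }
    }
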
A: 

I think it is just laziness. C# is inherently a choice for development on desktops and other machines with relatively abundant resources.

C and C++, however, have deep roots in old systems and embedded systems where memory is scarce, so programmers are used to thinking carefully about which datatype to use. C# programmers are lazy, and since resources are generally plentiful, nobody really optimizes memory usage (in general, not always, of course). Even if a byte would be sufficient, a lot of C# programmers, including me, just use int for simplicity. Moreover, a lot of API functions accept ints, so using int avoids casting.

I agree that choosing the correct datatype is good practice, but I think the main motivation is laziness.

Finally, choosing an integer is closer to the mathematics: unsigned ints don't exist in math, only natural numbers do. And since most programmers have a mathematical background, using an integer is more natural.

Henri
I would not say it's laziness, although laziness has its merits. It's more that, most of the time, I just don't care enough about the int/uint thing to waste brain cycles on the decision, so I just go with int. Hardware is cheap, programmers can be expensive.
SWeko
Programmers are lazy. That's a bad thing. Raymond would say that programmers hate to pay their taxes!
Lorenzo
I would be the first to admit that we C# programmers are lazy, but that isn't necessarily a bad thing.
ChaosPandion
@Lorenzo, I wrote an article in university, stating that a lazy programmer is a good programmer. Mostly it was about optimizing for programmer time instead of machine time.
Eloff
Hmm, most of the bugs attributable to programmers that I've ever seen (or caused) stem from laziness...
Lorenzo
A: 

1) Bad habit. Seriously. Even in C/C++.

Think of the common for pattern:

for( int i=0; i<3; i++ )
    foo(i);

There's absolutely no reason to use a signed integer there. You will never have negative values. But almost everyone will write a simple loop that way, even if it contains (at least) two other "style" errors.
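
The unsigned version compiles just as well, assuming a foo overload that takes a uint:

    for (uint i = 0; i < 3; i++)
        foo(i);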

2) int is perceived as the native type of the machine.

Lorenzo
A: 

I think a big part of the reason is that when C first came out, most of the examples used int for brevity's sake. We rejoiced at not having to write "integer" as we did with Fortran and Pascal, and in those days we routinely used int for mundane things like array indices and loop counters. Unsigned integers were special cases for large numbers that needed that last extra bit. I think it's a natural progression that C habits carried over into C# and other new languages like Python.

ebpower
A: 

I program at a lower-level application layer where ints rarely get above 100, so negative values are not an issue (e.g. for i < myname.length() type stuff); it's just an old C habit - and shorter to type, as mentioned above. However, when interfacing to hardware and dealing with event flags from devices, uint matters in cases where a flag may use the leftmost (highest) bit.
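
A sketch of that flag case (hypothetical flag names): with the top bit in use, uint keeps both the literals and the bit tests straightforward.

    static class DeviceStatus
    {
        // Hypothetical flag values; the device uses the highest bit.
        public const uint Ready = 0x00000001;
        public const uint Fault = 0x80000000; // as an int constant this would need unchecked((int)0x80000000)

        public static bool HasFault(uint status)
        {
            return (status & Fault) != 0;
        }
    }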

Honestly, for 99.9% of my work I could easily use ushort, but int, you know, sounds a lot better than ushort.

ddm