views: 390 | answers: 9

Which requires more CPU:

int foo = 3;

or typecasting it to an unsigned int?

unsigned int foo = 3;
+18  A: 

They are both represented by the same bits in memory. It's just a question of how the bits are interpreted, which depends on the range of values you want. So, neither is more intensive.

As for working with the numbers, taking multiplication as an example: I'm no expert, but according to Wikipedia, x86 has opcodes for both unsigned multiply (mul) and signed multiply (imul), which would mean that they probably take the same length of time.
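
A quick way to convince yourself of the "same bits" point is to compare the object representations directly. This is just an illustrative sketch, not part of the answer itself:

    #include <cstring>
    #include <iostream>

    int main() {
        int          foo = 3;
        unsigned int bar = 3;

        // Compare the raw object representations byte by byte; for a small
        // non-negative value like 3 they are identical.
        bool same = std::memcmp(&foo, &bar, sizeof foo) == 0;
        std::cout << (same ? "same bits\n" : "different bits\n");
    }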

Joe
Very well explained. And thanks for your quick replies, all!
Midas
+5  A: 

There is no difference in CPU usage.

In fact, you can almost guarantee that those declarations will compile to the same code.

The only difference is that the compiler will remember whether the variable is signed or unsigned, and use that to decide how to implement operations such as comparisons, and which overloaded functions to call.
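
A small sketch of what that signedness tracking affects in practice (the describe overloads are hypothetical, made up only for this illustration):

    #include <iostream>

    // Hypothetical overloads, used only to illustrate overload resolution.
    void describe(int)          { std::cout << "signed overload\n"; }
    void describe(unsigned int) { std::cout << "unsigned overload\n"; }

    int main() {
        int          foo = 3;
        unsigned int bar = 3;

        describe(foo);   // picks describe(int)
        describe(bar);   // picks describe(unsigned int)

        // Comparisons are interpreted differently too: -1 < 1 is true for
        // signed operands, but converted to unsigned, -1 becomes a huge
        // value and the comparison flips.
        std::cout << (-1 < 1) << ' '
                  << (static_cast<unsigned int>(-1) < 1u) << '\n';   // prints "1 0"
    }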

Shmoopty
+1  A: 

In this example, there will be no cost difference at all.

John Knoeller
+3  A: 

My immediate thought is: it is not casting the int to an unsigned int, so there is no difference in speed. Here is the link about the fast types. However, it is more the algorithms and functions that should be optimised, rather than the types.
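
The "fast types" mentioned here are presumably the int_fast/uint_fast aliases from <cstdint> (C99/C++11); a minimal sketch, assuming that header is available:

    #include <cstdint>
    #include <iostream>

    int main() {
        // The *_fast*_t aliases ask for the fastest integer type with at
        // least the requested width; the actual width is platform-dependent.
        std::int_fast16_t  a = 3;
        std::uint_fast32_t b = 3;

        // Widths printed assuming 8-bit bytes.
        std::cout << "int_fast16_t:  " << sizeof(a) * 8 << " bits\n"
                  << "uint_fast32_t: " << sizeof(b) * 8 << " bits\n";
    }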

Vitalij
What you say is easy to understand, so I'll take this as my accepted answer.
Midas
@Midas: If you can't understand the other answers, maybe you shouldn't concern yourself with micro-optimization like this yet.
Matti Virkkunen
+10  A: 

This is the assembly generated by MS VC 2005:

; 9    :  int foo = 3;

 mov DWORD PTR _foo$[ebp], 3

; 10   :  unsigned int bar = 3;

 mov DWORD PTR _bar$[ebp], 3

No difference :)

Igor Zevaka
A: 

No difference. You can tell by looking at the generated assembly language. Even if there were a difference, you should be aware that this is micro-optimization, and the right time to think about that is after all the macro-optimization has been done, and that is your job, not the compiler's.
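
If you want to look at the generated assembly yourself, both major compilers can emit a listing; a quick sketch (file name chosen only for illustration):

    // signedness.cpp -- dump the assembly with, for example:
    //   g++ -S signedness.cpp        (GCC/Clang: writes signedness.s)
    //   cl /FAs signedness.cpp       (MSVC: writes signedness.asm)
    int          make_signed()   { int foo = 3;          return foo; }
    unsigned int make_unsigned() { unsigned int bar = 3; return bar; }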

Mike Dunlavey
A: 

Declaring a plain int without specifying signedness should, in principle, let the compiler know that it is free to choose the most natural kind of integer in terms of size and signedness. In practice, though, there is no speed difference on modern desktop processors (although there might be on certain embedded systems).

jleedev
+1  A: 

When you do division, using unsigned int can be faster than int (of course, only when you don't actually need negative values).
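
The common case behind this is division by a constant power of two, where an unsigned operand lets the compiler use a plain shift; a sketch for comparison:

    unsigned int udiv8(unsigned int x) { return x / 8; }  // typically a single right shift
    int          sdiv8(int x)          { return x / 8; }  // typically a shift plus a sign adjustment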

sevity
A: 

You won't normally notice any difference in practice, so in my opinion the best practice is to choose the type based on the application-specific semantics of the value it is supposed to represent: use unsigned types to represent unsigned quantities and signed types to represent signed ones. Although there are people who oppose this philosophy and prefer to use int wherever they can and unsigned int only when they have to.

As for the actual performance, if you are still concerned about it, then of course it depends on the compiler and hardware platform. One thing that comes to mind, for example, is that C/C++ requires signed integral division to round towards zero, while the cheap shift-based way of dividing by a power of two rounds towards negative infinity, which means that signed division might require more instructions (to adjust the result) than unsigned division. There are other little quirks like that. But in any case, you won't notice it in practice.
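
To make the adjustment concrete, here is roughly what a compiler does for a signed division by 8; a sketch of the usual transformation (assuming an arithmetic right shift for negative values), not any particular compiler's output:

    int div_by_8(int x) {
        // C++ requires x / 8 to round towards zero, but x >> 3 rounds
        // towards negative infinity, so negative values get a bias of 7
        // added before the shift.
        if (x < 0)
            x += 7;
        return x >> 3;
    }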

P.S. I don't know why you mention "typecasting" in your question, while the code sample that follows has no typecasting in it.

AndreyT