Which requires more CPU:
int foo = 3;
or typecasting it to an unsigned int?
unsigned int foo = 3;
They are both represented by the same bits in memory. It's just a question of how the bits are interpreted, which depends on the range of values you want. So, neither is more intensive.
As for working with the numbers, taking multiplication as an example: I'm no expert, but according to Wikipedia, x86 has opcodes for both unsigned multiply (mul) and signed multiply (imul), which suggests they take roughly the same length of time.
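To make that concrete, here is a minimal sketch (the function names are mine, for illustration only): for a same-width result, the low bits of signed and unsigned multiplication are identical on two's-complement machines, so compilers typically emit the same instruction for both.

int          smul(int a, int b)                   { return a * b; }  // typically compiles to imul on x86
unsigned int umul(unsigned int a, unsigned int b) { return a * b; }  // typically the same imul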
There is no difference in CPU usage.
In fact, you can almost guarantee that those declarations will compile to the same code.
The only difference is that the compiler will remember whether the variable is signed or unsigned, and use that to decide how to implement operations such as comparisons and which overloaded functions to call.
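A minimal sketch of that point (the helper names here are illustrative, not from the question): the two declarations cost the same, but the recorded signedness changes overload resolution and how comparisons behave.

#include <cstdio>

void describe(int)          { std::puts("signed overload"); }
void describe(unsigned int) { std::puts("unsigned overload"); }

int main() {
    int foo = 3;            // typically the same store instruction...
    unsigned int bar = 3;   // ...as this one

    describe(foo);          // overload resolution picks the signed version
    describe(bar);          // and the unsigned version here

    // Comparisons are interpreted differently as well: -1 converted to
    // unsigned is a very large value, so the second test is false.
    std::printf("%d\n", foo > -1);                        // prints 1
    std::printf("%d\n", bar > static_cast<unsigned>(-1)); // prints 0
    return 0;
}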
My immediate thought is that nothing is being cast from int to unsigned int here, so there is no difference in speed. Here is the link about the fast types. In any case, it is the algorithms and functions that should be optimised rather than the types.
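If the "fast types" mentioned are the fixed-width aliases from <cstdint> (an assumption on my part, since the link is not reproduced above), a short sketch of how they are used:

#include <cstdint>
#include <cstdio>

int main() {
    std::int_fast32_t  s = -3;  // at least 32 bits, signed, whatever width is fastest on the target
    std::uint_fast32_t u =  3;  // at least 32 bits, unsigned, same idea
    std::printf("%zu %zu\n", sizeof s, sizeof u);  // often prints "8 8" on x86-64
    return 0;
}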
This is the assembly generated by MS VC 2005:
; 9 : int foo = 3;
mov DWORD PTR _foo$[ebp], 3
; 10 : unsigned int bar = 3;
mov DWORD PTR _bar$[ebp], 3
No difference :)
No difference. You can tell by looking at the generated assembly language. Even if there were a difference, you should be aware that this is micro-optimization. The right time to think about that is after all the macro-optimization has been done, and that is your job, not the compiler's.
Declaring a plain int without specifying signedness should, in principle, let the compiler know that it is free to choose the most natural kind of integer in terms of size and signedness. In practice, though, there is no speed difference on modern desktop processors (although there may be on certain embedded systems).
When you do division, using unsigned int is faster than int (provided, of course, that you do not need negative values).
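A hedged sketch of why that can be true (the function names are mine, for illustration): dividing an unsigned value by a constant power of two is a single logical shift, while the signed version needs extra instructions to round toward zero for negative inputs.

unsigned int udiv8(unsigned int x) { return x / 8; }  // typically a single logical shift (shr)
int          sdiv8(int x)          { return x / 8; }  // a shift plus a fix-up for negative x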
You won't normally notice any difference in practice, so in my opinion the best practice is to choose the type based on the application-specific semantics of the value it is supposed to represent: use unsigned types to represent unsigned quantities and signed types to represent signed ones. There are, however, people who oppose this philosophy and prefer to use int wherever they can and unsigned int only when they have to.
As for the actual performance, if you are still concerned about it, it of course depends on the compiler and hardware platform. One thing that comes to mind, for example, is that the semantics of signed integer division required by C/C++ (round towards zero) differ from what the cheap shift-based implementation provides (round towards negative infinity), which means that for signed types a division by a constant power of two may require more instructions (to adjust the result) than for unsigned types. There are other little quirks like that. But in any case, you won't notice it in practice.
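A small worked example of that rounding mismatch (assuming a typical two's-complement target, where right-shifting a negative value is an arithmetic shift):

#include <cstdio>

int main() {
    int x = -7;
    std::printf("%d\n", x / 2);   // -3 : C/C++ signed division truncates toward zero
    std::printf("%d\n", x >> 1);  // -4 : an arithmetic right shift rounds toward negative infinity
    return 0;
}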
P.S. I don't know why you mention "typecasting" in your question, when the code sample that follows has no typecasting in it.