While reading through the 70-536 training kit, it states:

The runtime optimizes the performance of 32-bit integer types (Int32), so use those types for counters and other frequently accessed integral variables.

Does this only apply in a 32-bit environment? Does Int64 take over in a 64-bit environment, or is Int32 still the better choice?

A: 

Unless you plan on having the value exceed 2 billion, use an Int32. There is no reason to use the extra space for a perceived performance benefit.

And contrary to what other people in this thread may say, until you measure the benefit of something as small as this, it is only a perceived benefit.
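For what it's worth, a measurement along those lines is easy to sketch. This is not from the training kit or from anyone in this thread - just an illustrative micro-benchmark in which the class name, method names, and iteration count are all made up; treat whatever it prints with the usual micro-benchmark skepticism.

    using System;
    using System.Diagnostics;

    class CounterBenchmark
    {
        const int Iterations = 100000000;

        static void Main()
        {
            // Warm up the JIT so the first timed run isn't skewed by compilation.
            SumInt32(1000);
            SumInt64(1000);

            Stopwatch sw = Stopwatch.StartNew();
            int r32 = SumInt32(Iterations);
            sw.Stop();
            Console.WriteLine("Int32 counter: {0} ms (result {1})", sw.ElapsedMilliseconds, r32);

            sw = Stopwatch.StartNew();
            long r64 = SumInt64(Iterations);
            sw.Stop();
            Console.WriteLine("Int64 counter: {0} ms (result {1})", sw.ElapsedMilliseconds, r64);
        }

        static int SumInt32(int count)
        {
            // Overflow simply wraps in unchecked code; only the loop cost matters here.
            int sum = 0;
            for (int i = 0; i < count; i++) sum += i;
            return sum;
        }

        static long SumInt64(long count)
        {
            long sum = 0;
            for (long i = 0; i < count; i++) sum += i;
            return sum;
        }
    }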

JaredPar
+3  A: 

That's a funny way to put it. The runtime doesn't have much to do with it. The CPU is designed for processing 32-bit integers, which is why they're the most efficient to use.

In a 64-bit environment, it again depends on the CPU. However, on x86 CPUs at least (which, to the best of my knowledge, is the only place .NET runs), 32-bit integers are still the default. The registers have simply been widened so they can hold a 64-bit value, but 32 bits remains the default operand size.

So prefer 32-bit integers, even in 64-bit mode.

Edit: "default" is probably not the right word. The CPU just supports a number of instructions, which define which data types it can process, and which it can not. There is no "default" there. However, there is generally a data size that the CPU is designed to process efficiently. And on x86, in 32 and 64-bit mode, that is 32-bit integers. 64-bit values are generally not more expensive, but they do mean longer instructions. I also believe that at least the 64-bit capable Pentium 4's were significantly slower at 64-bit ops, although on recent CPU's, that part shouldn't be an issue. (But the instruction size may still be)

Values smaller than 32 bits are somewhat more surprising. Yes, there is less data to transfer, which is good, but the CPU still reads at least 32 bits at a time, which means it has to mask out part of the value, so these can actually end up slower.
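One place this shows up even before the CPU gets involved is in C# itself: there is no byte- or short-sized arithmetic at the language level, so narrow operands are widened to int before any arithmetic. A small snippet (purely illustrative, not from the original answer) that demonstrates the widening:

    using System;

    class NarrowArithmetic
    {
        static void Main()
        {
            byte a = 10, b = 20;
            // byte c = a + b;       // does not compile: a + b is of type int
            byte c = (byte)(a + b);  // operands are widened to int, so we must narrow back

            short x = 100, y = 200;
            short z = (short)(x + y); // the same applies to 16-bit values

            Console.WriteLine(c);
            Console.WriteLine(z);
        }
    }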

jalf
There's no "default" integral data type from the CPU's standpoint, "typing" is limited by legal register opcodes.
Eduard - Gabriel Munteanu
True. By "default" I mean the data sizes that the CPU can process most efficiently. Less than 32 bits means more work in load/store operations (because part of the read/write has to be masked out), and larger than 32 bits requires longer instructions (and may, on some CPUs, also be slower in itself).
jalf
There is a .NET distribution for x64.
Cheeso
But 32-bit values can still be processed more efficiently. There's nothing .NET can do about that.
jalf
A: 

http://en.wikipedia.org/wiki/64-bit suggests (you might find a more authoritative source; this one is just the first I found) that Microsoft's "64-bit" offerings use 64-bit pointers with 32-bit integers.

http://www.anandtech.com/guides/viewfaq.aspx?i=112 (and I don't know how trustworthy it is) says,

In order to keep code bloat to a minimum, AMD actually sets the default data operand size to 32-bits in the 64-bit addressing mode. The motivation is that 64-bit data operands are not likely to be needed and could hurt performance; in those situations where 64-bit data operands are desired, they can be activated using the new REX prefix (woohoo, yet another x86 instruction prefix :)).
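A quick way to check the managed side of this (a small sketch added for illustration; the class name is arbitrary): on Microsoft's 64-bit platforms, pointers grow to 8 bytes while int stays at 4, and IntPtr tracks the native pointer size.

    using System;

    class SizeCheck
    {
        static void Main()
        {
            Console.WriteLine("sizeof(int)  = {0}", sizeof(int));   // 4 in both 32- and 64-bit processes
            Console.WriteLine("sizeof(long) = {0}", sizeof(long));  // 8 in both
            Console.WriteLine("IntPtr.Size  = {0}", IntPtr.Size);   // 4 in a 32-bit process, 8 in a 64-bit one
        }
    }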

ChrisW
I have seen this also
Chris Ballance
In MSVC, long long is a 64-bit integer.
DrJokepu
A: 

A 32-bit CPU handles 32-bit integers faster. A 64-bit one handles 64-bit integers faster; just think about it - you either have to shift values around by 32 bits all the time, or waste an extra 32 bits of padding for every 32-bit value, which is essentially the same as using a 64-bit variable without the advantages of a 64-bit variable. Another option would be building extra circuitry into the CPU so that the shifting isn't necessary, but that would obviously increase production costs. The same goes for 32-bit CPUs handling 16-bit or 8-bit variables.

I'm not sure, but I wouldn't be surprised if the 64-bit variant of the .NET Framework were somewhat more optimized for longs - but that's just speculation on my part.
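To make the alignment point above a bit more concrete, here is a rough sketch (illustrative only; the struct and class names are invented) of how a 32-bit field next to a 64-bit field picks up padding. Marshal.SizeOf reports the unmanaged layout, and the managed layout may differ, but the natural-alignment rule is the same idea.

    using System;
    using System.Runtime.InteropServices;

    class PaddingDemo
    {
        struct IntThenLong
        {
            public int Small;  // 4 bytes
            public long Big;   // 8 bytes, kept 8-byte aligned
        }

        struct TwoInts
        {
            public int A;      // 4 bytes
            public int B;      // 4 bytes
        }

        static void Main()
        {
            // Typically 16: 4 bytes of int + 4 bytes of padding + 8 bytes of long.
            Console.WriteLine(Marshal.SizeOf(typeof(IntThenLong)));

            // Typically 8: two ints pack with no padding at all.
            Console.WriteLine(Marshal.SizeOf(typeof(TwoInts)));
        }
    }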

DrJokepu
You are asserting that 64-bit-capable CPUs handle 64-bit integers faster than 32-bit integers - but isn't that just speculation on your part?
ChrisW
ChrisW: You did not read my answer carefully. I didn't speculate; I explained that, to avoid performance degradation due to misalignment, you have to pad your 32-bit integers with an extra 32 bits. This is not speculation; it is a well-known fact.
DrJokepu
+1  A: 

Scott Hanselman posted an article to his blog today that addresses the differences between 32- and 64-bit managed code. To summarize: basically only pointers change size; integers are still 32 bits.

You can find the post here.

ScottS