tags:
views: 103
answers: 4

Should you use Int32 even in places where you know the value is not going to be higher than 32,767?

I'd like to keep memory usage down; however, using casts everywhere just to perform simple arithmetic is getting annoying.

short a = 1;

short result = a + 1; // Error: cannot implicitly convert type 'int' to 'short'

short result = (short)(a + 1); // works, but looks ugly when done lots of times

What would be better for overall application performance?

+8  A: 

As far as I know, it is good practice to use int whenever possible. The size of int matches the word size on many architectures, so there may be a slight performance penalty when using short in some arithmetic operations.
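If in doubt, measure. A rough micro-benchmark along these lines (the array size is arbitrary, and the exact numbers will vary by machine and JIT) can show whether the difference matters for your workload:

using System;
using System.Diagnostics;

class ShortVsIntSum
{
    static void Main()
    {
        const int N = 10000000;
        short[] shorts = new short[N];
        int[] ints = new int[N];

        Stopwatch sw = Stopwatch.StartNew();
        int sumShorts = 0;
        for (int i = 0; i < N; i++)
            sumShorts += shorts[i];   // each short is widened to int for the add
        sw.Stop();
        Console.WriteLine("short[]: {0} ms (sum {1})", sw.ElapsedMilliseconds, sumShorts);

        sw = Stopwatch.StartNew();
        int sumInts = 0;
        for (int i = 0; i < N; i++)
            sumInts += ints[i];
        sw.Stop();
        Console.WriteLine("int[]:   {0} ms (sum {1})", sw.ElapsedMilliseconds, sumInts);
    }
}

Note that the short elements get promoted to int for the addition anyway, which is part of why narrower types rarely win on raw arithmetic.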

n535
I looked into this years ago when a 16-bit app had been ported to 32-bit Windows. Turns out (at least on the Intel architecture at the time), 16-bit math was actually slower on their 32-bit processors. So, unless you've got a huge array of them (i.e., the memory savings outweigh the performance losses), just use `int`.
Stephen Cleary
+4  A: 

If you are creating large arrays, then it can save a considerable amount of memory to use narrower types (fewer bytes), as the size of the array will be roughly "type width" * "number of elements" + "overhead".
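For example, a quick (unscientific) way to see the difference for a large array (the element count here is arbitrary):

using System;

class ArrayMemory
{
    static void Main()
    {
        const int N = 10000000;

        long before = GC.GetTotalMemory(true);
        short[] shorts = new short[N];   // ~2 bytes per element + overhead
        long afterShorts = GC.GetTotalMemory(true);
        int[] ints = new int[N];         // ~4 bytes per element + overhead
        long afterInts = GC.GetTotalMemory(true);

        Console.WriteLine("short[{0}]: ~{1} bytes", N, afterShorts - before);
        Console.WriteLine("int[{0}]:   ~{1} bytes", N, afterInts - afterShorts);
        GC.KeepAlive(shorts);
        GC.KeepAlive(ints);
    }
}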

However, I'm pretty sure that by default, fields in classes and structs are aligned on whole-word boundaries, e.g. 32 bit = 4 bytes, so a single short field will still be packed into a 4-byte slot.

You can, however, manually configure packing in structs/classes by using StructLayoutAttribute:

http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.structlayoutattribute(VS.71).aspx
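As a sketch of what that looks like (the type and field names here are just for illustration), comparing the marshalled sizes makes the padding visible:

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct DefaultLayout
{
    public short A;
    public int B;
    public short C;
}

[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct PackedLayout
{
    public short A;
    public int B;
    public short C;
}

class PackingDemo
{
    static void Main()
    {
        Console.WriteLine(Marshal.SizeOf(typeof(DefaultLayout))); // typically 12, with padding around the shorts
        Console.WriteLine(Marshal.SizeOf(typeof(PackedLayout)));  // 8
    }
}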

As with any performance-related issue: "don't think, measure".

From an API perspective, it can be very annoying to have to keep casting between shorts and ints, as you will find that most APIs use int.
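A trivial illustration (the list here is just an example): widening a short to an int is implicit, but getting the value back into a short always needs the cast, and that adds up:

using System;
using System.Collections.Generic;

class ApiFriction
{
    static void Main()
    {
        List<int> items = new List<int>();
        short id = 42;

        items.Add(id);                  // fine: short -> int is an implicit widening
        short first = (short)items[0];  // int -> short requires an explicit cast
        Console.WriteLine(first);
    }
}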

chibacity
What do you make of this statement from MSDN "The fewer objects allocated on the heap, the less work the garbage collector has to do. When you allocate objects, do not use rounded-up values that exceed your needs, such as allocating an array of 32 bytes when you need only 15 bytes." ?
Sir Psycho
The wording is a little odd, but really it's just advocating allocating only what you need (I think they are referring to a byte[] array). Perhaps this is not the best example to get the point across - keeping an eye on the number of objects allocated can be important. A good indicator that you have problems with heap management and garbage collection is the performance counter "% Time in GC" for your process. If this is high, say 50% or more, then your program is spending most of its time just garbage collecting.
chibacity
A: 

Unless you're creating hundreds of thousands of members, the space savings of a handful of bytes here and there won't really matter on any modern machine. I think the maxim about premature optimization applies here. It might make you feel better, but you're not getting anything particularly measurable out of it. IMO - only optimize memory usage if you're actually using a lot of memory.

Donnie
+1  A: 

The three reasons for me to use an integer datatype smaller than Int32:

  1. A system with severe memory constraints.
  2. Huge arrays or similar.
  3. When I think it makes the purpose of the code easier to read and understand.

I mostly do normal Windows apps, so the only one of those reasons that normally matters to me is the third one.

ho1