views:

263

answers:

5

I always like to use the smallest variable size that will do the job. But would I really gain anything by using short or byte integers instead of int when memory is 32-bit word addressable? Does the compiler do something to optimize memory usage?

+4  A: 

No. int was chosen to be the fastest integer type for modern 32/64-bit architectures; using shorter types (short, sbyte) will only cost you performance.

You can sometimes save on memory, but only when using large arrays or lists. And even then it usually doesn't pay.

Calculation with 8 bits:

sbyte a, b = 10, c = 20;
a = (sbyte)(b + c);  // b + c is widened to int, so the cast back is required

The typecast is required and carries a runtime cost.

Henk Holterman
Any comment with the down-vote?
Henk Holterman
But we don't run (directly) on x86. C# widens bytes and shorts for every calculation.
Henk Holterman
BTW, I didn't down or up vote you.... I'm merely commenting... but my guess on the downvote is that your answer was not entirely correct on the performance assertions. Also, you can save lots of memory on structures that are not just in arrays or lists. For example, we work on games with large 3D worlds with literally tens or hundreds of thousands of objects loosely connected in a directed graph (one of our games loads about 200,000 objects). Variable sizes there really do matter for performance and memory.
Adisak
@Henk: I stand corrected on the sbyte assertion. It's good to learn something new every day.
Adisak
+1 for the calculation widening explanation.
Adisak
+11  A: 

For local variables, it probably doesn't make as much sense, but by using smaller integers in structures where you have thousands or even millions of items, you can save a considerable amount of memory.

Adisak
FWIW, on a PC in 32-bit mode, 16-bit ints can actually be a tiny bit slower than 32-bit ints because the machine opcodes require an extra byte to specify that they use 16-bit operands. So for local variables, use the native int type if it works for you. For structures, on the other hand, use the smallest size that works in order to save memory. You may want to group member variables by like sizes, though, so you don't waste space with padding between differently sized members.
Adisak
BTW, here's an article called 'Mastering Structs in C#' describing in depth *EXACTLY* what I was saying about grouping like-sized variables: http://www.vsj.co.uk/articles/display.asp?id=501
Adisak
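The padding effect described above is easy to demonstrate. A minimal sketch (these structs are illustrative, not taken from the linked article); the sizes shown assume the default 4-byte alignment of int on typical 32/64-bit platforms:

```csharp
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct Padded
{
    public byte A;   // 1 byte, then 3 bytes of padding so B is 4-byte aligned
    public int B;    // 4 bytes
    public byte C;   // 1 byte, then 3 bytes of trailing padding
}

[StructLayout(LayoutKind.Sequential)]
struct Packed
{
    public int B;    // 4 bytes
    public byte A;   // 1 byte
    public byte C;   // 1 byte, then 2 bytes of trailing padding
}

class PaddingDemo
{
    static void Main()
    {
        // Same fields, different order: grouping like-sized members
        // shrinks the unmanaged size from 12 bytes to 8.
        Console.WriteLine(Marshal.SizeOf(typeof(Padded)));  // 12
        Console.WriteLine(Marshal.SizeOf(typeof(Packed)));  // 8
    }
}
```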
I liked the link, very informative. I would like to know: how would I get a similar gain with arrays? Would it work like structures?
Mohamed Atia
@Mohamed: Yes, if you have large arrays, then using smaller ints will save memory.
Adisak
+3  A: 

If it is a plain variable, nothing is gained by using a shorter width, and some performance may be lost. The compiler will automatically widen storage to a full processor word, so even if you only declare 16 bits, it likely takes 32 bits on the stack. In addition, the compiler may need to perform certain truncation operations in some cases (e.g. when the field is part of a struct); these can cause a slight overhead.

It really only matters for structs and arrays, i.e. if you have many values. For a struct, you may save some memory, at the expense of the overhead I mention above. Plus, you may be forced to use a smaller size if the struct needs to follow some external layout. For an array, memory savings can be relevant if the array is large.

Martin v. Löwis
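The array savings mentioned in the answers above are straightforward to quantify, since arrays store their elements at the element type's actual width. A minimal sketch:

```csharp
using System;

class ArraySizes
{
    static void Main()
    {
        const int count = 1_000_000;

        // Element storage alone (ignoring the small array object header):
        Console.WriteLine(count * sizeof(int));    // 4000000 bytes
        Console.WriteLine(count * sizeof(short));  // 2000000 bytes
        Console.WriteLine(count * sizeof(sbyte));  // 1000000 bytes

        // A lone local short still occupies a full stack slot, so
        // short[] saves memory where an individual short does not.
        short[] samples = new short[count];
        Console.WriteLine(samples.Length);
    }
}
```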
+3  A: 

Normally, stick with int etc.

In addition to the other answers, there are also cases where you intentionally want to support only a given data size, because it represents some fundamental truth about the data. This matters most when talking to external systems (in particular interop, but also databases, file formats, etc.), and it can be combined with checked arithmetic to spot overflows as early as possible.

Marc Gravell
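The combination of an externally dictated field width and checked arithmetic can be sketched like this (the wire format and method name are hypothetical, invented for illustration):

```csharp
using System;

class CheckedDemo
{
    // Hypothetical wire format: the packet length field is one unsigned byte,
    // so byte is the honest type for it, not a space optimization.
    static byte AddToPacketLength(byte current, int extra)
    {
        checked
        {
            // Throws OverflowException instead of silently wrapping,
            // surfacing the bug at the earliest possible point.
            return (byte)(current + extra);
        }
    }

    static void Main()
    {
        Console.WriteLine(AddToPacketLength(200, 50));   // 250: fits in a byte
        try
        {
            AddToPacketLength(200, 100);                 // 300 does not fit
        }
        catch (OverflowException)
        {
            Console.WriteLine("overflow caught");
        }
    }
}
```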
+2  A: 

To be honest, memory consumption is probably not the most compelling reason to use small ints (in this example). But there is a general principle at stake that says yes, you should use just the memory required for your data structures.

The principle is this: allocate only the width that your data requires, and let overflow bugs surface as soon as they occur; it's an additional debugging technique that is very effective. If you know that a value should never exceed a threshold, then only allocate up to that threshold.

Tim Jarvis
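This debugging-by-width principle only bites if overflow checking is actually on (via a `checked` block or the compiler's /checked switch — C# arithmetic wraps silently by default). A sketch, with an invented domain rule:

```csharp
using System;

class WidthAsAssertion
{
    static void Main()
    {
        // Hypothetical domain rule: a percentage can never exceed 100,
        // so byte (0-255) is more than wide enough; any overflow is a bug.
        byte percent = 100;
        try
        {
            checked
            {
                percent += 200;   // a bug: in unchecked code this would wrap to 44
            }
        }
        catch (OverflowException)
        {
            Console.WriteLine("bug caught by the narrow type");
        }
    }
}
```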