views: 375
answers: 5
Any benefit to using byte/short over the int type?

A lot of code just uses either int or double/float.

I know there are the likes of the .NET mobile versions where byte/short come into their own, but for desktop apps is there any point?

When I did C++ work (games programming) I was very aware of each data type I was using, though I don't have that feeling in C#/Java work.

Would there be any benefit to using a byte, say, if I know my loop will never go over the bounds of a byte?

+3  A: 

This is a case of "use the right tool for the job". If you're working with something that represents a byte, you use the byte data type. Much code involving byte streams, for example, requires the use of byte arrays. Conversely, if you're just working with arbitrary integers, you use int or long if they'll be bigger than an int can handle.

John Feminella
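For illustration, a minimal C# sketch of that split: raw stream data in a byte[] buffer, arbitrary counts as int/long. The file name is just a placeholder.

```csharp
using System;
using System.IO;

class Example
{
    static void Main()
    {
        // Raw binary data naturally lives in a byte[] buffer.
        byte[] buffer = new byte[4096];
        using (FileStream stream = File.OpenRead("data.bin")) // hypothetical file
        {
            int bytesRead = stream.Read(buffer, 0, buffer.Length);
            Console.WriteLine("Read {0} bytes", bytesRead);
        }

        // An arbitrary count is just an int; use long if it could exceed int.MaxValue.
        long totalBytesProcessed = 0;
        totalBytesProcessed += 4096;
        Console.WriteLine(totalBytesProcessed);
    }
}
```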
+3  A: 

lots of reasons to use byte - anything that handles raw binary streams (images, files, serialization code, etc) is going to have to talk in terms of byte[] buffers.

I wouldn't use byte just as a counter, though - the CPU can handle int more efficiently.

With short... well when you have an array of them it might save quite a bit of space, but in general I'd just use int.

Marc Gravell
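A rough sketch of the space trade-off mentioned above, assuming a simple console app: a short[] halves the element payload compared to int[], while a lone loop counter gains nothing from being narrower than int.

```csharp
using System;

class SizeDemo
{
    static void Main()
    {
        // Element sizes: 2 bytes for short vs 4 bytes for int.
        Console.WriteLine(sizeof(short)); // 2
        Console.WriteLine(sizeof(int));   // 4

        // For a million elements, the short[] payload is roughly half the size.
        short[] compact = new short[1000000]; // ~2 MB of element data
        int[] regular  = new int[1000000];    // ~4 MB of element data

        for (int i = 0; i < compact.Length; i++) // int counter: what the CPU handles best
        {
            compact[i] = (short)(regular[i] + 1);
        }
    }
}
```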
+5  A: 

A single byte compared to a long won't make a huge difference memory-wise, but when you start having large arrays, these 7 extra bytes will make a big difference.

What's more, data types help communicate developers' intent much better: when you encounter a `byte length` field, you know for sure that `length`'s range is that of a byte.

Anton Gogolev
minor: `byte` vs `long` is 7 extra bytes, not 3 extra bytes.
Marc Gravell
Whoops, my bad. Even more so, 7 bytes is huge!
Anton Gogolev
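A back-of-the-envelope illustration of that per-element difference (numbers are approximate; real arrays also carry object-header overhead):

```csharp
using System;

class ArrayCost
{
    static void Main()
    {
        const int count = 10000000; // ten million elements

        // 1 byte vs 8 bytes per element: the 7-byte gap adds up fast.
        long byteArrayPayload = count * (long)sizeof(byte); // ~10 MB
        long longArrayPayload = count * (long)sizeof(long); // ~80 MB
        Console.WriteLine("byte[]: {0} bytes, long[]: {1} bytes",
                          byteArrayPayload, longArrayPayload);

        // Declaring a value as byte also documents its range (0..255) to readers.
        byte percentage = 100;
        Console.WriteLine(percentage);
    }
}
```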
+2  A: 

There's a small performance loss when using data types that are smaller than the CPU's native word size. When a CPU needs to add two bytes together, it loads them into (32-bit) word-sized registers, adds them, adjusts the result (truncating the three most significant bytes and calculating carry/overflow), and stores it back into a byte.

That's a lot of work. If you're going to use a variable in a loop, don't make it smaller than the CPU's native word.

These data types exist so that code can handle structures that contain them, whether because of size constraints or because of legacy APIs and whatnot.

Dave Van den Eynde
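C# surfaces this directly: arithmetic on byte operands is performed as int, so the result has to be cast back down. A small sketch of that, plus a loop counter left at the native int width:

```csharp
using System;

class NarrowArithmetic
{
    static void Main()
    {
        byte a = 200;
        byte b = 100;

        // byte + byte is evaluated as int, so an explicit cast (and the
        // truncation it implies) is needed to get back to a byte.
        byte narrow = (byte)(a + b);   // wraps around to 44
        int wide = a + b;              // stays 300

        Console.WriteLine("{0} {1}", narrow, wide);

        // Loop counters are best left as plain int.
        for (int i = 0; i < 10; i++)
        {
            Console.Write(i);
        }
        Console.WriteLine();
    }
}
```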
+2  A: 

What I think this question is getting at is that 10+ years ago it was common practice to think about what values your variables needed to store. If, for example, you were storing a percentage (0..100), you might use a byte (-128 to 127 signed, or 0 to 255 unsigned) as it was adequately large for the job and thus seen as less "wasteful".

These days, however, such measures are unnecessary. Memory typically isn't at that much of a premium, and if it were you'd probably be defeated by modern computers aligning things on 32-bit word boundaries (if not 64-bit) anyway.

Unless you're storing arrays of thousands of these things, these kinds of micro-optimizations are (now) an irrelevant distraction.

Frankly, I can't remember the last time I used a byte for anything other than raw data, and I can't think of the last time I used a short for, well, anything.

cletus