tags:

views: 147

answers: 7

Considering the basic data types like char, int, float, double, etc. in any standard language (C/C++, Java, etc.):

Is there anything like "operating on integers is faster than operating on characters"? By operating I mean assignment, arithmetic operations, comparison, etc. Are some data types slower to operate on than others?

+5  A: 

For almost anything you're doing this has almost no effect, but purely for informational purposes, it is usually fastest to work with data types whose size is the machine word size (i.e. 32 bits on x86 and 64 bits on amd64). Additionally, SSE/MMX instructions give you benefits as well if you can group values and operate on them at the same time.
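
As a minimal sketch of that grouping idea (assuming an x86 target compiled with SSE support; the function name is just for illustration):

    #include <xmmintrin.h>  // SSE intrinsics (x86/x86-64)

    // Add two arrays of 4 floats each in a single 128-bit SSE operation.
    void add4(const float* a, const float* b, float* out) {
        __m128 va = _mm_loadu_ps(a);    // load 4 floats (unaligned)
        __m128 vb = _mm_loadu_ps(b);
        __m128 vr = _mm_add_ps(va, vb); // 4 additions in one instruction
        _mm_storeu_ps(out, vr);         // store 4 results
    }

In practice a vectorizing compiler will often generate code like this from a plain loop, so it mostly matters that your data is laid out so it can.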

Paul Betts
+2  A: 

Yes, some data types are definitely slower than others. For example, floats are more complicated than ints and thus may incur additional penalties when doing divides and multiplies. It all depends on how your hardware is set up and what kind of instructions it supports.

Data types that are longer than the machine word size will also be slower, because it takes more cycles to perform operations on them.
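
A rough way to see this yourself (a hypothetical micro-benchmark, not a rigorous measurement; on a 32-bit target the 64-bit division typically expands into a multi-instruction or library-call sequence):

    #include <chrono>
    #include <cstdint>
    #include <iostream>

    // Time n dependent divisions for a given integer type.
    template <typename T>
    double time_divides(std::size_t n) {
        volatile T div = static_cast<T>(7);          // volatile: keep the real divide
        volatile T acc = static_cast<T>(123456789);  // volatile: keep the loop alive
        auto start = std::chrono::steady_clock::now();
        for (std::size_t i = 0; i < n; ++i)
            acc = acc / div + static_cast<T>(1000003);
        auto stop = std::chrono::steady_clock::now();
        return std::chrono::duration<double>(stop - start).count();
    }

    int main() {
        const std::size_t n = 100000000; // 100 million iterations
        std::cout << "32-bit: " << time_divides<std::uint32_t>(n) << " s\n";
        std::cout << "64-bit: " << time_divides<std::uint64_t>(n) << " s\n";
    }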

Andrew Keith
So by that do you mean there is no difference between operating on an int and a char?
EFreak
+5  A: 

Rules for this are a bit like rules for English spelling and/or grammar. The rules are broken at least as often as they're followed.

Just for example, for years "everybody has known" that floating point operations are slower than integers, especially for more complex operations like multiply and divide. In reality, some processors do some integer operations (especially multiplication and division) by converting the operands to floating point, doing the operation in floating point, then converting the result back to an integer. As you'd expect from that, the floating point operation is actually faster (though only a little bit).

Most of the time, however, it doesn't matter much -- in a lot of cases, it's quite reasonable to think of the operations on the processor itself as free, and concern yourself primarily with optimizing your use of bandwidth to memory. Of course, doing that well is often even harder...
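
For example, the two loops below do identical arithmetic, but the first walks memory sequentially while the second strides across it; on large matrices the first is typically several times faster purely because of cache behavior (a minimal sketch):

    #include <vector>

    // Row-major traversal: consecutive addresses, cache-friendly.
    long long sum_row_major(const std::vector<int>& m, int rows, int cols) {
        long long s = 0;
        for (int r = 0; r < rows; ++r)
            for (int c = 0; c < cols; ++c)
                s += m[r * cols + c];
        return s;
    }

    // Column-major traversal: stride of `cols` elements, cache-hostile.
    long long sum_col_major(const std::vector<int>& m, int rows, int cols) {
        long long s = 0;
        for (int c = 0; c < cols; ++c)
            for (int r = 0; r < rows; ++r)
                s += m[r * cols + c];
        return s;
    }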

Jerry Coffin
I'd -1 for your prescriptivist views on grammar, but this is not LinguisticsOverflow :P Until that comes along, check out: http://tinyurl.com/5vcpnx
asveikau
@asveikau: I'd be prescriptivist if I *cared* about the rules being broken. Knowing that rules exist and are broken requires only a minimal awareness of reality. :-)
Jerry Coffin
@asveikau: I don't see any prescriptivist views on grammar in his answer? To say that there exist rules, and that in practice those rules are often broken, is hardly more than a description.
Thomas Padron-McCarthy
+1 for the last paragraph. In today's world it is all about the memory. In particular, locality is very important.
Tom Hawtin - tackline
+1  A: 

Depending on what you do, the difference can be quite large, especially when working with float versus double versus long double.

On modern processors it comes down to SIMD instructions, which have a fixed width, most commonly 128 bits, so four float values versus two double values per instruction.

However, some processors only have 32-bit SIMD instructions (PPC), and GPU hardware has a factor-of-eight performance difference between float and double.

When you add trigonometric, exponential, and square root functions into the mix, floats are going to have better performance overall, given a number of factors.
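
A minimal sketch of that width difference using SSE2 intrinsics (assuming an x86 target; the function names are illustrative, and each pointer must reference a full vector's worth of elements):

    #include <emmintrin.h>  // SSE2: float and double vectors

    // A 128-bit register holds 4 floats but only 2 doubles, so each
    // instruction performs twice as many float operations.
    void scale4f(float* x) {   // operates on x[0..3]
        _mm_storeu_ps(x, _mm_mul_ps(_mm_loadu_ps(x), _mm_set1_ps(2.0f)));
    }
    void scale2d(double* x) {  // operates on x[0..1]
        _mm_storeu_pd(x, _mm_mul_pd(_mm_loadu_pd(x), _mm_set1_pd(2.0)));
    }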

aaa
A: 

This answer relates to the Java case (only).

The literal answer is that the relative speed of the primitive types and operators depends on your processor hardware and your JVM implementation.

But a better answer is that it usually doesn't make a lot of difference to performance what representations you use. Indeed, any clever data type optimizations you do to make your code run fast on your current machine / JVM may turn out to be anti-optimizations on a different machine / JVM combination.

In general, it is better to pick a data type that represents your data in a correct and natural way, and leave it to the compiler to sort out the details. However, if you are creating large arrays of a primitive type, it is worth knowing that Java uses compact representations for arrays of boolean, byte and short.

Stephen C
+1  A: 

Almost all of the answers on this page are mostly right. The answer, however, varies wildly depending upon your hardware, language, compiler, and VM (in managed languages like Java). On most CPUs, you will get the best performance by doing operations on a data type that fits the native operand size of your CPU. In some cases, though, the compiler will optimize this for you.

On most modern desktop CPUs the difference between floating point and integer operations has become pretty trivial. However, on older hardware and a lot of embedded systems the difference in all of these factors can still be really, really big.

The important thing is to know the specifics of your target architecture and your tools.

Russell Newquist
A: 

Most efficiency questions like this are considered premature optimization or micro-optimization. More time should be spent ensuring correctness than worrying about performance.

That being said, the truth is out there. Profile your program and find out where the bottlenecks are. Many times, the bottlenecks have nothing to do with data type size, but rather with restrictions on how the data types are used. I cut a program's run time from 1 hour to 5 minutes by changing it from reading one byte at a time to reading 1 megabyte at a time. Generally, the speed of I/O will outweigh the time the processor spends playing with the data.
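
A minimal sketch of that kind of change (the function name and buffer size are illustrative):

    #include <cstdio>
    #include <vector>

    // Read a file 1 MB at a time instead of one byte per call.
    // The per-call overhead is paid once per megabyte, not once per byte.
    long long count_bytes(const char* path) {
        std::FILE* f = std::fopen(path, "rb");
        if (!f) return -1;
        std::vector<unsigned char> buf(1 << 20); // 1 MB buffer
        long long total = 0;
        std::size_t got;
        while ((got = std::fread(buf.data(), 1, buf.size(), f)) > 0)
            total += static_cast<long long>(got);
        std::fclose(f);
        return total;
    }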

Thomas Matthews