I have a pretty large formula that has to be calculated about 300 times per second. Most of my variables are float, but some are just parameters that go into the formula. In some cases float is not needed and I could use int parameters instead. But I wonder whether mixing int and float types in a calculation would make the system cast types back and forth, costing performance instead of saving it.

+4  A: 

First of all, yes, the system will cast variables so it can do calculations in a common base type. Second, this isn't what you should be most worried about unless you've benchmarked your program and you know that this is your bottleneck.

Just to clarify the first part, though, if you have

double a = 10.0;
int b = 5;
int c = 2;

and your formula is

double d = a + (b / c); // b / c is integer division: 5 / 2 == 2, so d is 12.0, not 12.5

most systems that I know of will use integer math where they can, for the division operation in this case, then switch to floating point math where it's needed. So in short, the decision between integer and floating point math is done at the operator level.

Bill the Lizard
Note that on some CPUs, integer division can actually be /slower/ than fp division. For example, on the Intel Atom, IDIV m32 takes 57 cycles, while FDIV is 25-65. However, those operations are likely on separate functional units, so they could execute simultaneously.
bdonlan
A: 

I would say it depends on what is more important to you. Performance-wise it is better to use float, since that way you won't be converting the ints over and over (about 300 times per second ;) ).

If it is something that is used very often in your code, then it can be worth worrying about; otherwise there's no need to change anything. If you haven't heard of or read "The Fallacy of Premature Optimisation", doing so now may help you. The usual rule of thumb is that 20% of the code accounts for 80% of the running time.

Artur
+1  A: 

I doubt there are good rules of thumb. It can depend heavily on the processor, the compiler, and the specific combinations of operations you are doing. Set up a timer to run before and after your calculation, and do various tweaks and see which runs faster.
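
For example, here is a minimal timing sketch in C (compute() is a made-up stand-in for the actual formula; printing the accumulated sum keeps the compiler from optimizing the loop away):

#include <stdio.h>
#include <time.h>

/* hypothetical stand-in for the real formula */
static double compute(double a, int b, int c) {
    return a + (double) b / c;
}

int main(void) {
    double sum = 0.0;
    clock_t start = clock();

    for (int i = 0; i < 10000000; ++i) {
        sum += compute(10.0, 5, 2);   /* run the formula many times */
    }

    clock_t end = clock();
    printf("sum = %f, elapsed = %f s\n",
           sum, (double) (end - start) / CLOCKS_PER_SEC);
    return 0;
}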

In principle, converting from float to int will often slow you down because an extra instruction is required to do it, but in practice the only way to tell is to measure. Looking at assembler dumps of your various changes can help you build a good intuition, but even then the processor may be pipelining or even reordering instructions in non-obvious ways.

Also, be sure to investigate gcc's floating point optimization switches and ensure you are using the best ones for you.
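
For instance (these are common switches to benchmark, not a blanket recommendation; -ffast-math in particular relaxes strict IEEE 754 behaviour, so check that your results are still acceptable, and -mfpmath=sse applies to x86):

gcc -O2 myprog.c -o myprog
gcc -O2 -ffast-math myprog.c -o myprog
gcc -O2 -fno-math-errno -mfpmath=sse myprog.c -o myprog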

Drew Hoskins
+2  A: 

Depends ... Is the correctness of the result important to you?

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int x = 5;
    int y = 2;
    float z = 12.3f;

    float result = z + (x / y);       /* x / y is integer division: 5 / 2 == 2 */
    printf("%f\n", result);

    result = z + ((float) x) / y;     /* x is converted first, so the division gives 2.5 */
    printf("%f\n", result);

    return EXIT_SUCCESS;
}

C:\Temp> t
14.300000
14.800000

Do what will give you correct results.

Sinan Ünür
+3  A: 

Be careful about assuming that int operations are always faster than float operations. In isolation they may be, but moving data between a float register and an int register is shockingly slow on modern processors. So once your data is in a float, you should keep it there for further computation.
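
As a sketch of the idea (the function and parameter names below are invented for illustration), convert the int parameters to float once, up front, so the hot loop never moves data between int and float:

/* convert the int parameters once, before the hot loop */
void apply_formula(const float *in, float *out, int n, int b, int c) {
    float fb = (float) b;   /* one-time int-to-float conversions */
    float fc = (float) c;
    for (int i = 0; i < n; ++i) {
        out[i] = in[i] + fb / fc;   /* all-float math inside the loop */
    }
}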

Crashworks