views: 194
answers: 4

I am aware that casting ints to floats (and vice versa) is fairly expensive. However, does the compiler automatically do it at compile time for constants in your code? For example, is there any difference between

float y = 123;
float x = 1 / y;

and

float y = 123.f;
float x = 1.f / y;

I see some code that does the latter, but I'm not sure if it's for optimization or for safety reasons (i.e., just making sure that the division is floating-point even if y happens to be an int).

I'm using gcc (since the answer might be compiler-specific).

Also, any pointers to a list of what the compiler can and cannot optimize in general would be appreciated. Thanks!

A: 

Yes, it definitely does, so the two snippets are equivalent. The only thing that matters is the type of the variable you assign to.

sharptooth
If both constants involved are `int`s, the result will be computed as such as well (so `3/2 == 1`), and only then converted to `float`. In this case both code snippets are the same only because in both cases at least one argument is `float`.
Pavel Minaev
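
A minimal sketch of Pavel Minaev's point, assuming a C compiler such as gcc (the variable names here are just for illustration, not from the thread):

#include <stdio.h>

int main(void)
{
    /* Both operands are int constants, so this is integer division
       (3 / 2 == 1); only the result is converted to float. */
    float a = 3 / 2;      /* a == 1.0f */

    /* Making one operand a float constant forces float division,
       which the compiler can also fold at compile time. */
    float b = 3.0f / 2;   /* b == 1.5f */

    printf("%f %f\n", a, b);   /* prints 1.000000 1.500000 */
    return 0;
}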
+1  A: 

Yes, the compiler will do the conversion automatically. Your two blocks of code are identical.

It is not an optimization. Turning off optimization won't make the compiler include the int-to-float conversion in the executable code, unless it's a very poor-quality implementation.

It's not for safety, either. The compiler never does anything "just in case" an operand happens to be of a different type. The compiler knows the types of everything in your code. If you change the type of a variable, everything that uses that variable gets recompiled anyway; the compiler doesn't try to keep everything else untouched and just update the changed sections.

Rob Kennedy
I meant 'safety' in the sense that the programmer might happen to make `y` an int instead of a float, but wants a floating-point result anyway.
int3
Oh. No, definitely not in that case, then. If the programmer *were* to make `y` an int, then the division would be integer division in the first code block. The meaning of the code would be entirely different. If a compiler assumed the programmer wanted floating-point division when both operands are ints, or vice versa, it wouldn't just be a poor-quality implementation; it would be a *wrong* implementation.
Rob Kennedy
I think you misunderstood me... I meant to say that perhaps programmers just write 1.f as a precaution against their own carelessness (as in TokenMacGuy's comment), but I wasn't sure whether it had optimization value as well. But it's okay, you answered my main question anyway.
int3
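
One way to check Rob Kennedy's point that this is neither an optimization nor a safety net (a sketch, assuming gcc is installed; the file and function names are made up for illustration) is to compile both variants to assembly and compare:

/* check.c - compile with: gcc -O0 -S check.c and inspect check.s.
   Even with optimization turned off, the constants are already stored
   as floats; no int-to-float conversion instruction is emitted for
   them, and the two functions typically produce essentially the same
   code. */

float with_int_constants(void)
{
    float y = 123;      /* 123 becomes 123.0f at compile time */
    float x = 1 / y;    /* 1 becomes 1.0f at compile time     */
    return x;
}

float with_float_constants(void)
{
    float y = 123.f;
    float x = 1.f / y;
    return x;
}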
A: 

The `float y = 123` and `float y = 123.f` cases should be the same for most compilers, but `float x = 1/y` and `float x = 1.f/y` will actually generate different results if `y` is an integer.

It really does depend on the compiler, though - some might actually store the constant as an int and convert it each time it gets assigned to a float variable.

Mark Bessey
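
A short sketch of the difference Mark Bessey describes when y is an int (illustrative only, not from the original thread):

#include <stdio.h>

int main(void)
{
    int y = 123;

    /* Both operands are integers: integer division yields 0,
       which is then converted to 0.0f. */
    float a = 1 / y;

    /* 1.f forces y to be converted to float, so this is a
       genuine floating-point division. */
    float b = 1.f / y;

    printf("%f %f\n", a, b);   /* prints 0.000000 0.008130 */
    return 0;
}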
A: 

There are cases where the compiler casts float to int, e.g.

float f;
if (f > 1) ...

In a case like this I have had it happen (with Visual Studio 2008) that the compiler produced code equivalent to

if (int(f) > 1) ...

karx11erx
Note that this is a very particular case, where the optimizer could determine that `(int(f) >= 1)` is logically the same as `(f >= 1.0f)` but faster. You're probably misremembering the exact operator: with `float f = 1.1;`, for instance, `(f > 1)` is true even though `(int(f) == 1)`.
MSalters
I have had exactly such cases, where the compiler created a condition expression for `(f == 1)` that returned true although `f > 1.0f`. I know this for sure because it gave me bloody headaches until I finally figured out what was going wrong. Never trust automatic type conversions.
karx11erx
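
To make MSalters' point concrete (a sketch, not from the thread): rewriting f >= 1.0f as (int)f >= 1 gives the same answer for ordinary values in the int range, but rewriting f > 1.0f as (int)f > 1 does not, so only the former rewrite would be valid.

#include <stdio.h>

int main(void)
{
    float f = 1.1f;

    /* Truncating conversion: 1.1f becomes 1. */
    printf("%d\n", (int)f);                       /* prints 1 */

    /* f > 1.0f is true, but (int)f > 1 is false, so the
       rewrite is not valid for the > operator. */
    printf("%d %d\n", f > 1.0f, (int)f > 1);      /* prints 1 0 */

    /* For >= the two forms agree. */
    printf("%d %d\n", f >= 1.0f, (int)f >= 1);    /* prints 1 1 */

    return 0;
}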