I am aware that converting ints to floats (and vice versa) is fairly expensive at run time. However, does the compiler do the conversion automatically at compile time for constants in your code? For example, is there any difference between
float y = 123;
float x = 1 / y;
and
float y = 123.f;
float x = 1.f / y;
I see some code that does the latter, but I'm not sure whether it's for optimization or for safety (i.e., just making sure that the division is floating point even if y happens to be an int).
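To spell out the safety case I mean, here's a minimal sketch (assuming y were declared as an int instead of a float):

#include <stdio.h>

int main(void)
{
    int y = 123;
    float a = 1 / y;    /* integer division: 1 / 123 == 0, then converted to 0.0f */
    float b = 1.f / y;  /* y is converted to float, so this is about 0.00813f */
    printf("%f %f\n", a, b);  /* prints 0.000000 0.008130 */
    return 0;
}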
I'm using gcc (since the answer might be compiler-specific).
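(For what it's worth, I suppose I could inspect the generated assembly myself with something like the following, where test.c is a hypothetical file holding the snippet above, but I'd still like to know the general rule:)

gcc -O2 -S test.c    # then check test.s to see whether the division was folded into a constant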
Also, any pointers to a list of what the compiler can and cannot optimize in general would be appreciated. Thanks!