Hi there,

When using an iPhone Objective-C method that accepts CGFloats, e.g. +[UIColor colorWithRed:green:blue:alpha:], is it important to append an f to constant arguments to specify them explicitly as floats? That is, should I always type 0.1f rather than 0.1 in such cases? Or does the compiler automatically convert 0.1 (which is a double by default) to 0.1f (a float) at compile time? I don't want these conversions to happen at run time, because they would unnecessarily hurt performance.
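
For concreteness, here is a minimal sketch of the two styles I mean (the wrapper function names are just mine for illustration):

    #import <UIKit/UIKit.h>

    // Single-precision constants: already the size of CGFloat on the iPhone.
    UIColor *MakeColorExplicit(void) {
        return [UIColor colorWithRed:0.1f green:0.2f blue:0.3f alpha:1.0f];
    }

    // Double-precision constants: these must be narrowed to float on a
    // 32-bit build -- my question is whether that narrowing costs
    // anything at run time.
    UIColor *MakeColorImplicit(void) {
        return [UIColor colorWithRed:0.1 green:0.2 blue:0.3 alpha:1.0];
    }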

Thanks in advance

MrMage

+1  A: 

It's not important; it won't break anything to use a double-precision constant where a single-precision constant is expected.

However, if you have turned on the warning about implicit 64-bit-to-32-bit conversions and are building for 32-bit architectures (which I believe includes the iPhone), then you'll want to use single-precision constants simply to avoid getting that warning.

(Alternatively, you could explicitly set that setting to off, with an architecture condition turning it on for 64-bit architectures. But that currently only matters if you're also using some of your code in a Mac application.)
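
To illustrate the point about compile-time conversion (a sketch; whether the warning actually fires on a given line depends on your exact compiler and settings):

    #import <CoreGraphics/CoreGraphics.h>

    // On a 32-bit iPhone build, CGFloat is typedef'd to float, so a
    // double literal has to be narrowed. The compiler folds that
    // narrowing into the emitted constant; nothing happens at run time.
    static const CGFloat kRed  = 0.1;  // may trip the 64-to-32 warning
    static const CGFloat kRedF = 0.1f; // never warns; same bits either way

You can confirm there is no run-time conversion by looking at the generated assembly (e.g. gcc -S): both initializers come out as the same single-precision constant.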

Peter Hosey
I know that nothing breaks, but when does the conversion from double to float occur? At compile time (which would be fine) or at run time (in which case I'd add the f's)?
MrMage
At compile time
Phil Nash
Thank you. Your comment is the real answer to my question. +1
MrMage