844 views, 7 answers

I cannot find this in the Apple docs, so: what does the "f" after the numbers here indicate? Is this from C or Objective-C? Is there any difference if I don't add it to a constant number?

CGRect frame = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f);

Can you explain why I wouldn't just write:

CGRect frame = CGRectMake(0, 0, 320, 50);
+1  A: 

It's a C thing - floating point literals are double precision (double) by default. Adding an f suffix makes them single precision (float).

You can use ints to specify the values here and in this case it will make no difference, but using the correct type is a good habit to get into - consistency is a good thing in general, and if you need to change these values later you'll know at first glance what type they are.
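
To see the default types concretely, here's a minimal C sketch (not from the original answer); the 8 and 4 in the comments assume the usual 64-bit double / 32-bit float:

#include <stdio.h>

int main(void) {
    /* An unsuffixed floating point literal is a double; the f suffix makes it a float. */
    printf("sizeof(0.3)  = %zu\n", sizeof(0.3));   /* typically 8 */
    printf("sizeof(0.3f) = %zu\n", sizeof(0.3f));  /* typically 4 */
    return 0;
}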

Paul R
+1  A: 

It is almost certainly from C and reflects the desire to use a 'float' rather than a 'double' type. It is similar to suffixes such as L on numbers to indicate they are long integers. You can just use integers and the compiler will auto convert as appropriate (for this specific scenario).
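
To illustrate that implicit conversion, here's a small sketch where make_frame is just a hypothetical stand-in for CGRectMake, not a real API:

#include <stdio.h>

/* Hypothetical stand-in for CGRectMake, taking float parameters. */
static void make_frame(float x, float y, float w, float h) {
    printf("%f %f %f %f\n", x, y, w, h);
}

int main(void) {
    make_frame(0, 0, 320, 50);             /* ints are implicitly converted to float */
    make_frame(0.0f, 0.0f, 320.0f, 50.0f); /* explicit float literals, same values */
    return 0;
}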

tyranid
+6  A: 
CGRect frame = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f);

uses float constants. (The constant 0.0 is a double in Objective-C, as in C; putting an f on the end - 0.0f - makes it a (32-bit) float constant.)

CGRect frame = CGRectMake(0, 0, 320, 50);

uses ints which will be automatically converted to floats.

In this case, there's no (practical) difference between the two.

Frank Shearar
Theoretically, the compiler may not be smart enough to convert them to float at compile time, and would slow execution down with four int->float conversions (which are among the slowest casts). Although it hardly matters in this case, it's always better to add the f suffix where it's needed: in an expression, a constant without the right suffix may force the whole expression to be converted to double, and if that happens in a tight loop the performance hit can be noticeable.
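A minimal sketch of that promotion effect (scale_d and scale_f are hypothetical names, not from the question):

/* The unsuffixed 0.5 forces x to be converted to double, the multiply to be
   done in double precision, and the result converted back to float. */
float scale_d(float x) { return x * 0.5;  }

/* With the f suffix the whole expression stays in single precision. */
float scale_f(float x) { return x * 0.5f; }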
Matteo Italia
+2  A: 

From C. It marks the literal as a float constant. You can omit both "f" and ".0" and use ints in your example because ints are implicitly converted to floats.

kemiisto
+6  A: 

Sometimes there is a difference.

#include <assert.h>
int main(void) {
    float f = 0.3;        /* OK, throw away bits to convert 0.3 from double to float */
    assert ( f == 0.3 );  /* not OK, f is converted from float to double and the value
                             nearest to 0.3 depends on how many bits you use to represent it */
    assert ( f == 0.3f ); /* OK, comparing two floats, although == is finicky */
    return 0;
}
Potatoswatter
+1  A: 

A floating point literal in your source code is parsed as a double. Assigning it to a variable of type float will lose precision - a lot of precision, since a double carries roughly 15-16 significant decimal digits and a float only about 7. The "f" postfix lets you tell the compiler: "I know what I'm doing, this is intentional. Don't bug me about it".

The odds of producing a bug aren't that small, btw. Many a program has keeled over on an ill-conceived floating point comparison or on the assumption that 0.1 is exactly representable.
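
A small demonstration of both points, assuming IEEE 754 floats and doubles (which is what essentially every current platform uses):

#include <stdio.h>

int main(void) {
    float f = 0.1;                /* the double literal 0.1 is rounded to the nearest float */
    printf("%.17g\n", 0.1);       /* 0.10000000000000001 - the nearest double to 0.1 */
    printf("%.17g\n", (double)f); /* 0.10000000149011612 - the nearest float to 0.1 */
    printf("%d\n", f == 0.1);     /* 0 - the comparison promotes f to double, and they differ */
    return 0;
}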

Hans Passant
+2  A: 

When in doubt, check the assembler output. For instance, write a small, minimal snippet like this:

#import <Cocoa/Cocoa.h>

void test() {
  CGRect r = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f);
  NSLog(@"%f", r.size.width);
}

Then compile it to assembler with the -S option.

gcc -S test.m

Save a copy of the assembler output (the test.s file), remove the .0f from the constants, and repeat the compile command. Then diff the new test.s against the saved one. That should show whether there are any real differences. I think too many people have a vision of what they think the compiler does, but at the end of the day one should know how to verify any theories.
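
If you don't have the Cocoa headers available, a plain-C stand-in works for the same experiment (make_rect and struct rect here are hypothetical substitutes for CGRectMake and CGRect); compile it with gcc -S test.c instead:

#include <stdio.h>

/* Hypothetical stand-in for CGRect/CGRectMake so the experiment works in plain C. */
struct rect { float x, y, width, height; };

static struct rect make_rect(float x, float y, float width, float height) {
    struct rect r = { x, y, width, height };
    return r;
}

void test(void) {
    struct rect r = make_rect(0.0f, 0.0f, 320.0f, 50.0f);
    printf("%f\n", r.width);
}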

epatel
+1 for "check the assembly output"! Very useful tip for finding out how things work on the metal.
Frank Shearar