[I might be misunderstanding this entire topic, as I've grown up with languages that allow the dev to almost entirely ignore processor architecture, like Java, except in some particular cases. Please correct me if I have some of the concepts wrong.]

Reading here it seems that the advice is to use CGFloat instead of, say, float, because it future-proofs my code for different processor architectures (64-bit handles float differently). Assuming that is right, then why does UISlider, for instance, use float directly (for the value)? Wouldn't it be wrong (or something) for me to read their float and convert it to a CGFloat, because in any case my code is not right if the architecture changes anyway?


CGFloat is a typedef: float on 32-bit architectures and double on 64-bit ones. Down the road it may change again, which is why it future-proofs your code. Objective-C does this with many types; NSInteger is another example.

Although they can be used interchangeably, I agree that in the case of UISlider it doesn't appear Apple was dogfooding its own CGFloat type.

Jason McCreary
+1 so what do you think the implications are for code that reads from and writes to a value on a slider? Just use float?
Yar
I say if it's a `float` use `float` ;) Consistency is what matters here. Ideally it would be `CGFloat` throughout.
Jason McCreary