[I might be misunderstanding this entire topic, as I've grown up with languages like Java that let the developer almost entirely ignore processor architecture, except in a few particular cases. Please correct me if I have some of the concepts wrong.]
Reading here, it seems the advice is to use CGFloat instead of, say, float, because it future-proofs my code for different processor architectures (64-bit handles floating point differently). Assuming that is right, why does UISlider, for instance, use float directly (for its value)? Wouldn't it be wrong (or something) for me to read their float and convert it to a CGFloat, since my code wouldn't be right anyway if the architecture changes?
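For concreteness, here is a rough sketch of what I mean, inside a view controller (the action name is made up, and I'm assuming the float-to-CGFloat assignment is a plain implicit conversion):

```objc
// Hypothetical sketch: UISlider exposes its value as a plain float,
// but layout code wants CGFloat.
- (IBAction)sliderChanged:(UISlider *)sender {
    CGFloat fraction = sender.value;    // implicit float -> CGFloat conversion
    CGRect frame = self.view.frame;     // CGRect fields are CGFloats
    frame.origin.x = fraction * 100.0;
    self.view.frame = frame;
}
```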
CGFloat is a typedef: on 32-bit architectures it is defined as float, and on 64-bit architectures as double. That indirection is what lets CGFloat be something different on another architecture or down the road, which is why using it future-proofs your code. The Objective-C frameworks do this with many types; NSInteger is another example.
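A simplified sketch of how the headers define these (the real definitions in CGBase.h and NSObjCRuntime.h have a few more cases, but this is the idea):

```objc
// Simplified: the typedefs switch on whether the target is an LP64 (64-bit) architecture.
#if defined(__LP64__) && __LP64__
typedef double CGFloat;   // 64-bit: CGFloat is a double
typedef long   NSInteger; // 64-bit: NSInteger is a long
#else
typedef float  CGFloat;   // 32-bit: CGFloat is a float
typedef int    NSInteger; // 32-bit: NSInteger is an int
#endif
```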
Although the two can be used interchangeably here, I agree that in the case of UISlider it doesn't appear Apple was dogfooding.
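To illustrate the point, compare the declarations, paraphrased rather than copied from the exact headers:

```objc
// Paraphrased declarations (not the exact headers):
@interface UISlider : UIControl
@property (nonatomic) float value;   // plain float, not CGFloat
@end

struct CGRect {
    CGPoint origin;                  // CGPoint/CGSize fields are CGFloat
    CGSize  size;
};
```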