Reading the documentation, I would have to do something ugly like this:

NSLocale *usLocale = [[[NSLocale alloc] initWithLocaleIdentifier:@"en_US"] autorelease];
NSDecimalNumber *number = [NSDecimalNumber decimalNumberWithString:@"0.00001" locale:usLocale];

But: isn't there a nicer way of telling the NSDecimalNumber class to look for the period (.) and nothing else? Locales feel so unsafe to me. What if one day a new president decides it would be cool to change the decimal symbol from . to ,? Then my app breaks. Sure, that's not very likely to happen. But there's a small chance it could. Enough to scare me away from using a locale for this ;-)

Any ideas? Or is that my only option? And if it is the only option: is the above approach correct, or is there a better one?

And, if I understand it correctly, the NSDecimalNumber object will read that string, parse it, and build the mantissa, exponent, and isNegative values from it for internal use, right?

EDIT: I should have mentioned that I add those values programmatically / manually (in my code), they're not user input.

+2  A: 

Locales are a Good Thing; if the president were to change the decimal point from . to , then the en_US locale would be updated in the next release of the OS -- and since you don't explicitly mention the locale's decimal point in the above code, you're OK there. That being said, you might want to use the system locale instead of specifying en_US explicitly, since en_US will be wrong anywhere outside the US.
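
For what it's worth, a minimal sketch of that (the userInput variable is hypothetical; this assumes the string really does come from the user):

// Parse with whatever locale the device is currently set to, so the
// decimal separator matches what the user actually typed.
NSDecimalNumber *number = [NSDecimalNumber decimalNumberWithString:userInput
                                                             locale:[NSLocale currentLocale]];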

If you're worried about the string, it should be coming from a user, in which case they'll use their locale-specific decimal point and everything will match up. If you're trying to initialize an NSDecimalNumber this way, I suppose you could, but I imagine there are easier ways to skin that cat.

And yes, if you get it right, the result will be an NSDecimalNumber object whose value is equivalent to the string you passed in.

fbrereto
+2  A: 

Specifying the locale is only necessary if you're passing in a string that isn't a valid decimal number in the default locale. If your users are creating the strings, it's highly likely that they'll format them properly for the locale they're using.

Using locales is far safer than not using them. Trying to parse the "." out of "1,50" is not likely to succeed.
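
To illustrate (a rough sketch; de_DE is just an example of a comma-decimal locale, and I wouldn't rely on the exact result of the mismatched parse):

NSLocale *germanLocale = [[[NSLocale alloc] initWithLocaleIdentifier:@"de_DE"] autorelease];
// In a comma-decimal locale, "1,50" parses as one and a half.
NSDecimalNumber *a = [NSDecimalNumber decimalNumberWithString:@"1,50" locale:germanLocale];

NSLocale *usLocale = [[[NSLocale alloc] initWithLocaleIdentifier:@"en_US"] autorelease];
// In en_US the comma isn't a decimal separator, so you won't get 1.5 back here.
NSDecimalNumber *b = [NSDecimalNumber decimalNumberWithString:@"1,50" locale:usLocale];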

And considering that

[NSDecimalNumber decimalNumberWithString:@"0.00001" locale:usLocale]

is described in the docs as

Creates and returns an NSDecimalNumber object whose value is equivalent to that in a given numeric string.

I think it's a safe bet that it creates and returns an NSDecimalNumber whose value is equivalent to that of the string.

Terry Wilcox
+1  A: 

Locales are a Good Thing. Changeable locales applied to static internal data are a Bad Thing, and you're right (if possibly paranoid :D) to be concerned about applying a locale to data that you (rather than the user) provide. Here are some solutions that do not rely on applying locales to your internal data.

// 0.00001 expressed directly as mantissa and exponent -- no string parsing involved.
number = [NSDecimalNumber decimalNumberWithMantissa:1 exponent:-5 isNegative:NO];
// Or go through a double and NSNumber's -decimalValue.
number = [NSDecimalNumber decimalNumberWithDecimal:[[NSNumber numberWithDouble:0.00001] decimalValue]];

The second would be very easy to turn into an NSDecimalNumber category implementing -decimalNumberWithDouble: (see the sketch below). Probably quite useful.
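
Something along these lines, as an untested sketch (the category and method names are my own):

@interface NSDecimalNumber (DoubleConvenience)
+ (NSDecimalNumber *)decimalNumberWithDouble:(double)value;
@end

@implementation NSDecimalNumber (DoubleConvenience)
+ (NSDecimalNumber *)decimalNumberWithDouble:(double)value
{
    // Same idea as the second line above: route the double through NSNumber's -decimalValue.
    return [self decimalNumberWithDecimal:[[NSNumber numberWithDouble:value] decimalValue]];
}
@end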

Rob Napier
However, you will be exposing yourself to potential floating point representation problems in the latter implementation, which could negate any benefits of working with NSDecimal or NSDecimalNumber.
Brad Larson
Yep, that's right. That's why I want to switch to that string input technique. What a pity they did not give the option to specify the character used for the decimal point ;) I'm going to stick with the en_US locale approach.
HelloMoon