I am trying to display some decimals in a scientific style, using NSDecimalNumber and NSNumberFormatter objects. When I set the minimum and maximum number of fraction digits, though, some information from the original number seems to be lost. For example, consider the following code:

NSDecimalNumber *dn = [NSDecimalNumber decimalNumberWithDecimal:[[NSNumber numberWithInt:3123400] decimalValue]];
NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init];
[formatter setNumberStyle:NSNumberFormatterScientificStyle];
[formatter setMinimumFractionDigits:2];
[formatter setMaximumFractionDigits:2];

NSString *result = [formatter stringFromNumber:dn];
// Expected: result is 3.12E6
// Actual: result is 3.10E6

[formatter release];

In this case, I would have expected the NSString variable result to equal 3.12E6, since the setMinimumFractionDigits and setMaximumFractionDigits calls lock the final result at two fraction digits. Unfortunately, the second fraction digit is a 0 instead of the expected 2.

Increasing the setMinimumFractionDigits and setMaximumFractionDigits values to 3 yields a result of 3.120E6. The last digit, in this case the third fraction digit, again remains 0 instead of the expected 3 (see the sketch below).
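
For reference, here is a minimal sketch of that 3-digit variant, assuming the same dn and formatter objects from the snippet above (the threeDigitResult variable name is just for illustration):

// Reuse dn (3123400, i.e. 3.1234E6) and formatter from the earlier snippet
[formatter setMinimumFractionDigits:3];
[formatter setMaximumFractionDigits:3];

NSString *threeDigitResult = [formatter stringFromNumber:dn];
// Expected: threeDigitResult is 3.123E6
// Actual: threeDigitResult is 3.120E6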

Why is the last digit 0 in these cases? Is there some setup or configuration of the NSNumberFormatter that I'm missing? I am using Xcode version 3.2.2.