I want to do some fairly complex arithmetic that requires very high precision, e.g. calculations like

10000000000 + 0.00000000001 = 10000000000.00000000001
10000000000.00000000001 * 3 = 30000000000.00000000003

I want to use NSDecimalNumber for this kind of math, but the problem is: how do I feed it these values?

The documentation says:

- (id)initWithMantissa:(unsigned long long)mantissa exponent:(short)exponent isNegative:(BOOL)flag
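
If I read that right, it builds mantissa × 10^exponent (negated when the flag is set), so I'd expect usage like this (just my sketch):

// 1.2345 == 12345 × 10^-4, if I understand the docs correctly
NSDecimalNumber *n = [[NSDecimalNumber alloc] initWithMantissa:12345 exponent:-4 isNegative:NO];
NSLog(@"%@", n); // should print 1.2345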

The first problem I see is the mantissa. It requires an unsigned long long. As I understand that data type, it is a floating-point type, right? If so, the entered value is already "dirty" at this point; it may have unwanted fractional digits somewhere at the end. I couldn't find good documentation on unsigned long long from Apple, but I remember a code snippet where someone fed the mantissa with a CGFloat, which is why I assume it's a floating-point type.

Well, if it is indeed some super floating-point data type, then the hard question is: how do I get a clean, really clean integer into this thing? So clean that I could multiply it by half a trillion without getting wrong results? To make it concrete, here's roughly the check I'd want to pass, sketched below.
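
This is only my sketch (decimalNumberByMultiplyingBy: is the arithmetic method from the NSDecimalNumber docs; the numbers are just my example):

NSDecimalNumber *big = [[NSDecimalNumber alloc] initWithMantissa:10000000000ULL exponent:0 isNegative:NO];
NSDecimalNumber *halfTrillion = [[NSDecimalNumber alloc] initWithMantissa:500000000000ULL exponent:0 isNegative:NO];
// I'd expect exactly 5000000000000000000000, with no drift at the end
NSLog(@"%@", [big decimalNumberByMultiplyingBy:halfTrillion]);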

Are there good tutorials on the usage of NSDecimalNumber in practice?

Edit: No problem here! Thanks everyone!

+2  A: 

If you really are concerned about feeding in less precise types, I'd recommend using -initWithString:, -initWithString:locale:, +decimalNumberWithString:, or +decimalNumberWithString:locale:. Using the string description avoids ever having to convert the numerical representation to a floating point or other numerical type before generating your NSDecimalNumber.
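
For example, something along these lines (a minimal sketch using the numbers from the question):

NSDecimalNumber *a = [NSDecimalNumber decimalNumberWithString:@"10000000000"];
NSDecimalNumber *b = [NSDecimalNumber decimalNumberWithString:@"0.00000000001"];
NSDecimalNumber *sum = [a decimalNumberByAdding:b];
NSDecimalNumber *product = [sum decimalNumberByMultiplyingBy:[NSDecimalNumber decimalNumberWithString:@"3"]];
NSLog(@"%@", sum);     // 10000000000.00000000001
NSLog(@"%@", product); // 30000000000.00000000003

If the input strings might use a decimal separator other than ".", pass an explicit locale via the locale: variants.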

Brad Larson