
I'm running into the double imprecision problem with common mathematical operations. It's the one where a simple addition appends a trailing 00000000x to my double.

I've read in another question that decimals should be used instead.

But the app I'm using this for is statistics-related, and performance is an issue. I've also read that Decimal degrades performance considerably in the long run.

Are there any reasonable alternatives or approaches I could take to avoid imprecision without using the Decimal datatype?

+1  A: 

Round to the desired number of decimal places when you ultimately display or store the result as a string after doing all of your calculations.

For example, use the following for 4 decimal places:

double d = 0.1 + 0.2;   // any computed result; here, 0.30000000000000004
string s = String.Format(
    System.Globalization.CultureInfo.InvariantCulture,
    "{0:0.0000}", d);
Console.WriteLine(s);   // prints "0.3000"

Be aware that this does not mean you will get the same result as if you did it with pencil-and-paper, or with a handheld calculator, or with unlimited precision math. Nor does it mean that the displayed result is correct +/-0.00005 (for the above example): there can be accumulated approximation errors from intermediate calculations.
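For instance, here is a small sketch (the loop count and values are purely illustrative) showing error accumulating across many additions while the formatted output still looks clean:

double sum = 0.0;
for (int i = 0; i < 10000; i++)
    sum += 0.1;   // 0.1 is not exactly representable in binary

Console.WriteLine(sum.ToString("R", System.Globalization.CultureInfo.InvariantCulture));
// raw double: something like 1000.0000000001588
Console.WriteLine(String.Format(System.Globalization.CultureInfo.InvariantCulture, "{0:0.0000}", sum));
// formatted:  1000.0000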

The following shows how to accomplish something similar with scientific notation:

string s = String.Format(System.Globalization.CultureInfo.InvariantCulture,
    "{0:0.0000E0}", d);   // e.g. "3.0000E-1" for d = 0.30000000000000004

See http://msdn.microsoft.com/en-us/library/0c899ak8.aspx for more information on custom numeric format strings.

If you are not interested in actually displaying or storing the value as a string, another option is to convert to Decimal and round after performing the performance-critical calculations in double:

Decimal dec = Decimal.Round((Decimal)d, 4);
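For example, a minimal sketch of that pattern (the division here is just a stand-in for the real performance-critical work):

double d = 2.0 / 3.0;                        // fast double arithmetic: 0.66666666666666663
Decimal dec = Decimal.Round((Decimal)d, 4);  // convert and round once at the end
Console.WriteLine(dec);                      // 0.6667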

Note that when exact correspondence with pencil-and-paper methods is required, you should use Decimal for all steps of the calculation, as there are then no base-10 to base-2 conversions and hence none of the approximation error associated with such conversions. That being said, Decimal is not unlimited precision: there are still ways to encounter approximation errors, though for the most common use cases (e.g., many kinds of financial calculations) Decimal will "just work".
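To illustrate both points with a short sketch: repeated addition of 0.1 is exact in Decimal, but a non-terminating quotient like 1/3 still cannot be represented exactly:

decimal sum = 0m;
for (int i = 0; i < 10; i++)
    sum += 0.1m;                      // 0.1m is exact in base 10
Console.WriteLine(sum == 1m);         // True

decimal third = 1m / 3m;              // rounded to 28-29 significant digits
Console.WriteLine(third * 3m);        // 0.9999999999999999999999999999
Console.WriteLine(third * 3m == 1m);  // False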

binarycoder