I keep seeing people using doubles in C#. I know I read somewhere that doubles sometimes lose precision. My question is: when should I use a double and when should I use a decimal type? Which type is suitable for money computations? (i.e. greater than $100 million)

+5  A: 

For money: decimal. It costs a little more memory, but doesn't have the rounding troubles that double sometimes has.

Clement Herreman
+2  A: 

There is a question on MSDN that has a pretty good explanation of the differences between decimal and double.

Matthew Jones
+46  A: 

For money, always decimal. It's why it was created.

If numbers must add up correctly or balance, use decimal. This includes any financial storage or calculations, scores, or other numbers that people might calculate by hand.

If the exact value of numbers is not important, use double for speed. This includes graphics, physics or other physical sciences computations where there is already a "number of significant digits".
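
For example, a minimal C# sketch (the amounts and loop count are arbitrary) of a sum that must balance:

double dTotal = 0.0;
decimal mTotal = 0.0m;
for (int i = 0; i < 100; i++)  // add ten pence, one hundred times
{
    dTotal += 0.10;
    mTotal += 0.10m;
}
Console.WriteLine(dTotal == 10.0);   // False - binary rounding error accumulates
Console.WriteLine(mTotal == 10.0m);  // True - 0.10 is exact in decimal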

David
Right, that's exactly what I'd answer.
Clement Herreman
It's not that double is inaccurate - it has *relative* accuracy and can represent very large or small magnitudes that decimal cannot handle at all.
Michael Borgwardt
A: 

Definitely use integer types for your money computations. This cannot be emphasized enough, since at first glance it might seem that a floating point type is adequate.

Here's an example in Python:

>>> amount = float(100.00) # one hundred dollars
>>> print amount
100.0
>>> new_amount = amount + 1
>>> print new_amount
101.0
>>> print new_amount - amount
1.0

Looks pretty normal.

Now try this again with 10^20 Zimbabwe dollars:

>>> amount = float(1e20)
>>> print amount
1e+20
>>> new_amount = amount + 1
>>> print new_amount
1e+20
>>> print new_amount-amount
0.0

As you can see, the dollar disappeared.

If you use an integer type, it works fine:

>>> amount = int(1e20)
>>> print amount
100000000000000000000
>>> new_amount = amount + 1
>>> print new_amount
100000000000000000001
>>> print new_amount - amount
1
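
The same idea as a C# sketch, holding the amount as whole pence in a long (illustrative, not a full money type):

long balance = 52053252;                     // £520,532.52 held as pence
balance += 1;                                // add a penny - always exact
Console.WriteLine($"£{balance / 100m:N2}");  // £520,532.53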
Otto Allmendinger
You don't even need very large or small values to see the difference between a double's base-2 approximation and the actual base-10 value; many small values cannot be stored accurately at all. Calculate 1 - 0.9 - 0.1 (make sure the compiler doesn't optimize the expression away) and compare the result to zero: with doubles you get something like 2e-17 instead of 0. And make sure you do a real comparison - many print/ToString functions round doubles past a certain number of decimal places, hiding exactly these errors.
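
In C# that looks roughly like:

double d = 1.0 - 0.9 - 0.1;           // evaluated left to right
Console.WriteLine(d == 0.0);          // False
Console.WriteLine(d.ToString("R"));   // -2.7755575615628914E-17
decimal m = 1.0m - 0.9m - 0.1m;
Console.WriteLine(m == 0.0m);         // True - exact in decimal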
David
+2  A: 

Decimal is for exact values. Double is for approximate values.

USD: $12,345.67 (decimal)
CAD: $13,617.27 (decimal)
Exchange rate: 1.102932 (double)
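
A sketch of how that split can look in C# (figures from above; rounding to the cent is one reasonable policy):

decimal usd = 12345.67m;  // exact monetary amount
double rate = 1.102932;   // approximate market rate
// C# won't mix the two implicitly; the cast makes the approximation explicit
decimal cad = Math.Round(usd * (decimal)rate, 2);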
Ian Boyd
+7  A: 

My question is when should a use a double and when should I use a decimal type?

decimal is for when you work with values in the range of 10^(+/-28) and have expectations about the behaviour based on base-10 representations - basically money.

double is for when you need relative accuracy (i.e. losing precision in the trailing digits on large values is not a problem) across wildly different magnitudes - double covers more than 10^(+/-300). Scientific calculations are the best example here.

which type is suitable for money computations?

decimal, decimal, decimal

Accept no substitutes.

The most important factor is that double, being implemented as a binary fraction, cannot accurately represent many decimal fractions (like 0.1) at all, and its overall number of digits is smaller, since it is 64-bit wide vs. 128-bit for decimal. Finally, financial applications often have to follow specific rounding modes (sometimes mandated by law). decimal supports these; double does not.
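
For instance, .NET's Math.Round lets you choose the midpoint rule when rounding a decimal:

decimal charge = 2.345m;
Console.WriteLine(Math.Round(charge, 2));                                // 2.34 - banker's rounding (ToEven) is the default
Console.WriteLine(Math.Round(charge, 2, MidpointRounding.AwayFromZero)); // 2.35 - commercial rounding
// With double, the literal 2.345 is only approximately 2.345 to begin with,
// so the midpoint rule may never even come into play.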

Michael Borgwardt
A: 

  • System.Single / float - 7 significant digits
  • System.Double / double - 15-16 significant digits
  • System.Decimal / decimal - 28-29 significant digits

The way I was stung by using the wrong type, a good few years ago, was with large amounts:

  • £520,532.52 - 8 digits
  • £1,323,523.12 - 9 digits

A float, with only 7 digits, runs out at around £100,000 once you need the pence.
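
The first figure above makes the point; a quick sketch:

float f = 520532.52f;                 // needs 8 significant digits, float has ~7
Console.WriteLine(f.ToString("F2"));  // 520532.53 - already off by a penny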

A 16 digit monetary value:

  • £1,234,567,890,123.45

A double, with 15-16 digits, runs out at around 9 trillion if you need the pence. With division and comparisons it gets more complicated still (I'm definitely no expert in floating point and irrational numbers - see Marc's point). Mixing decimals and doubles also causes issues:

A mathematical or comparison operation that uses a floating-point number might not yield the same result if a decimal number is used because the floating-point number might not exactly approximate the decimal number.

"When should I use double instead of decimal?" has some similar and more in-depth answers.

Using double instead of decimal for monetary applications is a micro-optimization - that's the simplest way I look at it.

Chris S