views: 568

answers: 6

I was testing this code from Brainteasers:

        double d1 = 1.000001;

        double d2 = 0.000001;

        Console.WriteLine((d1 - d2) == 1.0);

And the result is "False". When I change the data type:

        decimal d1 = 1.000001M;

        decimal d2 = 0.000001M;

        decimal d3 = d1-d2;

        Console.WriteLine(d3 == 1);

The program writes the correct answer: "True".

This problem only uses 6 digits after the decimal point. What happened to the 15 digits of precision?

+17  A: 

This has nothing to do with precision - it has to do with representational rounding errors.

System.Decimal is capable of representing large floating-point numbers with a significantly reduced risk of rounding errors like the one you are seeing. System.Single and System.Double are not capable of this; they round such numbers off, which creates issues like the one in your example.

System.Decimal uses a scaling factor to hold the position of the decimal point, thus allowing for exact representation of the given floating-point number, whereas System.Single and System.Double only approximate your value as best they can.
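
A quick way to see the rounding is to print the double result with the "R" (round-trip) format specifier, which shows the full stored value instead of the rounded default display. A minimal sketch (the exact trailing digits are simply whatever the nearest representable doubles happen to be):

        double d1 = 1.000001;
        double d2 = 0.000001;
        Console.WriteLine((d1 - d2).ToString("R"));   // e.g. 0.99999999999999989
        Console.WriteLine(1.000001M - 0.000001M);     // 1.000000 (decimal subtracts exactly)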

For more information, please see System.Double:

Remember that a floating-point number can only approximate a decimal number, and that the precision of a floating-point number determines how accurately that number approximates a decimal number. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally. The precision of a floating-point number has several consequences:

  • Two floating-point numbers that appear equal for a particular precision might not compare equal because their least significant digits are different.

  • A mathematical or comparison operation that uses a floating-point number might not yield the same result if a decimal number is used because the floating-point number might not exactly approximate the decimal number.

Andrew Hare
http://msdn.microsoft.com/en-us/library/system.decimal.aspx "The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding. For example, the following code produces a result of 0.9999999999999999999999999999 rather than 1."
quant_dev
@quant_dev: Fair enough, I have edited my answer to reflect that :)
Andrew Hare
"System.Decimal uses a scaling factor to hold the position of the decimal place thus allowing for exact representation of a given floating-point number" -- this isn't true; for example, 1/3 is not going to be represented exactly neither by doubles (which use a binary base) nor decimals (which use a decimal base). In fact, the only difference between Double and Decimal is the base, numbers of bits reserved for the exponent and significand, and whether the exponent can change sign (yes for Double, no for Decimal). Apart from that, they're similar. System.Decimal doesn't have arbitrary precision!
quant_dev
I don't believe I claim anywhere that `System.Decimal` has arbitrary precision. When I said "allowing for exact representation of a given floating-point number" I really meant _the_ given number as in the number in the example. I will edit my answer to reflect that as well.
Andrew Hare
Yeah, I checked my variable values and I got 0.99999989 for d3 when it was of type double.
yelinna
+2  A: 

Floating-point numbers are, by design, not precise to a particular number of decimal digits. If you want that sort of guarantee, you should look at the decimal data type.

Adam Robinson
+3  A: 

Avoid comparing floating-point numbers for equality.

The Chairman
Long live Chairman Mao
Adam Luter
Dude! I thought you were dead! Good to see you've found a new hobby. Hope you enjoy programming!
Beska
+2  A: 

The precision isn't absolute, because most decimal numbers cannot be converted to binary exactly.

For example, 0.1 in decimal repeats forever when represented in binary: it converts to 0.000110011001100110011... without end. No amount of precision will store that exactly, and the same is true of the 0.000001 in your example.
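
You can see the stored approximation by printing a value with the "G17" format, which gives enough digits to distinguish the underlying binary value from the decimal you wrote. A rough sketch (the digits shown are whatever the nearest representable doubles are):

        Console.WriteLine((0.1).ToString("G17"));      // e.g. 0.10000000000000001
        Console.WriteLine((0.000001).ToString("G17")); // e.g. 9.9999999999999995E-07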

Joel Coehoorn
+6  A: 

Generally, the way to check floating-point values for equality is to check for near-equality, i.e., check that the difference is no greater than some very small value (called epsilon) for that datatype. For example,

if (Math.Abs(d1 - d2) <= Double.Epsilon) ...

This tests whether d1 and d2 differ by no more than Double.Epsilon, the smallest positive value a double can represent.

See: http://msdn.microsoft.com/en-us/library/system.double.epsilon.aspx
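
Applied to the values from the question, the same near-equality idea with an explicitly chosen tolerance looks like this (the 1e-9 below is an arbitrary value picked purely for illustration):

        double d1 = 1.000001;
        double d2 = 0.000001;
        double tolerance = 1e-9;   // arbitrary tolerance chosen for this illustration
        Console.WriteLine(Math.Abs((d1 - d2) - 1.0) <= tolerance);   // True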

Loadmaster
+1  A: 

The decimal type implements decimal floating point whereas double is binary floating point.

The advantage of decimal is that it behaves as a human would with respect to rounding, and if you initialise it with a decimal value, that value is stored exactly as you specified. This is only true for decimal numbers of finite length that are within the representable range and precision. If you initialised it with, say, 1.0M/3.0M, it would not be stored precisely, just as 0.333-recurring cannot be written out exactly on paper.

If you initialise a binary FP value with a decimal literal, it is converted from the human-readable decimal form to a binary representation that is seldom exactly the same value.

The primary purpose of the decimal type is implementing financial applications. In the .NET implementation it also has far higher precision than double; however, binary FP is supported directly by the hardware, so it is significantly faster than decimal FP operations.

Note that double is accurate to approximately 15 significant digits, not 15 decimal places. d1 is initialised with a 7-significant-digit value, not a 6-digit one.
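
A short sketch of both points: a decimal literal is stored exactly as written, but a value such as 1.0M/3.0M is still truncated to decimal's 28-29 significant digits (the division result below echoes the System.Decimal documentation example quoted in the comments above):

        Console.WriteLine(1.000001M);      // 1.000001 (stored exactly as written)
        decimal third = 1.0M / 3.0M;
        Console.WriteLine(third);          // 0.3333333333333333333333333333
        Console.WriteLine(third * 3.0M);   // 0.9999999999999999999999999999, not 1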

Clifford