One error I stumble upon every few months is this one:

        double x = 19.08;
        double y = 2.01;
        double result = 21.09;

        if (x + y == result)
        {
            MessageBox.Show("x equals y");
        }
        else
        {
            MessageBox.Show("that shouldn't happen!");  // <-- this code fires
        }

You would expect the code to display "x equals y", but that's not the case.
The short explanation is that many decimal fractions cannot be represented exactly as binary fractions, so they don't fit into a double without rounding.

Example: 2.625 would look like:

10.101

because

1 * 2 + 0 * 1 + 1 * 0.5 + 0 * 0.25 + 1 * 0.125 = 2.625

And some values (such as 19.08, 2.01 and 21.09) cannot be represented exactly with the bits of a double, so the computed sum and the literal end up differing in their least significant bits.
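
A quick way to see what is actually stored is the "R" (round-trip) format. This is a minimal console sketch; the exact digits printed depend on the runtime, so treat them as illustrative:

        double x = 19.08;
        double y = 2.01;

        // "R" prints enough digits to round-trip the stored value exactly.
        Console.WriteLine((x + y).ToString("R"));   // e.g. 21.090000000000003
        Console.WriteLine((21.09).ToString("R"));   // 21.09
        Console.WriteLine(x + y == 21.09);          // False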

One workaround is to compare within a small tolerance (epsilon) instead of testing exact equality:

        double x = 19.08;
        double y = 2.01;
        double result = 21.09;
        double EPSILON = 10E-10;

        if (Math.Abs(x + y - result) < EPSILON)  // compare within a tolerance, not for exact equality
        {
            MessageBox.Show("x equals y"); // <-- this code fires
        }
        else
        {
            MessageBox.Show("that shouldn't happen!");
        }
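
A fixed absolute epsilon can be too strict for very large values and too loose for very small ones. A relative tolerance scales with the operands; the helper below is only a sketch (the name AlmostEqual is mine, not from any library):

        // Sketch of a relative comparison: the tolerance scales with the
        // magnitude of the inputs instead of being a fixed absolute value.
        static bool AlmostEqual(double a, double b, double relTol)
        {
            double diff  = Math.Abs(a - b);
            double scale = Math.Max(Math.Abs(a), Math.Abs(b));
            return diff <= relTol * scale;
        }

        // usage: AlmostEqual(19.08 + 2.01, 21.09, 1E-9) --> true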

If I use decimal instead of double in the first example, the result is "x equals y".
But I'm asking myself if this is because the "decimal" type is not vulnerable to this behaviour, or if it just works in this case because the values "fit" into 128 bits.

Maybe someone has a better solution than using a constant?

Btw. this is not a .NET/C#-specific problem; I think it happens in most programming languages.

+4  A: 

Decimal will be accurate so long as you stay within values which are naturally decimals in an appropriate range. So if you just add and subtract, for example, without doing anything which would skew the range of digits required too much (adding a very very big number to a very very small number) you will end up with easily comparable results. Multiplication is likely to be okay too, but I suspect it's easier to get inaccuracies with it.
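
For example, the values from the question compare exactly when kept in decimal (a small illustrative snippet, not part of the original answer):

        // 19.08, 2.01 and 21.09 are all exactly representable as decimals,
        // so the addition and the comparison are exact.
        decimal x = 19.08m;
        decimal y = 2.01m;
        Console.WriteLine(x + y == 21.09m);  // True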

As soon as you start dividing, that's where the problems can come - particularly if you start dividing by numbers which include prime factors other than 2 or 5.
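
A short illustration of the division case (a sketch; the behaviour follows from decimal's 28-29 significant digits):

        // 1/3 has no finite decimal expansion, so the quotient is rounded
        // to decimal's precision and the round trip no longer comes back.
        decimal third = 1m / 3m;
        Console.WriteLine(third);             // 0.3333333333333333333333333333
        Console.WriteLine(third * 3m == 1m);  // False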

Bottom line: it's safe in certain situations, but you really need to have a good handle on exactly what operations you'll be performing.

Note that it's not the 128-bitness of decimal which is helping you here - it's the representation of numbers as floating decimal point values rather than floating binary point values. See my articles on .NET binary floating point and decimal floating point for more information.

Jon Skeet
Thanks, but the problem doesn't occur only with division. During the transformation decimal -> binary -> decimal it can happen that a number with 2 decimal places converts to a long (even infinite?) sequence of binary "decimal" places, which are discarded if the representation needs more than 128 bits. If you convert this "rounded" binary value back to decimal, you get a different result.
SchlaWiener
@SchlaWiener: If you're converting between double and decimal, you'll certainly have a problem. Don't do that. Stick with one format or the other.
Jon Skeet
I don't want to convert. Regarding your article "decimal floating point": you mention that decimal is base 10. Does that mean that a + b == c always returns true if a plus b really is c?
SchlaWiener
That entirely depends on what you mean by "if a plus b really is c". In some situations the result will be rounded, in some situations it won't be. If you could give concrete examples, we could answer more definitively.
Jon Skeet
A: 

System.Decimal is just a floating-point number with a different base, so, in theory, it is still vulnerable to the sort of error you point out. I think you just happened upon a case where rounding doesn't happen. More information here.
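
For instance (a sketch added for illustration), the precision limit shows up as soon as a sum needs more than decimal's roughly 28-29 significant digits:

        // decimal holds about 28-29 significant digits; a sum spanning more
        // than that gets rounded, just like with binary floating point.
        decimal big   = 1E28M;
        decimal small = 1E-28M;
        Console.WriteLine(big + small == big);  // True: the tiny addend is rounded away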

Steve Rowe
Indeed, it's just that most amounts that people deal with are base 10 and the decimal type can represent those amounts exactly (within the precision limits). Thus the decimal type is ideal for real world usage such as dealing with sums of money.
locster
A: 

Yes, the .NET System.Double structure is subject to the problem you describe.

From http://msdn.microsoft.com/en-us/library/system.double.epsilon.aspx:

Two apparently equivalent floating-point numbers might not compare equal because of differences in their least significant digits. For example, the C# expression, (double)1/3 == (double)0.33333, does not compare equal because the division operation on the left side has maximum precision while the constant on the right side is precise only to the specified digits. If you create a custom algorithm that determines whether two floating-point numbers can be considered equal, you must use a value that is greater than the Epsilon constant to establish the acceptable absolute margin of difference for the two values to be considered equal. (Typically, that margin of difference is many times greater than Epsilon.)
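
In other words, Double.Epsilon itself (roughly 4.9E-324) is far too small to serve as a practical tolerance; an illustrative sketch:

        // Double.Epsilon is the smallest positive subnormal double (~4.9E-324).
        // Any realistic rounding error is many orders of magnitude larger.
        double diff = Math.Abs((19.08 + 2.01) - 21.09);
        Console.WriteLine(diff < Double.Epsilon);  // False: the error is around 1E-15, not 1E-324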

Michael Burr