views: 1535

answers: 6
I always tell people that in C# a variable of type double is not suitable for money. All sorts of weird things could happen. But I can't seem to create an example to demonstrate some of these issues. Can anyone provide such an example?

(edit: this post was originally tagged C#; some replies refer to specific details of decimal, which therefore means System.Decimal.)

(edit 2: I was specifically asking for some C# code, so I don't think this is language-agnostic only.)

+39  A: 

Very, very unsuitable. Use decimal.

        double x = 3.65, y = 0.05, z = 3.7;
        Console.WriteLine((x + y) == z); // false

(example from Jon's page here - recommended reading ;-p)
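
For contrast, a rough sketch of the same comparison using decimal literals (the m suffix); since 3.65, 0.05 and 3.7 are all exactly representable in base 10, the equality holds:

        decimal x = 3.65m, y = 0.05m, z = 3.7m;
        Console.WriteLine((x + y) == z); // true - the base-10 digits are stored exactly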

Marc Gravell
Darn it, if I'd known I had an example on my own page, I wouldn't have come up with a different one ;)
Jon Skeet
But hey, 2 examples is better than 1...
Marc Gravell
+10  A: 

You will get odd errors effectively caused by rounding. In addition, comparisons with exact values are extremely tricky - you usually need to apply some sort of epsilon to check that the actual value is "near" a particular one.

Here's a concrete example:

using System;

class Test
{
    static void Main()
    {
        double x = 0.1;
        double y = x + x + x;
        Console.WriteLine(y == 0.3); // Prints False
    }
}
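
A minimal sketch of the epsilon-style check mentioned above (the tolerance is arbitrary and would have to be chosen per application):

    double x = 0.1;
    double y = x + x + x;
    Console.WriteLine(Math.Abs(y - 0.3) < 0.000000001); // Prints True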
Jon Skeet
+3  A: 

Yes, it's unsuitable.

If I remember correctly, double has about 17 significant digits, so normally rounding errors will take place far behind the decimal point. Most financial software uses 4 decimals behind the decimal point, which leaves 13 decimals to work with, so the maximum number you can work with for single operations is still very much higher than the USA national debt. But rounding errors will add up over time. If your software runs for a long time you'll eventually start losing cents. Certain operations will make this worse. For example, adding large amounts to small amounts will cause a significant loss of precision.

You need fixed-point datatypes for money operations; most people don't mind if you lose a cent here and there, but accountants aren't like most people.

edit
According to this site http://msdn.microsoft.com/en-us/library/678hzkk9.aspx, doubles actually have 15 to 16 significant digits instead of 17.

@Jon Skeet decimal is more suitable than double because of its higher precision: 28 or 29 significant digits. That means less chance of accumulated rounding errors becoming significant. Fixed-point datatypes (i.e. integers that represent cents or 100ths of a cent, like I've seen used), as Boojum mentions, are actually better suited.
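
A rough sketch of that fixed-point idea, keeping amounts as a whole number of cents in a long (simplified; a real system also needs explicit rounding rules for division, interest and so on):

    long priceInCents = 365;   // 3.65 stored as 365 cents
    long taxInCents = 5;       // 0.05
    long totalInCents = priceInCents + taxInCents;
    Console.WriteLine(totalInCents == 370);  // true - integer addition is exact
    Console.WriteLine(totalInCents / 100m);  // 3.7 - convert to decimal only for display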

Mendelt
Note that System.Decimal, the suggested type to use in .NET, is still a floating point type - but it's a floating decimal point rather than a floating binary point. That's more important than having fixed precision in most cases, I suspect.
Jon Skeet
+2  A: 

Since decimal uses a scaling factor of multiples of 10, numbers like 0.1 can be represented exactly. In essence, the decimal type represents this as 1 / 10 ^ 1, whereas a double would represent it as something like 104857 / 2 ^ 20 (in reality it is 3602879701896397 / 2 ^ 55).

A decimal can exactly represent any base 10 value with up to 28/29 significant digits (like 0.1). A double can't.
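
A small sketch illustrating this: decimal.GetBits exposes the 96-bit integer and the power-of-ten scale factor, so 0.1m really is stored as the integer 1 scaled down by 10^1:

    int[] bits = decimal.GetBits(0.1m);
    Console.WriteLine(bits[0]);                 // 1 - low 32 bits of the 96-bit integer
    Console.WriteLine((bits[3] >> 16) & 0xFF);  // 1 - scale factor, i.e. divide by 10^1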

Richard Poole
Decimal doesn't have 96 significant digits. It has 96 significant *bits*. Decimal has around 28 significant digits.
Jon Skeet
In which language are you speaking of the decimal type? Or do all languages that support this type support it in exactly the same way? Might want to specify.
Adam Davis
@Adam - this post originally had the C# tag, so we are talking about System.Decimal specifically.
Marc Gravell
Oops, well spotted Jon! Corrected. Adam, I'm talking C#, as per the question. Do any other languages have a type called decimal?
Richard Poole
@Richard: Well, all languages that are based on .NET do, since System.Decimal is not a C#-specific type; it is a .NET type.
awe
@awe - I meant non-.NET languages. I'm ignorantly unaware of any that have a native base 10 floating point type, but I have no doubt they exist.
Richard Poole
A: 

My understanding is that most financial systems express currency using integers -- i.e., counting everything in cents.

IEEE double precision actually can represent all integers exactly in the range -2^53 through +2^53. (Hacker's Delight, pg. 262) If you use only addition, subtraction and multiplication, and keep everything to integers within this range then you should see no loss of precision. I'd be very wary of division or more complex operations, however.
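
A quick sketch of that 2^53 boundary using nothing but plain double arithmetic: up to 2^53 every integer is exact, but immediately past it the gaps between representable doubles are bigger than 1:

    double limit = 9007199254740992;                  // 2^53, still exactly representable
    Console.WriteLine(limit - 1 == 9007199254740991); // True
    Console.WriteLine(limit + 1 == limit);            // True - 2^53 + 1 rounds back to 2^53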

Boojum
If you're only going to use integers though, why not use an integer type to start with?
Jon Skeet
Heh - int64_t can represent all integers exactly in the range -2^63 to +2^63-1. If you use only addition, subtraction and multiplication, and keep everything to integers within this range then you should see no loss of precision. I'd be very wary of division, however.
Steve Jessop
A: 

No, a double will always have rounding errors; use "decimal" if you're on .NET...

Thomas Hansen
Careful. *Any* floating-point representation will have rounding errors, decimal included. It's just that decimal will round in ways that are intuitive to humans (and generally appropriate for money), and binary floating point won't. But for non-financial number-crunching, double is often much, much better than decimal, even in C#.
Daniel Pryden
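
A short sketch of that last point - decimal rounds too, just in base 10 rather than base 2:

    decimal third = 1m / 3m;                // 0.3333333333333333333333333333
    Console.WriteLine(third * 3m == 1m);    // False - it comes out as 0.9999999999999999999999999999
    Console.WriteLine(0.1m + 0.2m == 0.3m); // True - exact in base 10, unlike double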