views: 359

answers: 6

So I've always been told NEVER to do this, and this time I pose the question to you: why? I'm sure there is a very good reason; I simply do not know what it is. :-P

+19  A: 

Because floats and doubles cannot accurately represent most base 10 real numbers.

This is how an IEEE-754 floating-point number works: it dedicates a bit for the sign, a few bits to store an exponent, and the rest for the actual fraction. This leads to numbers being represented in a form similar to 1.45 * 10^4; except that instead of the base being 10, it's two.

Certain decimal numbers cannot be represented exactly in base two. For instance, if you store 0.1 in a double, what you actually get is something like 0.1000000000000000055511151231257827 (the nearest value a double can hold), and software rounds it to an acceptable value when displaying it. However, when dealing with money, those tiny errors accumulate as you add, subtract, multiply, and divide, and the results drift further from the exact decimal amounts. This makes floats and doubles inadequate for dealing with money.
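
A minimal Java sketch of that accumulation (the loop and variable names here are mine, not part of the original answer):

double total = 0.0;
for (int i = 0; i < 100; i++) {
    total += 0.10;   // add ten cents, one hundred times
}
// With exact decimal arithmetic this would be exactly 10.0, but the binary
// approximation of 0.10 drifts a little further with every addition.
System.out.println(total);          // prints a value close to, but not exactly, 10.0
System.out.println(total == 10.0);  // false with IEEE-754 doubles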

zneak
Could you please be more specific?
Fran Fitzpatrick
@Fran You will get rounding errors, and in some cases where large quantities of currency are involved, interest rate computations can be grossly off.
linuxuser27
...most base 10 fractions, that is. For example, 0.1 has no exact binary floating-point representation. So, `1.0 / 10 * 10` may not be the same as 1.0.
Chris Jester-Young
+4  A: 

Floats and doubles are approximate. If you create a BigDecimal and pass a float into the constructor, you see what the float actually equals:

groovy:000> new BigDecimal(1.0F)
===> 1
groovy:000> new BigDecimal(1.01F)
===> 1.0099999904632568359375

This probably isn't how you want to represent $1.01.

The problem is that the IEEE spec doesn't have a way to represent every fraction exactly; some of them come out as repeating fractions in binary, so you end up with approximation errors. Accountants like things to come out exactly to the penny, and customers will be annoyed if they pay their bill in full, the payment is processed, and they still owe $0.01 that earns them a fee or keeps them from closing their account. So it's better to use exact types like decimal (in C#) or java.math.BigDecimal in Java.
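
As a hedged illustration of that last point (the values and variable names are my own), constructing a BigDecimal from a String rather than from a double keeps the decimal value you meant:

// Built from a double literal, BigDecimal records the binary approximation of 1.01:
System.out.println(new BigDecimal(1.01));    // a long expansion, not exactly 1.01
// Built from a String (or via BigDecimal.valueOf), it stays exact:
BigDecimal price = new BigDecimal("1.01");
BigDecimal tax = new BigDecimal("0.07");
System.out.println(price.add(tax));          // 1.08, exact to the penny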

Nathan Hughes
If account amounts are always rounded off to the nearest penny, and two values that are within half a cent are considered equal, what's the problem? If things aren't always rounded to the penny (so that e.g. an account with $0.24 in it charged a 2% monthly interest rate would show a visible charge of $0.01 every other month) one has to deal with rounding issues no matter how one stores the data.
supercat
@supercat: Suppose the number is off by about 1/100 of a cent. You add together 100 such numbers. Now your total is off by a penny. In financial transactions, that's usually not acceptable. Similarly if you, say, multiply a monthly payment amount by 360 to get the total of all payments over the course of a 30-year mortgage. Etc. Also note that a float or double holds a fixed number of digits, total before and after the "decimal point". (It's really a binary point, not decimal.) So the bigger the scale of the number, the less accuracy on the decimal places.
Jay
@Jay: For most applications, even using dollars as units, rounding to the nearest penny would result in an error of well below 1/100 of a cent. And I won't dispute that it's better to use pennies as units even so. There are interesting questions, though, when it comes to things like daily interest. If someone has a $10 balance on an account which charges 0.049% daily interest, should the interest be rounded to $0.00 daily, or should the interest show up as $0.01 every other day?
supercat
@supercat: "rounding to the nearest penny would result in an error of well below 1/100 of a cent". I don't know what you mean. Rounding what to the nearest penny? What is the definition of error here? RE comment about interest: Sure, any calculation on monetary amounts, or anything else where we will eventually round, must define what the rounding rules are. You might well need to carry extra decimal places in intermediate calculations.
Jay
A: 

See this SO question: Rounding Errors?

adrift
+6  A: 

From Bloch, J., Effective Java, 2nd ed, Item 48:

The float and double types are particularly ill-suited for monetary calculations because it is impossible to represent 0.1 (or any other negative power of ten) as a float or double exactly.

For example, suppose you have $1.03 and you spend 42c. How much money do you have left?

System.out.println(1.03 - .42);

prints out 0.6100000000000001.

The right way to solve this problem is to use BigDecimal, int or long for monetary calculations.
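
For comparison, a minimal sketch of the same subtraction done with BigDecimal String constructors (using the double constructor here would just carry the binary error along):

System.out.println(new BigDecimal("1.03").subtract(new BigDecimal("0.42")));

prints out 0.61, exactly.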

dogbane
+1 for mentioning Bloch
Bill Michell
A: 

What about using a double, and storing the number of pennies, or some particular fraction of a cent? If all transactions involve an integer number of pennies (or whatever fraction is used), the value will be precise. If some transactions involve weird fractions (e.g. interest payments, discounts, etc.) it may not be practical to store the "true" value precisely in any type of container, so one will have to deal with rounding issues. How is 'double' worse than any other type?

supercat
Integers can't be stored exactly in a float or double either, because floats and doubles only have a certain number of digits of precision. For large enough values, (a + 1) == a. Plus, if you're storing a number of pennies (or fractions of a penny), there's no reason to use a double instead of an int/long.
Mike McNertney
For a double, the perfect-integer limit is 2^53, i.e. 9,007,199,254,740,992; that's enough to count the U.S. national debt to the penny, and even Bill Gates doesn't have that many pennies. As for why to use a double, not all programming environments support a 64-bit long, and applying things like percentage discounts or interest rates to an integer type will cause problems of its own.
supercat
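
A quick Java check of that limit (the variable name is mine; 2^53 is where a double's 53-bit significand stops being able to hold every integer exactly):

long limit = 1L << 53;   // 9,007,199,254,740,992
System.out.println((double) limit == (double) (limit + 1));   // true: 2^53 + 1 rounds back to 2^53 as a double
System.out.println(limit == limit + 1);                        // false: a long keeps them distinct
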
If you're going to store number of pennies, why not just use an int or a long and forget about float? Before Java came along with its built-in BigDecimal, I used to do monetary calculations with integers and then at output time just add the decimal point before the last two digits.
Jay
+1  A: 

I prefer using Integer or Long to represent currency. BigDecimal junks up the source code too much.

You just have to know that all your values are in cents, or in the smallest unit of whatever currency you're using.
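
A minimal sketch of that convention, assuming long cents and formatting only at the output edge (the variable names are mine):

long priceCents = 103;                            // $1.03
long paymentCents = 42;                           // $0.42
long remainingCents = priceCents - paymentCents;
System.out.printf("$%d.%02d%n", remainingCents / 100, remainingCents % 100);   // $0.61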

Tony Ennis