I'm working on some functionality in a financial application. All numbers are represented as decimals, without rounding errors, both in the code and in the database. However, I'm having performance problems, and I'm considering switching to floating-point numbers (float/double) in my own computations. This is based on the assumption that rounding error isn't a problem (which I will have to check with the customer).

However, I would like to know what pitfalls there are if I do this conversion. Is there a type of expression that, when computed using floating-point numbers, may differ significantly from the same expression computed using decimals?
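For instance, here is the kind of divergence I have in mind (a minimal sketch in C#, on the assumption that the application is .NET, given the decimal type):

    using System;

    // Exact in decimal, inexact in binary double:
    Console.WriteLine(0.1m + 0.2m == 0.3m); // True  — decimal represents 0.1 exactly
    Console.WriteLine(0.1  + 0.2  == 0.3);  // False — double cannot represent 0.1 exactly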

+1  A: 

A financial application is usually the number one example of when not to use floating point. So beware.

Anyway, this article has more than you'll ever want to know about floating point. It's mostly about the D programming language, but much of it applies generally.

itsadok
I'm aware that using floating point is somewhat unorthodox; however, some of the computations in my module are already done using floating point because they are infeasible to compute using decimal. In the end the customer (represented by a mathematician) will have to decide.
Martin Liversage
+3  A: 

The problem with using floats is that you'll run into precision issues, possibly where you don't expect them.

If you use float, the largest amount you can represent exactly in cents is 2^24 cents, i.e. $167,772.16; beyond that, consecutive cent values start to collide. If you use double, the limit is 2^53 cents, roughly $90 trillion.
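A quick way to see the float cutoff (a C# sketch; the threshold is the same in any language using IEEE single precision):

    using System;

    float cents = 16_777_216f;           // 2^24 cents = $167,772.16
    float plusOne = cents + 1f;          // add one more cent
    Console.WriteLine(plusOne == cents); // True on typical runtimes: the cent is lost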

Have you considered using a straight int or long (32-bit or 64-bit) to represent $0.01 intervals? That way you can control the values much better, and the performance should still be good. You can also wrap them in a struct to make them easier to use with standard math operators.
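A minimal sketch of such a wrapper in C# (the Money name and its members are made up for illustration):

    // Money stores an exact count of cents in a long and overloads the
    // operators needed for basic arithmetic, so no rounding can occur.
    public readonly struct Money
    {
        private readonly long cents;

        public Money(long cents) => this.cents = cents;

        public static Money operator +(Money a, Money b) => new Money(a.cents + b.cents);
        public static Money operator -(Money a, Money b) => new Money(a.cents - b.cents);
        public static Money operator *(Money a, long n) => new Money(a.cents * n);

        // Dividing a long by 100m is exact decimal arithmetic, used only for display.
        public override string ToString() => (cents / 100m).ToString("0.00");
    }

For example, new Money(199) + new Money(1) prints "2.00".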

Aaron