views: 944
answers: 6

I wanted to see if folks were using decimal for financial applications instead of double. I have seen lots of folks using double all over the place, with unintended consequences.

Do you see others making this mistake?
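To make the problem concrete, here is a minimal sketch in Python (its decimal module plays the same role as .NET's decimal): binary doubles cannot represent most decimal fractions exactly, so even a trivial sum of money drifts.

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly, so ten
# 10-cent items do not sum to exactly one dollar.
total = sum([0.1] * 10)
print(total == 1.0)  # False

# A decimal type keeps base-10 digits exactly.
total_dec = sum([Decimal("0.10")] * 10)
print(total_dec == Decimal("1.00"))  # True
```

The same mismatch exists with `double` in any language using IEEE-754 binary floating point; the decimal type avoids it by storing base-10 digits directly.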

+7  A: 

We did, unfortunately, and we regret it. We had to change all doubles to decimals. Decimals are good for financial applications. You can look at the article A Money type for the CLR:

A convenient, high-performance money structure for the CLR which handles arithmetic operations, currency types, formatting, and careful distribution and rounding without loss.

John
A: 

I have always used Decimal, at least when I had a language that supported it. Otherwise, rounding errors will kill you.

Aheho
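The rounding point above shows up even on a single line item. A sketch using Python's decimal module (the price, tax rate, and rounding mode are illustrative assumptions, not from the thread):

```python
from decimal import Decimal, ROUND_HALF_UP

# Hypothetical line item: $19.99 at an 8.25% tax rate.
price = Decimal("19.99")
rate = Decimal("0.0825")

# The decimal product is exact (1.649175), and the rounding mode is
# chosen explicitly when reducing to cents.
tax = (price * rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(tax)  # 1.65

# With doubles, 19.99 * 0.0825 is computed on values that are already
# approximations, so the final rounding step starts from a slightly
# wrong number.
```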
+5  A: 

Yes, using float or double for financials is a common mistake, leading to much, much pain. decimal is the most obvious choice in this scenario.

For general knowledge, a good discussion of each is here (float/double) and here (decimal).

Marc Gravell
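A quick way to see the difference those float/double and decimal discussions describe (sketched here with Python's decimal module): constructing a decimal directly from a double exposes the value the double actually stores.

```python
from decimal import Decimal

# Passing a float reveals the nearest IEEE-754 double to 0.1:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# Constructing from a string (or from integer cents) gives the
# value you actually meant:
print(Decimal("0.1"))  # 0.1
```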
+2  A: 

This is not as obvious as you may think. I recently had the controller of a large corporation tell me that he wanted his financial reports to match what Excel would generate, which means maintaining calculated results internally at maximum precision and rounding only at the last minute for display purposes. As a result, you can't always match the Excel answers by manual calculation using only the displayed values. His explanation was that there were multiple algorithms for generating the results, each rounding at a different place using decimal values and therefore potentially producing conflicting answers, while the Excel method always produced the same answer.

I personally think he's wrong, but with so many financial people using Excel without understanding how to use it properly for financial calculations, I'll bet there's a lot of people agreeing with this controller.

I don't want to start a religious war, but I'd love to hear other opinions on this.

Ken Paul
It's pretty common for non-programmers to insist that a program produce the same results as whatever system they're using now. I have worked on software where an approximate table lookup was used rather than a more accurate direct calculation, so that the results would match what had been computed by hand.
Mark Bessey
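The disagreement the controller describes is easy to reproduce. A minimal sketch in Python's decimal module (the line amounts are invented to force the two approaches apart), comparing round-at-each-step with the Excel-style round-only-at-display:

```python
from decimal import Decimal, ROUND_HALF_UP

cents = Decimal("0.01")
lines = [Decimal("1.005"), Decimal("2.005"), Decimal("3.005")]

# Approach 1: round each intermediate result to cents, then total.
rounded_each = sum(x.quantize(cents, rounding=ROUND_HALF_UP) for x in lines)

# Approach 2 (Excel-style): keep full precision internally and round
# only once, for display.
rounded_once = sum(lines).quantize(cents, rounding=ROUND_HALF_UP)

print(rounded_each)  # 6.03
print(rounded_once)  # 6.02
```

Both totals are defensible; the disagreement is about where the rounding policy lives, not about decimal versus double.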
+1  A: 

I've run into this a few times. Many languages have nothing of the sort built in, and to someone who doesn't understand the problem it seems like just another hassle, especially when the code appears to work as intended without it.
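When the language has no decimal type at all, one common workaround is to store money as integer minor units (cents) and format only at the boundaries. A minimal Python sketch of the idea:

```python
# Store amounts as integer cents; integer arithmetic is exact.
price_cents = 1999   # $19.99
quantity = 3
total_cents = price_cents * quantity

# Convert to dollars-and-cents only when displaying.
dollars, cents = divmod(total_cents, 100)
print(f"${dollars}.{cents:02d}")  # $59.97
```

This sidesteps binary floating point entirely, though division and percentage calculations still need an explicit rounding policy.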