views: 533
answers: 4

Duplicate of: http://stackoverflow.com/questions/803225/when-should-i-use-double-instead-of-decimal (... and many more)

We use the C# and SQL Server decimal datatypes throughout our apps because of their accuracy. We've never had any of those irritating problems where the total doesn't add up to the detail, etc.

I was wondering: why do double and float exist at all, given their inaccuracy?
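
For anyone who hasn't hit the "total doesn't add up" problem, here is a minimal C# sketch of the kind of discrepancy being described (the line-item values are made up for illustration):

    using System;

    class TotalsDemo
    {
        static void Main()
        {
            // Three line items of 0.10 should total 0.30.
            double dTotal = 0.1 + 0.1 + 0.1;
            decimal mTotal = 0.1m + 0.1m + 0.1m;

            Console.WriteLine(dTotal == 0.3);    // False: 0.1 has no exact binary representation
            Console.WriteLine(mTotal == 0.3m);   // True: 0.1 is exact in base 10
            Console.WriteLine(dTotal.ToString("R"));  // typically prints 0.30000000000000004
        }
    }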

A: 

They are much faster than decimal, and very often you don't need the exact precision.

Joel Coehoorn
and they can be more accurate when you don't care about base 10...
Mehrdad Afshari
I would challenge that assertion; I believe that infinite-precision arithmetic is strictly equal to or better than floating point.
Matt J
@Matt J: SQL Server decimal is not arbitrary precision.
Mehrdad Afshari
It depends on the implementation of 'decimal'. Not all types with the name 'decimal' are infinite precision. Some are just like float or double, but with a LOT more bits. Others mimic decimal math but aren't infinite (SQL, for example); instead they have you specify an arbitrary precision within some limit.
Joel Coehoorn
Infinite precision is a red herring anyway. 1/3 could be stored as an infinitely precise number if we stored the numerator and denominator as distinct properties, but the only way we could do something like log(15)^e would be to store the entire formula. It gets worse when we start operating on these formulas with each other. It is practical to approximate... decimals just approximate much better (but perform worse).
Michael Meadows
@Mehrdad @Joel Coehoorn: Touche; that's what I get for commenting w/o looking it up first ;)
Matt J
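
A minimal sketch of the base-10 point from the comments above: neither double nor decimal can store 1/3 exactly, but with IEEE round-to-nearest the double result happens to round back to exactly 1.0, while the decimal result visibly loses its last digit. (The cancellation is a property of this particular example, not a general guarantee.)

    using System;

    class BaseTenDemo
    {
        static void Main()
        {
            double d = (1.0 / 3.0) * 3.0;
            decimal m = (1m / 3m) * 3m;

            Console.WriteLine(d == 1.0);   // True: the binary rounding error cancels on the way back
            Console.WriteLine(m == 1m);    // False: 0.9999999999999999999999999999
        }
    }
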
A: 

The drawback to the decimal datatype is performance.

This post covers it pretty well:

http://stackoverflow.com/questions/329613/decimal-vs-double-speed

brendan
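
If you want to see the gap on your own machine rather than take the linked post's word for it, a rough Stopwatch sketch like the one below will do; the iteration count is an arbitrary choice, and the timings will vary by hardware and runtime.

    using System;
    using System.Diagnostics;

    class DecimalVsDoubleSpeed
    {
        static void Main()
        {
            const int N = 10_000_000;

            double dSum = 0;
            var sw = Stopwatch.StartNew();
            for (int i = 1; i <= N; i++) dSum += 1.0 / i;
            sw.Stop();
            Console.WriteLine($"double:  {sw.ElapsedMilliseconds} ms (sum = {dSum})");

            decimal mSum = 0;
            sw.Restart();
            for (int i = 1; i <= N; i++) mSum += 1m / i;
            sw.Stop();
            Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms (sum = {mSum})");
        }
    }
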
+1  A: 

"decimal" is 128 bits, double is 64 bits and float is 32 bits. Back in the day, that used to matter.

Decimal is mostly for money transactions (to avoid rounding errors); the others are good enough for plenty of things where 29 digits of accuracy has no real-world meaning.

ryansstack
29 significant digits may seem like it has no practical use, but keep in mind that precision decays with every operation. After several mathematical operations, your reliable significant digits may be down to 5, and that's when you start at 29. With 7 (float), you can find yourself down to 1-2 significant digits after a couple of operations.
Michael Meadows
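
Michael's point about precision decay is easy to demonstrate: repeatedly adding 0.1 (which binary floating point cannot represent exactly) lets the error accumulate in float and double, while decimal stays exact. A small sketch:

    using System;

    class PrecisionDecay
    {
        static void Main()
        {
            float f = 0f;
            double d = 0d;
            decimal m = 0m;

            for (int i = 0; i < 1_000_000; i++)
            {
                f += 0.1f;   // rounding error accumulates quickly in 24-bit precision
                d += 0.1;    // accumulates too, just much more slowly
                m += 0.1m;   // 0.1 is exact in decimal, so the sum stays exact
            }

            Console.WriteLine(f);   // drifts well away from 100000
            Console.WriteLine(d);   // very close to 100000, but not exact
            Console.WriteLine(m);   // exactly 100000.0
        }
    }
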
+2  A: 

Floating-point arithmetic arose because it is the only way to operate on a large range of non-integer numbers with a reasonable hardware cost. Infinite-precision arithmetic is implemented in several languages (Python, LISP, etc.) and libraries (Java's BigInteger/BigDecimal, GMP, etc.), and is an alternative for folks who need more accuracy (e.g. the finance industry). For most of the rest of us, who deal with medium-sized numbers, floats or certainly doubles are more than accurate enough.

The two different floating-point datatypes exist (corresponding to IEEE 754 single and double precision, respectively) because a single-precision floating-point unit has much better area, power, and speed properties than a double-precision one, and so hardware designers and programmers should make the appropriate tradeoffs to exploit these different properties.

Matt J
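
A concrete way to see the single- vs double-precision trade-off described above: 2^24 + 1 is the smallest positive integer a float cannot represent exactly, while a double, with its 53-bit significand, handles it with room to spare.

    using System;

    class SingleVsDouble
    {
        static void Main()
        {
            // 16777217 = 2^24 + 1 does not fit in float's 24-bit significand.
            float f = 16777217f;
            double d = 16777217d;

            Console.WriteLine(f == 16777216f);   // True: the float was rounded down
            Console.WriteLine(d == 16777217d);   // True: the double stores it exactly
        }
    }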