It's clear that one shouldn't use floating point when working with, say, monetary amounts, since many decimal fractions have no exact binary representation and the resulting rounding leads to inaccuracies when doing calculations with those amounts.

That said, what are use cases when that is acceptable? And, what are the general principles one should have in mind when deciding?

+2  A: 

I'm guessing you mean "floating point" here. The answer is, basically, any time the quantities involved are approximate, measured, rather than precise; any time the quantities involved are larger than can be conveniently represented precisely on the underlying machine; any time the need for computational speed overwhelms exact precision; and any time the appropriate precision can be maintained without other complexities.

For more details of this, you really need to read a numerical analysis book.

Charlie Martin
+1  A: 

It's appropriate to use floating point types when dealing with scientific or statistical calculations. These will typically have only, say, 3-8 significant digits of accuracy.

As to whether to use single or double precision floating point types, this depends on your need for accuracy and how many significant digits you need. Typically though people just end up using doubles unless they have a good reason not to.

For example if you measure distance or weight or any physical quantity like that the number you come up with isn't exact: it has a certain number of significant digits based on the accuracy of your instruments and your measurements.

For calculations involving anything like this, floating point numbers are appropriate.

Also, if you're dealing with irrational numbers, floating point types are appropriate (and really your only choice), e.g. in linear algebra, where you deal with square roots a lot.

Money is different because you typically need to be exact and every digit is significant.

cletus
Single-precision floating point is about 7 decimal digits of precision, and double-precision floating point is about 16.
tgamblin
So what's your point?
cletus
A: 

From Wikipedia:

Floating-point arithmetic is at its best when it is simply being used to measure real-world quantities over a wide range of scales (such as the orbital period of Io or the mass of the proton), and at its worst when it is expected to model the interactions of quantities expressed as decimal strings that are expected to be exact.

Floating point is fast but inexact. If that is an acceptable trade off, use floating point.

RossFabricant
+1  A: 

I think you should ask the other way around: when should you not use floating point. For most numerical tasks, floating point is the preferred data type, as you can (almost) forget about overflow and other kinds of problems typically encountered with integer types.

One way to look at the floating point data type is that the precision is independent of the magnitude: whether the number is very small or very big (within an acceptable range, of course), the number of meaningful digits is approximately the same.

One drawback is that floating point numbers have some surprising properties: x == x can be False (if x is NaN), and they do not follow most mathematical rules (distributivity fails, that is x*(y + z) != x*y + x*z in general). Depending on the values of x, y, and z, this can matter.

David Cournapeau
The example you gave is distributivity. Associativity is (x + y) + z = x + (y + z).
David Thornley
Actually, the property you've written is distributivity, associativity is a(bc)=(ab)c [similarly for addition]
jpalecek
+1 for pointing out the problem inherent in testing for equality with floating point numbers.
mseery
Geez, thank you for pointing this out, I am ashamed.
David Cournapeau
+3  A: 

Floating point numbers should be used for what they were designed for: computations where what you want is a fixed precision, and you only care that your answer is accurate to within a certain tolerance. If you need an exact answer in all cases, you're best using something else.

Here are three domains where you might use floating point:

  1. Scientific Simulations
    Science apps require a lot of number crunching, and often use sophisticated numerical methods to solve systems of differential equations. You're typically talking double-precision floating point here.

  2. Games
    Think of games as a simulation where it's ok to cheat. If the physics is "good enough" to seem real then it's ok for games, and you can make up in user experience what you're missing in terms of accuracy. Games usually use single-precision floating point.

  3. Stats
    Like science apps, statistical methods need a lot of floating point. A lot of the numerical methods are the same; the application domain is just different. You find a lot of statistics and monte carlo simulations in financial applications and in any field where you're analyzing a lot of survey data.

Floating point isn't trivial, and for most business applications you really don't need to know all these subtleties. You're fine just knowing that you can't represent some decimal numbers exactly in floating point, and that you should be sure to use some decimal type for prices and things like that.

If you really want to get into the details and understand all the tradeoffs and pitfalls, check out the classic What Every Computer Scientist Should Know About Floating-Point Arithmetic, or pick up a book on Numerical Analysis or Applied Numerical Linear Algebra if you're really adventurous.

tgamblin
+1  A: 

Most real-world quantities are inexact, and typically we know their values to much less precision than a typical floating-point type can represent. In almost all cases, the C types float and double are good enough.

It is necessary to know some of the pitfalls. For example, testing two floating-point numbers for equality is usually not what you want, since all it takes is a single bit of inaccuracy to make the comparison non-equal. tgamblin has provided some good references.

The usual exception is money, which is calculated exactly according to certain conventions that don't translate well to binary representations. Part of this is the constants used: you'll never see a pi% interest rate, or a 22/7% interest rate, but you might well see a 3.14% interest rate. In other words, the numbers used are typically expressed in exact decimal fractions, not all of which are exact binary fractions. Further, the rounding in calculations is governed by conventions that also don't translate well into binary. This makes it extremely difficult to precisely duplicate financial calculations with standard floating point, and therefore people use other methods for them.

David Thornley
+1  A: 

The short story is that if you need exact calculations, DO NOT USE floating point.

Don't use floating point numbers as loop counters. Don't get caught doing:

for (d = 0.1; d < 1.0; d += 0.1)
{  /* Some Code... */ }

You will be surprised.

Don't use floating point numbers as keys to any sort of map because you can never count on equality behaving like you may expect.

Lou