views:

180

answers:

7

Yesterday I asked a floating point question, and I have another one. I am doing some computations where I use the results of the math.h (C language) sine, cosine and tangent functions.

One of the developers muttered that you have to be careful with the return values of these functions, and that I should not make assumptions about the return values of the gcc math functions. I am not trying to start a discussion, but I really want to know what I need to watch out for when doing computations with the standard math functions.

+2  A: 

You should not assume that the values returned will be consistent to high degrees of precision between different compiler/stdlib versions.

That's about it.

Andrew Grant
This is what I remember being said. So if I want greater accuracy, should I use another math library, and if so, which one?
Xofo
If you want to ensure that different compilers will provide exactly the same values, then you should roll your own sin/cos/tan functions instead of using the stdlib ones provided by the vendor, or use a third-party library that does this. Naturally these functions cannot call other stdlib routines. Alternatively, you could generate a set of trig tables stored as a data file, and your sin/cos/tan routines would be table lookups. I would ask, though, whether this is really necessary? It's something I've had to do in the past, but it's certainly not common.
Andrew Grant
There's very little point rolling your own. Netlib is pretty standard http://www.netlib.org/fdlibm/
Pete Kirkham
Rolling your own would be gratuitous. There are a number of pretty good libms that are freely available, and there's not really any excuse for not using one of them if you want to have a libm rolled into your project. If you need bitwise reproducible results across platforms, your best bet is probably crlibm, which delivers correctly-rounded results subject to some basic assumptions about the compiler it is built with.
Stephen Canon
A: 

The problem isn't with the standard math functions, so much as the nature of floating point arithmetic.

Very short version: don't compare two floating point numbers for equality, even with obvious, trivial identities like 10 == 10 / 3.0 * 3.0 or tan(x) == sin(x) / cos(x).

David Seiler
A: 

Floating point is straightforward. Just always remember that there is an uncertainty component to all floating point operations and functions. It is usually modelled as being random, even though it usually isn't, but if you treat it as random, you'll succeed in understanding your own code. For instance:

a=a/3*3;

This should be treated as if it was:

a=(a/3+error1)*3+error2;

If you want an estimate of the size of the errors, you need to dig into each operation/function to find out. Different compilers, parameter choices etc. will yield different values. For instance, computing 0.09 - 0.089999 on a system with 5 digits of precision will yield an error somewhere between -0.000001 and 0.000001. This error is comparable in size to the actual result.

If you want to learn how to do floating point as precisely as possible, it's a field of study in its own right.

Lars D
Not only is it never random, but your answer really provides no valuable information.
rlbond
This is how it is taught at university, if you study Numerical Analysis, which is basically the mathematics of floating point calculations on computers. Audio engineers wouldn't call them random values, they would call it "noise", but it's the same idea. My post delivers the information that enables programmers to understand where the errors happen in floating point code, and how big the errors are. By tying the size of the random values to the choice of compiler, the original question is answered.
Lars D
In an IEEE-754 environment, there is nothing random about floating-point. Floating-point is not black magic. The presence of rounding does not imply non-determinism.
Stephen Canon
There is nothing actually random about any particular floating-point calculation. However, the inaccuracies are hard to predict, and since they vary depending on the input it's often hard to say anything useful and deterministic about them. Therefore, it can be useful to model it as random variation. Similarly, while it's almost certainly possible to predict a die roll given enough information, it's generally more useful to consider it random.
David Thornley
If you repeatedly roll a die, you will get different numbers; if you repeatedly add two floating-point numbers, you will get the same result every time. While it is tricky to compute the size of a rounding exactly, it is fairly straightforward to model without resorting to some sort of stochastic model. In particular, the usual backwards error analysis taught in school does not contain any randomness at all.
Stephen Canon
@Stephen and David: If I calculate 2.0/2.1 on a system with 15 digits of precision, what is the error in the calculation? I would say some kind of unspecified value between -1e-15 and 1e-15. Maybe it is zero, maybe it is negative, and maybe it is 1e-15. I don't know. To me, it's just a small random number. I could make some investigations and find out, but usually I don't do that. I just assume that there is an error between -1e-15 and 1e-15.
Lars D
+1  A: 

You should not expect sin(PI/6) to be equal to cos(PI/3), for example. Nor should you expect asin(sin(x)) to be equal to x, even if x is in [-PI/2, PI/2], where the identity holds mathematically. They will be close, but might not be equal.

erikkallen
A: 

You should take care about precision:

  • the structure of a floating-point number
  • whether you are on a 32-bit or a 64-bit platform
  • you should read the IEEE Standard for Binary Floating-Point Arithmetic (IEEE 754)
  • there are some interesting arbitrary-precision libraries, such as GMP or MPFR
  • you should learn how to compare floating-point numbers
  • etc ...
Nadir SOUALEM
A: 

Agreed with all of the responses that say you should not compare for equality. What you can do, however, is check if the numbers are close enough, like so:

if (fabs(numberA - numberB) < CLOSE_ENOUGH)  /* fabs from <math.h>; abs() is for ints */
{
  // Equal for all intents and purposes
}

Where CLOSE_ENOUGH is some suitably small floating-point value.

qid