views:

140

answers:

3

We know that compilers are getting better and better at optimising our code and making it run faster, but my question is: are there compilers that can optimise floating point operations to ensure greater accuracy?

For example, a basic rule is to perform multiplications before additions. The idea is that multiplication and division of floating point numbers do not introduce inaccuracies as great as those of addition and subtraction, but they can magnify the inaccuracies already introduced by addition and subtraction, so in many cases the multiplication should be done first.

So a floating point operation like

y = x*(a + b); // faster but less accurate

Should be changed to

y = x*a + x*b; // slower but more accurate

Are there any compilers that will optimise for improved floating point accuracy at the expense of speed, like I showed above? Or is the main concern of compilers speed, without looking at the accuracy of floating point operations?

Thanks

Update: The selected answer showed a very good example where this type of optimisation would not work, so it wouldn't be possible for the compiler to know beforehand which is the more accurate way to evaluate y. Thanks for the counter example.

+8  A: 

Your premise is faulty. x*(a + b) is (in general) no less accurate than x*a + x*b. In fact, it will often be more accurate, because it performs only two floating point operations (and therefore incurs only two rounding errors), whereas the latter performs three operations.

If you know something about the expected distribution of values for x, a, and b a priori, then you could make an informed decision, but compilers almost never have access to that type of information.

That aside, what if the person writing the program actually meant x*(a+b) and specifically wanted the exact roundings that are produced by that particular sequence of operations? This sort of thing is actually pretty common in high-quality numerical algorithms.

Better to do what the programmer wrote, not what you think he might have intended.

Edit -- An example to illustrate a case where the transformation you suggested results in a catastrophic loss of accuracy: suppose

x = 3.1415926535897931
a = 1.0e15
b = -(1.0e15 - 1.0)

Then, evaluating in double we get:

x*(a + b) = 3.1415926535897931

but

x*a + x*b = 3.0

In fairness, I suppose that certain conservative christians might be happy with this "optimization for accuracy".
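
For reference, a minimal C program (assuming IEEE 754 double arithmetic, which essentially every modern platform provides) that reproduces these values:

#include <stdio.h>

int main(void) {
    double x = 3.1415926535897931;
    double a = 1.0e15;
    double b = -(1.0e15 - 1.0);     /* a + b is exactly 1.0 in double */

    /* two operations, both exact for these particular values */
    printf("x*(a + b) = %.17g\n", x * (a + b));    /* prints 3.1415926535897931 */

    /* three operations; the two huge products cancel catastrophically */
    printf("x*a + x*b = %.17g\n", x * a + x * b);  /* prints 3 */

    return 0;
}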

Stephen Canon
My premise is not faulty: the first example has two additions and one multiplication, while the second has two additions and two multiplications. Multiplications in FP operations introduce no inaccuracies. That is a fact :) However, addition of FP numbers with greatly different magnitudes does introduce inaccuracies, so performing the multiplication after these inaccuracies are introduced increases their magnitude, as opposed to doing it beforehand.
hhafez
Also, your point about "better to do what the programmer writes, not what you think he might have intended" is true sometimes but not always; that is the whole premise of optimisation. The compiler does something equivalent but faster; however, "equivalent" is not truly 100% equivalent, as we know, otherwise it would not be faster. So to say "always do what the programmer says" means turning off all optimisation. Turning off optimisation is valid in many cases, but as a general rule, that can be disputed.
hhafez
When talking about optimization, we usually define "equivalent" to mean "produces the exact same result". This transformation does not - hence it's not "equivalent".
Anon.
The claim that floating point multiplication never incurs rounding is just blatantly false.
Stephen Canon
+1. The premise is faulty unless you know the expected magnitudes of the variables. More operations means more truncation or rounding (even with multiplication). I'm not sure why hhafez claims there are two additions in each example.
Adrian McCarthy
Your (OP that is) premise is indeed faulty. Multiplication and division of FP numbers do introduce inaccuracy. I think you may be confusing accuracy and precision. If accuracy is very important then you have to resort to measures such as interval arithmetic or use high-precision arithmetic as implemented by various libraries and in Mathematica and its ilk.
High Performance Mark
Point taken, but I can give a counter example. I guess what you have demonstrated is that there are no golden rules.
hhafez
Yes, there are cases where (for example), `x*a + x*b` delivers the correctly rounded result, while `x*(a + b)` differs by an ulp in the last place. That's why I used the phrase "often be more accurate", not "always be more accurate".
Stephen Canon
@Adrian, Thanks for pointing that out, it is one addition
hhafez
A: 

No, there isn't. Stephen Canon gives some good reasons why this would be a stupid idea, and he's correct; so you won't find a compiler that does this.

If you as the programmer have some knowledge about the ranges of numbers you're manipulating, you can use parentheses, temporary variables and similar constructs to strongly hint to the compiler about how you want things done.
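
For instance (a hypothetical sketch using the question's variables; under the default strict floating point model, i.e. without options like -ffast-math, the compiler must respect this grouping):

double sum = a + b;   /* rounded once; the grouping you actually want */
double y   = x * sum; /* the compiler may not rewrite this as x*a + x*b */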

Carl Smotricz
A: 

Compilers typically "optimize" for accuracy over speed, accuracy defined as exact implementation of the IEEE 754 standard. Whereas integer operations can be reordered in any way that doesn't cause overflow, FP operations need to be performed exactly as the programmer specifies. This may sacrifice numerical accuracy (ordinary C compilers are not equipped to optimize for that) but faithfully implements what the programmer asked for.

A programmer who is sure he hasn't manually optimized for accuracy may enable compiler features like GCC's -funsafe-math-optimizations and -ffinite-math-only to possibly extract extra speed. But usually there isn't much gain.
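
A small sketch of what those flags change (the flag names are GCC's; the function name is just an illustration, and exact behaviour is compiler- and target-dependent):

/* With plain `gcc -O2`, this expression is evaluated as written: the
   addition first, then the multiplication.  With
   `gcc -O2 -funsafe-math-optimizations` (or -ffast-math) the compiler is
   allowed to re-associate or distribute it, e.g. into x*a + x*b, if that
   is faster on the target, even though the rounding may then differ. */
double scale_sum(double x, double a, double b)
{
    return x * (a + b);
}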

Potatoswatter