views:

88

answers:

3

I just recently came across the Kahan (or compensated) summation algorithm for minimizing roundoff, and I'd like to know if there are equivalent algorithms for division and/or multiplication, as well as subtraction (if there happens to be one, I know about associativity). Implementation examples in any language, pseudo-code or links would be great!

Thanks

+1  A: 

Subtraction is usually handled via the Kahan method.
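
For reference, a minimal sketch of Kahan summation (not code from this thread): a subtraction a - b is handled by simply feeding -b into the same compensated loop.

```c
#include <stddef.h>

/* Kahan (compensated) summation: carries a running correction term so
   low-order bits lost in each addition are recovered on the next step. */
double kahan_sum(const double *x, size_t n) {
    double s = 0.0;                 /* running sum */
    double c = 0.0;                 /* running compensation for lost low bits */
    for (size_t i = 0; i < n; i++) {
        double y = x[i] - c;        /* apply the correction from the last step */
        double t = s + y;           /* low-order bits of y may be lost here */
        c = (t - s) - y;            /* algebraically zero; recovers the lost bits */
        s = t;
    }
    return s;
}
```

Summing ten copies of 0.1 this way gives a result much closer to 1.0 than a naive loop does.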

For multiplication, there are algorithms to convert a product of two floating-point numbers into a sum of two floating-point numbers without rounding, at which point you can use Kahan summation or some other method, depending on what you need to do next with the product.

If you have FMA (fused multiply-add) available, this can easily be accomplished as follows:

double p = a*b;
double r = fma(a, b, -p);   /* fma( ) from <math.h>; r is the rounding error of a*b */

After these two operations, if no overflow or underflow occurs, p + r is exactly equal to a * b without rounding. This can also be accomplished without FMA, but it is rather more difficult. If you're interested in these algorithms, you might start by downloading the crlibm documentation, which details several of them.
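
Without FMA, the same product-to-sum decomposition can be done with Veltkamp splitting and Dekker's product. The sketch below illustrates the idea (crlibm's Mul12 is built on the same technique; the names here are mine, and the result is exact only in round-to-nearest with no overflow or underflow):

```c
/* Split a double into high and low halves of at most 26 significant bits each,
   so that products of halves are exact in double precision. */
static void split(double a, double *hi, double *lo) {
    double c = 134217729.0 * a;   /* 2^27 + 1, Veltkamp's splitter for doubles */
    *hi = c - (c - a);
    *lo = a - *hi;
}

/* Dekker's product: a*b == p + r exactly (round-to-nearest, no over/underflow). */
static void two_product(double a, double b, double *p, double *r) {
    double ah, al, bh, bl;
    *p = a * b;
    split(a, &ah, &al);
    split(b, &bh, &bl);
    *r = ((ah * bh - *p) + ah * bl + al * bh) + al * bl;
}
```

When the product is already exact (say, 3.0 * 4.0), the error term r comes out as zero; for an inexact product such as 0.1 * 0.3 it captures the nonzero rounding error.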

Division... well, division is best avoided. Division is slow, and compensated division is even slower. You can do it, but it's brutally hard without FMA, and non-trivial with it. Better to design your algorithms to avoid it as much as possible.

Note that all of this becomes a losing battle pretty quickly. There's a very narrow band of situations where these tricks are beneficial--for anything more complicated, it's much better to just use a wider-precision floating point library like mpfr. Unless you're an expert in the field (or want to become one), it's usually best to just learn to use such a library.

Stephen Canon
AFAIK, using wider types is just a band-aid and will help very little if your problem is ill-conditioned or your algorithm is not stable.
Michael Borgwardt
@Michael Borgwardt: that depends on the problem. If your problem is ill-conditioned, then either your input data is exact (in which case increasing precision will actually help once you cross some threshold), or the answer is meaningless anyway (in which case it doesn't matter). If your algorithm is unstable, it depends on the exact nature of the instability. More generally, however, any issue that can be fixed by using compensated arithmetic can also be resolved by using wider types.
Stephen Canon
As curious as I am, as much as I'd love to know everything about everything, I know I'll have to settle on using libraries and the like for now, thanks very much for your answer, I'll take a look at it asap. :)
Geoff
Hmm... The fact that I don't yet know how to program in assembly (it's on my list of things to learn) rather doesn't help if I want to use FMA (I looked it up). Is there an implementation somewhere that I can use for it? (Or a code snippet using the FMA instruction, with explanations?)
Geoff
@Geoff: If you're using most commodity hardware (x86, ARM), you don't have an `fma` instruction available anyway, so there's not too much point. If you're using PPC or Itanium or one of the other more exotic architectures that have `fma`, you can generally get at it directly with a compiler builtin in C, or the `fma( )` function defined in `<math.h>`.
Stephen Canon
Note that you can still perform an fma using the `fma( )` function in `<math.h>` on any C99-compliant system, but it will be slow. You're better off using other methods to do compensated arithmetic on such systems (the crlibm documentation is a pretty good reference for this).
Stephen Canon
Of course, neither the implementation of the standard library I happen to be using right now (msvc9, but with the intent of being portable to Linux) nor boost supports std::fma. Sigh. Compensated arithmetic? I did a quick search on that term and didn't come up with much. Also, http://lipforge.ens-lyon.fr/frs/shownotes.php?release_id=123 gives me the impression it won't be of much help for fma... Is there a better search expression I could use that might help me? At worst, I could use C++ (or pseudo)-code for it (haven't found it yet).
Geoff
@Geoff: the crlibm notes don't really go into how to simulate an `fma`, but they discuss a number of procedures that let you work around the lack of one. In particular, you might be interested in **Mul12( )**, which does the same "express a product as a sum without rounding" as I discussed without the use of `fma`.
Stephen Canon
So the listed algorithm on page 15 is suitable as-is then? ( https://lipforge.ens-lyon.fr/frs/download.php/16/crlibm-0.8beta3.pdf )
Geoff
@Geoff: correct, with the caveat that it is only exact in the default round-to-nearest rounding mode.
Stephen Canon
Hmm, well, I guess I could make use of it as long as I add a note that it requires the default round-to-nearest rounding mode to be correct. Another quick question, though: to improve accuracy further, does extending the precision of intermediate computations (e.g. a function takes a double and returns a double, but uses long doubles for temporary variables inside the function, casting back to double only for the return value) always work regardless of circumstances, because casting to a smaller type truncates? I noticed this trick could be used with Kahan's summation algorithm.
Geoff
@Geoff: Casting to a smaller floating-point type rounds; it doesn't truncate. Generally speaking, that approach will get you somewhat more accuracy, but there are special circumstances in which it can result in a loss of accuracy because of the additional rounding that occurs when you convert back to double (those cases are rare, however).
Stephen Canon
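
As an illustration of the trick under discussion (an assumed sketch, not code from the thread): only the intermediates are widened, and the final cast back to double rounds to nearest rather than truncating.

```c
/* Accumulate in long double, return double. On platforms where long double
   is wider than double this usually gains accuracy, but the extra rounding
   step on conversion back can occasionally cost a bit (double rounding). */
double sum_wide(const double *x, unsigned n) {
    long double s = 0.0L;           /* wider intermediate */
    for (unsigned i = 0; i < n; i++)
        s += x[i];
    return (double)s;               /* rounds (does not truncate) back to double */
}
```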
That makes sense. What kinds of cases might lead to a loss of accuracy due to rounding via conversion? Is there a particular example you could give? How rare are these cases, are they as rare as overflow might be?
Geoff
I forgot to add: is the loss of accuracy more problematic than if I don't extend the precision as I've described?
Geoff
+1  A: 

Designing algorithms to be numerically stable is an academic discipline and field of research in its own right. It's not something you can do (or learn) meaningfully via "cheat sheets" - it requires specific mathematical knowledge and needs to be done for each specific algorithm. If you want to learn how to do this, the reference in the Wikipedia article sounds pretty good: Nicholas J. Higham, Accuracy and Stability of Numerical Algorithms, Society for Industrial and Applied Mathematics, Philadelphia, 1996. ISBN 0-89871-355-2.

A relatively simple way to diagnose the stability of an algorithm is to use interval arithmetic.

Michael Borgwardt
The goal of compensated arithmetic is not typically improving stability. More often it is to satisfy a hard accuracy bound for a stable algorithm.
Stephen Canon
Interesting, thanks for the links I'll take a look at them when I get a chance!
Geoff
@Stephen, Ah. I didn't know there was a difference. :S Still interesting though. :)
Geoff
A: 

You could use bignums and rational fractions rather than floating-point numbers, in which case you are limited only by the finite availability of memory to hold the required precision.
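
For instance, exact rational arithmetic can be sketched with a simple normalized fraction type (a toy version using 64-bit integers; a real implementation would use arbitrary-precision integers, e.g. GMP's mpq_t, to avoid overflow):

```c
typedef struct { long long n, d; } rat;   /* value = n / d, with d > 0 */

static long long gcd_ll(long long a, long long b) {
    if (a < 0) a = -a;
    if (b < 0) b = -b;
    while (b) { long long t = a % b; a = b; b = t; }
    return a;
}

/* Reduce to lowest terms with a positive denominator. */
static rat rat_norm(rat r) {
    long long g = gcd_ll(r.n, r.d);
    if (g) { r.n /= g; r.d /= g; }
    if (r.d < 0) { r.n = -r.n; r.d = -r.d; }
    return r;
}

static rat rat_add(rat a, rat b) {
    return rat_norm((rat){ a.n * b.d + b.n * a.d, a.d * b.d });
}
```

Summing 1/10 ten times this way yields exactly 1/1, with no rounding at any step.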

John
performance may be an issue though
jk
Not an option, the solver is already memory-bound. Thanks though.
Geoff