Has arbitrary-precision arithmetic affected numerical analysis software?

I feel that most numerical analysis software keeps on using the same floats and doubles.

If I'm right, I'd love to know the reason, as in my opinion some calculations can benefit from arbitrary-precision arithmetic, particularly when it is combined with rational number representation, as is done in the GNU Multiple Precision Arithmetic Library (GMP).

If I'm wrong, examples would be nice.

+6  A: 

Arbitrary precision is slow. Very slow. And the moment you use a function that produces an irrational value (such as most trig functions), you lose your arbitrary precision advantage.

So if you don't need, or can't use that precision, why spend all that CPU time on it?
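
For a rough sense of the speed gap, here is a minimal Python sketch (the standard decimal and fractions modules stand in for an arbitrary-precision library; the 200-digit precision and the loop size are arbitrary choices):

    import timeit
    from decimal import Decimal, getcontext
    from fractions import Fraction

    getcontext().prec = 200  # arbitrary working precision for the Decimal run

    def sum_floats(n=2000):
        # hardware doubles
        s = 0.0
        for i in range(1, n):
            s += 1.0 / i
        return s

    def sum_decimals(n=2000):
        # software arbitrary-precision decimal floats
        s = Decimal(0)
        for i in range(1, n):
            s += Decimal(1) / Decimal(i)
        return s

    def sum_fractions(n=2000):
        # exact rationals; numerators and denominators grow without bound
        s = Fraction(0)
        for i in range(1, n):
            s += Fraction(1, i)
        return s

    for f in (sum_floats, sum_decimals, sum_fractions):
        print(f.__name__, timeit.timeit(f, number=1), "seconds")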

bdonlan
This was my gut feeling too
Draemon
Yup. A lot of numerical analysis is not so much about getting the answer as getting it fast. It's used on some awfully big problems, and slowing it down by an order of magnitude or more isn't seen as feasible.
David Thornley
Thank you for your answer. I assume speed is one factor, but I suspect that with today's fast computers and a very optimized library (GMP), it isn't very slow. I believe that with irrational values you don't lose the arbitrary-precision advantage, as you can store them as precisely as you want, more precisely than with a fixed-precision type. A better yet more complicated solution to irrational numbers arising from such trig functions could be to use symbolic computation.
Liran Orevi
Symbolic computation isn't all-powerful, and although gmp may be well optimized, there's a world of difference between doing things in the integer unit one base-2^32 digit at a time, and doing things in the hardware floating-point unit.
bdonlan
Also, it is not the rational/irrational distinction which poses the issue, but the algebraic/transcendental distinction. One could easily compute sqrt(sqrt(2)); however, sqrt(sqrt(Pi))??? For work on full-precision algebraic computation, see the page of Chee Yap at Courant. As for the OP: 1) It is slow. 2) In science, we accept finite precision, so how and why can we expect better from a computer?
SplittingField
"And the moment you use a function that produces an irrational value (such as most trig functions), you lose your arbitrary precision advantage". Err, no you don't.
Jon Harrop
@Jon Arbitrary precision math generally means representing the non-integer values as a rational of two bigints. It is impossible to represent all irrational numbers in a single format, however. Thus you will lose accuracy once you start working with irrational numbers, as they must be made to fit in a rational or floating-point representation. The only workaround is to do symbolic manipulation instead, but that's fairly limited, and you'll have to fall back to lossy arithmetic when you compute the final answer.
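
To make that concrete, a short Python sketch (standard-library Fraction and Decimal standing in for GMP-style bignum rationals; the 50-digit precision is an arbitrary choice):

    from decimal import Decimal, getcontext
    from fractions import Fraction

    # Rationals are exact under +, -, *, / ...
    third = Fraction(1, 3)
    print(third + third + third == 1)     # True: no rounding anywhere

    # ...but no rational equals sqrt(2), so any stored value is only an approximation
    getcontext().prec = 50                 # arbitrary working precision
    approx = Fraction(Decimal(2).sqrt())   # exact rational equal to a 50-digit approximation
    print(approx ** 2 == 2)                # False: the irrational value did not fit the format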
bdonlan
@bdonlan: Arbitrary-precision rationals are just one form of arbitrary-precision arithmetic. A more relevant form in this context is arbitrary-precision float arithmetic. For example, you can evaluate an expression using graph reduction with backpropagation of errors to obtain a result to any finite accuracy. This is not exact arithmetic, but the presence of irrational numbers does not "lose the arbitrary precision advantage" in any meaningful sense, because you still get the result correct to any arbitrary precision you want.
Jon Harrop
@Jon can this produce arbitrary irrationals, however? It would seem difficult to evaluate a Taylor series to infinite precision in order to compute expressions containing sine and friends. Also, do you have some references on this technique?
bdonlan
+1  A: 

If you look at programs like Mathematica, I strongly suspect you'd find that they do not use floats and doubles for their work. If you look at cryptography, you will definitely find that they do not use floats and doubles (but they are mainly working with integers anyway).

It is basically a judgement call. The people who feel that their product will benefit from increased accuracy and precision use extended-precision or arbitrary-precision arithmetic software. Those who don't think the precision is needed won't use it.

Jonathan Leffler
Yes, according to http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic some computer algebra software does use arbitrary-precision arithmetic (at least to some degree).
Liran Orevi
+1  A: 

Arbitrary precision doesn't work well with irrational values. I think flipping everything upside down would help numerical analysis software: instead of figuring out what precision is needed for the calculation, you should tell the software what you want the final precision to be and it will figure everything out.

This way it can use a finite precision type just large enough for the calculation.
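
As a rough sketch of that idea in Python, using the standard decimal module as a stand-in (the precision-doubling strategy and the cap are arbitrary choices, not a rigorous error analysis):

    from decimal import Decimal, localcontext

    def eval_to(expr, digits, max_working=1000):
        """Raise the working precision until the result, rounded to `digits`
        significant digits, stops changing. A heuristic, not a guarantee."""
        previous = None
        working = digits + 5                      # arbitrary starting margin
        while working <= max_working:
            with localcontext() as ctx:
                ctx.prec = working
                value = expr()
            with localcontext() as ctx:
                ctx.prec = digits
                rounded = +value                  # unary plus re-rounds to the target precision
            if rounded == previous:
                return rounded
            previous, working = rounded, working * 2
        raise ArithmeticError("did not stabilise within the working-precision limit")

    # Ask for 30 correct digits of exp(1) / sqrt(2); the caller never picks a working precision
    print(eval_to(lambda: Decimal(1).exp() / Decimal(2).sqrt(), 30))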

Pyrolistical
Just to reiterate, arbitrary precision computation can be done for algebraic numbers (such as sqrt(2)). The problem comes in when we want to compute transcendental numbers, such as sqrt(Pi).
SplittingField
A: 

It's very rare that you need an exact answer to a numerical problem - it's almost always the case that you need the result to some given accuracy. It's also the case that operations are most efficient if performed by dedicated hardware. Taken together that means that there is pressure on hardware to provide implementations that have sufficient accuracy for most common problems.

So economic pressure has created an efficient (i.e. hardware-based) solution for the common cases.

andrew cooke
+4  A: 

Has arbitrary-precision arithmetic affected numerical analysis software? I feel that most numerical analysis software keeps on using the same floats and doubles.

There are several unfortunate reasons that arbitrary-precision (ap) is not used more extensively.

  • Lack of support for important features: no NaN/infinity values, no complex numbers or special functions, missing or buggy rounding modes (round half-even is not implemented in GMP), and no handlers for important events (loss of significant digits, overflow, underflow... OK, this isn't even implemented in most standard libraries). Why is this important? Because without them you must invest a lot of energy to formulate your problem in arbitrary precision (ever written a complex-number library or special functions in ap?), and you can't reproduce your double results because ap lacks the features you need to track the changes.

  • 99.9% of all programmers aren't interested in numerics at all. One of the most frequently asked questions here is: "Why is 0.1+0.1 NOT 0.2 ???? HELP !!!" (see the sketch after this list). So why should programmers invest time to learn a specific ap implementation and formulate their problem in it? If your ap results diverge from the double results and you have no knowledge of numerics, how do you find the bug? Is double precision too inexact? Does the ap library have a bug? WHAT IS GOING ON?! Who knows...

  • Many numerics experts who do know how to compute discourage the use of ap. Frustrated by the hardware implementations of FP, they insist that reproducibility is "impossible" to implement anyway and that input data almost always has only a few significant digits. So they mostly analyze the precision loss and rewrite the critical routines to minimize it.

  • Benchmark addiction. Wow, my computer is FASTER than others. As the other commenters rightly remarked, ap is much slower than hardware-supported floating-point datatypes because you must implement it by hand on top of the integer datatypes. One of the imminent dangers of this attitude is that programmers, totally unaware of the problems, choose solutions that spit out impressive-looking nonsense numbers. I am very cautious about GPGPU. Sure, graphics cards are much, much faster than the processor, but the reason for that is less precision and accuracy. If you use floats (32-bit) instead of doubles (64-bit), you have far fewer bits to compute and to transfer. The human eye is very fault-tolerant, so it does not matter if one or two results are off. Heck, as a hardware designer you can use imprecise, badly rounded computations to speed things up (which is really OK for graphics). Throw out that pesky subnormal handling or those rounding modes. There is a very good reason why CPUs aren't as fast as GPUs.
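
A small Python sketch of the first two points, the 0.1-style surprise and an explicit rounding mode such as round half-even (the standard decimal module stands in for an ap library here; the chosen precision is an arbitrary illustration):

    from decimal import Decimal, getcontext, ROUND_HALF_EVEN

    # The classic binary floating-point surprise behind questions like the one quoted above
    print(0.1 + 0.2 == 0.3)                       # False: 0.1, 0.2 and 0.3 are not exact in binary
    print(f"{0.1 + 0.2:.20f}")                    # 0.30000000000000004441

    # Decimal arithmetic at a chosen precision, with an explicit rounding mode
    getcontext().prec = 30                        # arbitrary working precision
    getcontext().rounding = ROUND_HALF_EVEN       # banker's rounding: halfway cases go to the even digit
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True

    # Round half-even in action on halfway cases
    print(Decimal("2.5").quantize(Decimal("1")))  # 2
    print(Decimal("3.5").quantize(Decimal("1")))  # 4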

I can recommend William Kahan's page for some information about the problems in numerics.

Thorsten S.
Thank you for your answer. How can you get loss of significant digits, overflow, or underflow with arbitrary precision?
Liran Orevi
Oh, that's easy. Overflow: imagine arctangent(1/x), which is 90° or pi/2 for x == 0. double can handle that by returning positive infinity, which is a valid argument for arctangent, so you need overflow and infinities in ap to handle these cases. Loss of significant digits happens with algebraic (sqrt!) and transcendental expressions: having infinitely many digits, you must round them, and if you subtract numbers which are very close together (e.g. in numerical differentiation), the loss of significant digits sometimes cannot be prevented. exp(-x) can easily cause underflow even with ap.
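
A short Python illustration of those two effects (the step sizes and the 30-digit precision are arbitrary choices):

    import math
    from decimal import Decimal, getcontext

    # Catastrophic cancellation: forward-difference derivative of sin at x = 1.
    # The true value is cos(1) = 0.5403023058681398...
    x, h = 1.0, 1e-13
    print((math.sin(x + h) - math.sin(x)) / h)     # only a few leading digits survive the subtraction

    # More working digits postpone the problem but do not remove it:
    getcontext().prec = 30                          # arbitrary working precision
    xd, hd = Decimal(1), Decimal("1e-25")
    print(((xd + hd).exp() - xd.exp()) / hd)        # should be e = 2.71828...; only ~5 digits are trustworthy

    # Underflow: exp(-x) for large x silently becomes zero in doubles
    print(math.exp(-800))                           # 0.0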
Thorsten S.
@Thorsten, Thank you for your very informative reply.
Liran Orevi
+1  A: 

Wolfram Research put a huge amount of effort into getting arbitrary-precision interval arithmetic into the core of Mathematica in a pragmatic way, and they did an excellent job. Mathematica will transparently do almost any computation to arbitrary precision.

Jon Harrop
+1  A: 

This paper by Dirk Laurie presents a cautionary tale on the use of variable precision.

J. M.
Thank you @J.M., an interesting paper indeed. (The jury has spoken.)
Liran Orevi