Hi!

Many numerical algorithms tend to run on 32/64-bit floating-point numbers.

However, what if you had access to lower-precision (and less power-hungry) co-processors? How can they then be utilized in numerical algorithms?

Does anyone know of good books/articles that address these issues?

Thanks!

+1  A: 

Numerical analysis theory provides methods to predict the precision error of operations, independent of the machine they run on. There are always cases where operations lose accuracy, even on the most advanced processors.
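
To make this concrete, here is a minimal Python sketch (not from the original answer; it assumes NumPy is available) of catastrophic cancellation, the textbook case where a single subtraction loses most of its significant digits regardless of the hardware:

    import numpy as np

    # Two nearby numbers stored in 32-bit precision; each carries a relative
    # representation error of roughly 1e-7 (the float32 unit roundoff).
    a = np.float32(1.0002)
    b = np.float32(1.0001)

    # The true difference is 1e-4, but the leading digits cancel, so the stored
    # representation errors dominate: the relative error of the result ends up
    # on the order of 1e-3 rather than 1e-7.
    diff = a - b
    print(diff)
    print(abs(float(diff) - 1e-4) / 1e-4)

Exactly this kind of analysis, bounding the error of individual operations and of whole algorithms, is what the books below work through.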

Some books to read about it:

Accuracy and Stability of Numerical Algorithms by N.J. Higham

An Introduction to Numerical Analysis by E. Süli and D. Mayers

If you can't find them or are too lazy to read them, tell me and I will try to explain some things to you. (Well, I'm no expert in this because I'm a computer scientist, but I think I can explain the basics to you.)

I hope you understand what I wrote (my English is not the best).

George B.
A: 

Hi

Most of what you are likely to find will be about doing floating-point arithmetic on computers irrespective of the size of the representation of the numbers themselves. The basic issues surrounding f-p arithmetic apply whatever the number of bits. Off the top of my head, these basic issues are:

  • range and accuracy of numbers that are represented;
  • careful selection of algorithms which are robust and reliable on f-p numbers rather than on real numbers;
  • the perils and pitfalls of iterative and lengthy calculations in which you run the risk of losing precision and accuracy (see the sketch after this list).
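
To illustrate the last two points, here is a minimal Python sketch (not part of the original answer; it assumes NumPy) comparing a naive running sum with Kahan compensated summation, both carried out in 32-bit arithmetic:

    import numpy as np

    def naive_sum(xs):
        s = np.float32(0.0)
        for x in xs:
            s = np.float32(s + x)        # rounding error accumulates every step
        return s

    def kahan_sum(xs):
        s = np.float32(0.0)
        c = np.float32(0.0)              # running compensation for lost low-order bits
        for x in xs:
            y = np.float32(x - c)
            t = np.float32(s + y)
            c = np.float32((t - s) - y)  # recover what the last addition dropped
            s = t
        return s

    xs = np.full(1_000_000, 0.1, dtype=np.float32)
    reference = np.sum(xs, dtype=np.float64)   # accumulate in double as a reference

    print(abs(naive_sum(xs) - reference))      # error grows with the number of terms
    print(abs(kahan_sum(xs) - reference))      # stays close to a single rounding error

The point is not this particular trick but the habit behind it: pick the formulation of the algorithm with its floating-point behaviour in mind, not its real-number behaviour.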

In general, the fewer bits you have the sooner you run into problems, but just as there are algorithms which are useful in 32 bits, there are algorithms which are useful in 8 bits. Sometimes the same algorithm is useful however many bits you use.
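
One well-known pattern for putting low-precision arithmetic to work while still getting high-precision answers (not spelled out in this answer, but discussed in the Higham book) is mixed-precision iterative refinement: do the expensive factorization in low precision and recover accuracy with cheap correction steps in higher precision. A rough Python sketch, assuming NumPy and SciPy and a reasonably well-conditioned matrix:

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    rng = np.random.default_rng(0)
    n = 200
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    # Factorize in single precision (the "low-precision co-processor" part).
    lu, piv = lu_factor(A.astype(np.float32))

    # Initial solve in single precision, promoted back to double.
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)

    for _ in range(5):
        r = b - A @ x                                  # residual in double precision
        d = lu_solve((lu, piv), r.astype(np.float32))  # correction via the cheap factors
        x = x + d.astype(np.float64)

    print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))

The five refinement steps are an arbitrary choice for the sketch; in practice you would iterate until the residual stops shrinking.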

As @George suggested, you should probably start with a basic text on numerical analysis, though I think the Higham book is not a basic text.

Regards

Mark

High Performance Mark