The cardinal rule of numerical computing is to avoid subtracting nearly equal numbers. Multiplication and division are always accurate: you lose at most one bit of precision in performing a multiply or divide. But if two numbers agree to n bits, you can lose up to n bits of precision in their subtraction.
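A quick Python sketch (an illustration added here, with made-up numbers) shows the effect. The values `1 + 1e-15` and `1` agree in roughly their first 15 decimal digits, so their difference retains only about one significant digit:

```python
x = 1e-15
computed = (1 + x) - 1          # subtraction of two nearly equal numbers
relative_error = abs(computed - x) / x
print(relative_error)           # roughly 0.11, i.e. about 11% relative error
```

Exact arithmetic would give `x` back, but in double precision the subtraction wipes out most of the bits the two operands shared.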
There are all kinds of tricks for avoiding such subtractions. For example, suppose you need to calculate exp(x) - 1 for small values of x. (This comes up, for instance, in interest calculations.) If x is so small that exp(x) equals 1 to all the precision of the computer, the subtraction gives exactly 0, and the relative error is 100%. But if you use the Taylor series exp(x) - 1 = x + x^2/2 + ... you get a much more accurate answer. For example, computing exp(10^-17) - 1 directly gives 0, which is completely inaccurate, whereas 10^-17, the one-term Taylor approximation, is very accurate. This is how functions like expm1 work. See the explanation of log1p and expm1 here.
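A brief sketch of the comparison above, using the standard library's `math.expm1`:

```python
import math

x = 1e-17

# Naive approach: exp(1e-17) rounds to exactly 1.0 in double precision,
# so the subtraction returns 0 and the relative error is 100%.
naive = math.exp(x) - 1.0       # 0.0

# math.expm1 computes exp(x) - 1 without the cancellation; for tiny x
# the result is essentially x itself, the one-term Taylor approximation.
accurate = math.expm1(x)        # approximately 1e-17
```

The companion function `math.log1p` plays the same role for log(1 + x), where adding 1 to a tiny x would otherwise throw away its low-order bits before the logarithm is taken.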
If you're concerned about numerical accuracy, you need to understand the anatomy of floating point numbers in order to know what is safe and what is not.
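As a starting point, here is a small Python sketch (an addition for illustration, not from the original) that unpacks the three fields of an IEEE 754 double:

```python
import struct

def float_anatomy(x):
    """Return the (sign, biased exponent, mantissa) fields of a double."""
    # Reinterpret the 64 bits of an IEEE 754 double as an unsigned integer.
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63                     # 1 sign bit
    exponent = (bits >> 52) & 0x7FF       # 11-bit exponent, biased by 1023
    mantissa = bits & ((1 << 52) - 1)     # 52-bit fraction
    return sign, exponent, mantissa

print(float_anatomy(1.0))   # (0, 1023, 0): +1.0 * 2**(1023 - 1023)
```

Seeing that a double carries only 52 fraction bits makes the cardinal rule concrete: when two numbers agree in their leading n bits, those are exactly the bits a subtraction cancels away.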