Hi, I have come across several cases where people compute the reciprocal of a number with very small absolute value. They say the result should be upper-bounded, since the reciprocal becomes very big.

(1) I wonder: what is the reason for that?

e.g. on page 18 of this paper, http://www-stat.stanford.edu/~tibs/ftp/boost.ps, in the first paragraph, the reciprocal of a probability is computed. The author says "Since this number can get large if p is small, threshold this ratio at zmax" and that an upper bound in [2, 4] works well. I wonder if it is because the precision error is huge when the reciprocal is huge, but doesn't bounding at a value in [2, 4] mean the resulting reciprocal is not huge at all?

Another example is from my previous post about inverse-distance-weighted interpolation, http://stackoverflow.com/questions/2186301/inverse-distance-weighting-interpolation: do we have to lower-bound the distance before taking its reciprocal, or only handle the case where the distance is exactly 0? (Both options are sketched below.)
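To make the two options concrete, here is a minimal sketch in C; the function names and the epsilon value are my own, not from the linked post:

    #include <math.h>
    #include <stdio.h>

    /* Option A: floor the distance at a small eps before inverting,
       so every weight stays finite. */
    double idw_weight_floored(double dist, double eps) {
        return 1.0 / fmax(dist, eps);
    }

    /* Option B: invert as-is and handle an exact hit (dist == 0)
       separately in the caller, e.g. by returning that sample's
       value directly instead of blending. */
    double idw_weight_plain(double dist) {
        return 1.0 / dist;   /* caller must ensure dist > 0 */
    }

    int main(void) {
        printf("floored: %g\n", idw_weight_floored(0.0, 1e-9));  /* 1e9 */
        printf("plain:   %g\n", idw_weight_plain(0.5));          /* 2   */
        return 0;
    }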

(2) If the number's absolute value is so large that its reciprocal is very close to 0, do we have to lower-bound the reciprocal?

(3) If we do have to upper-bound the reciprocal of a number, which way is better: lower-bounding the number, or upper-bounding its reciprocal?

Thanks and regards!

+1  A: 

For (3), if you use a hard cutoff, the two approaches are the same.
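For instance, a minimal sketch (in C; the names and the choice zmax = 4 are mine, not from the question) showing that clamping 1/x from above and flooring x at 1/zmax before inverting agree, up to the rounding of 1/zmax (exact here, since 0.25 is representable):

    #include <math.h>
    #include <stdio.h>

    /* (a) clamp the reciprocal from above */
    double capped_reciprocal_a(double x, double zmax) {
        return fmin(1.0 / x, zmax);
    }

    /* (b) floor x at 1/zmax before inverting */
    double capped_reciprocal_b(double x, double zmax) {
        return 1.0 / fmax(x, 1.0 / zmax);
    }

    int main(void) {
        double zmax = 4.0;
        for (double x = 1e-12; x < 10.0; x *= 10.0)
            printf("x=%g  a=%g  b=%g\n", x,
                   capped_reciprocal_a(x, zmax),
                   capped_reciprocal_b(x, zmax));
        return 0;
    }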

With regard to (2), that depends entirely on how you're using it. Usually this sort of thing comes up when you're computing some sort of weight which you don't want to diverge to infinity, because that will break your algorithm. Sometimes a weight of zero doesn't matter as much. Sometimes it does. It will vary with how you're using it in your algorithm.

There are two possible answers to your first question (both are correct, but in different circumstances).

The first possibility is that the weight function the algorithm should really be using isn't a reciprocal at all, but rather something more like a Gaussian: a rounded hump with long tails. In some circumstances, a thresholded reciprocal is a good-enough cheap approximation.
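As a hedged illustration (the function names and parameter values here are my own assumptions, chosen only to show the shapes):

    #include <math.h>
    #include <stdio.h>

    /* The smooth "rounded hump with long tails" weight. */
    double gaussian_weight(double d, double sigma) {
        return exp(-(d * d) / (2.0 * sigma * sigma));
    }

    /* The cheap stand-in: a reciprocal clamped at wmax so it
       cannot diverge as d -> 0.  (The peak heights differ; only
       the overall shape matters for this comparison.) */
    double capped_inverse(double d, double wmax) {
        return fmin(1.0 / fabs(d), wmax);
    }

    int main(void) {
        for (double d = 0.0; d <= 2.0; d += 0.25)
            printf("d=%.2f  gaussian=%.4f  capped=%.4f\n",
                   d, gaussian_weight(d, 0.5), capped_inverse(d, 4.0));
        return 0;
    }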

The second possibility is that the term whose reciprocal is being taken can never be exactly zero in the situation being modeled, but is likely to be zero in the algorithm due to floating-point approximation error. This is especially likely when the reciprocal term is the difference of two well-scaled values. In this situation, it makes sense to threshold at the expected approximation error to avoid over-large (or infinite) reciprocals throwing off the algorithm.
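A hedged sketch of that thresholding (the error scale and the function name are my assumptions, not a prescription from the answer):

    #include <float.h>
    #include <math.h>
    #include <stdio.h>

    /* The term being inverted is a difference a - b that can never
       be exactly zero in the model, but rounding in the values that
       produced a and b can drive it to zero.  Threshold |a - b| at
       the rounding error expected for values of that magnitude. */
    double safe_reciprocal_of_difference(double a, double b) {
        double diff = a - b;
        /* rough, assumed error scale for values near |a|, |b| */
        double err = DBL_EPSILON * fmax(fabs(a), fabs(b));
        if (fabs(diff) < err)
            diff = copysign(err, diff);   /* clamp to the error floor */
        return 1.0 / diff;
    }

    int main(void) {
        /* a and b agree to within one rounding step, so a - b
           collapses to exactly zero in double precision */
        double a = 1.0 + 1e-17, b = 1.0;
        printf("%g\n", safe_reciprocal_of_difference(a, b));  /* ~4.5e15 */
        return 0;
    }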

Stephen Canon