+2  A: 

My maths is incredibly rusty, but you may find the Newton-Raphson method gives you good results. In general it converges very quickly on an accurate solution, assuming the iteration begins "sufficiently near" the desired root.
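
For reference, here is a minimal sketch of what a Newton-Raphson iteration could look like in C++, assuming you can evaluate both the field and its derivative along the search direction (the function names, the threshold shift and the iteration caps are illustrative, not part of the method itself):

    #include <cmath>
    #include <functional>

    // Newton-Raphson iteration: find x with f(x) == threshold, starting from x0.
    // f and df evaluate the field and its derivative along the search direction.
    double newtonRaphson(const std::function<double(double)>& f,
                         const std::function<double(double)>& df,
                         double threshold, double x0,
                         int maxIterations = 8, double tolerance = 1e-9)
    {
        double x = x0;
        for (int i = 0; i < maxIterations; ++i)
        {
            double g  = f(x) - threshold;      // we want a root of f(x) - threshold
            double dg = df(x);
            if (std::fabs(dg) < 1e-12)         // nearly flat: stepping would overshoot wildly
                break;
            double step = g / dg;
            x -= step;
            if (std::fabs(step) < tolerance)   // converged
                break;
        }
        return x;
    }

Each iteration costs one function and one derivative evaluation, which is the sampling cost discussed in the comments below.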

Rich Seller
Newton's method requires sampling the function many times, something he cannot afford to do.
Ron Warholic
However, it converges quadratically if the initial guess is not too inaccurate, so one or two iterations are probably sufficient, and the OP said that would be acceptable.
Rich Seller
That all looks mighty interesting and tricky. It will take me a while to digest that Wikipedia page. Thanks for the link.
David Rutten
Newton's method requires that you know the derivative of the function you're sampling. Now I gather that, in this case, you do know the derivative... so this would work pretty well I think. It's not too hard to grok, just look at the pictures :)
Thomas
@David, you're welcome. If I remember correctly, the theory is complicated, but the actual application is quite simple.
Rich Seller
I think I can get at the derivative pretty easily. The method seems generic enough to warrant some serious investment.
David Rutten
+3  A: 

Your last picture shows only three points, which suffice to define only a quadratic polynomial, not a cubic one. For cubic interpolation, you'll need four points. A cubic polynomial can be fitted in different ways; here are two.

The most straightforward way is to simply let the (unique) polynomial pass through all four points.
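
As a sketch of this first approach, the unique cubic through four points can be evaluated directly in Lagrange form (the raw-array interface and names below are just for illustration; the four x values must be distinct):

    // Evaluate the unique cubic through four points at x, in Lagrange form.
    // xs must contain four distinct abscissae; names are illustrative.
    double cubicThroughFourPoints(const double xs[4], const double ys[4], double x)
    {
        double result = 0.0;
        for (int i = 0; i < 4; ++i)
        {
            double basis = 1.0;                 // Lagrange basis polynomial L_i(x)
            for (int j = 0; j < 4; ++j)
            {
                if (j != i)
                    basis *= (x - xs[j]) / (xs[i] - xs[j]);
            }
            result += ys[i] * basis;
        }
        return result;
    }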

Another way is to use tangents. Again, we need four points. Let the left two points define a slope. Have the polynomial pass through the second point (in general, it does not pass through the first point) and match the computed slope at that point. Do the same on the right side with the fourth and third points.
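
A sketch of this tangent-based approach, written as a cubic Hermite segment between the second and third points (the helper name and array interface are again illustrative):

    // Cubic Hermite segment between the 2nd and 3rd points, using the
    // one-sided slopes defined by the outer pairs, as described above.
    double cubicWithTangents(const double xs[4], const double ys[4], double x)
    {
        double m1 = (ys[1] - ys[0]) / (xs[1] - xs[0]);   // slope from the left pair
        double m2 = (ys[3] - ys[2]) / (xs[3] - xs[2]);   // slope from the right pair
        double h  = xs[2] - xs[1];
        double t  = (x - xs[1]) / h;                     // normalised position on the middle interval
        double t2 = t * t, t3 = t2 * t;
        return ( 2.0 * t3 - 3.0 * t2 + 1.0) * ys[1]      // passes through the second point
             + (       t3 - 2.0 * t2 + t  ) * h * m1     // matches slope m1 there
             + (-2.0 * t3 + 3.0 * t2      ) * ys[2]      // passes through the third point
             + (       t3 -       t2      ) * h * m2;    // matches slope m2 there
    }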

By the way, higher-order polynomials are probably a bad idea, because they tend to become very unstable in the presence of even a little bit of input noise.

If you give some more details about your problem domain, I might be able to give a more specific answer. For example, where do your data points come from, what kind of curve can you generally expect, and can you go back and sample more if required? I can provide equations and pseudo-code too, if needed.

Update: silly me, I left a sentence referring to two ways without typing them out. They are typed out now.

Thomas
@Thomas: I can sample at every location as often as I want. What I drew is indeed quadratic, not cubic; I'm even more confused than I thought :) The values come from a set of electrical point charges distributed in 2D space. It is essentially a summation of N hyperbolas.
David Rutten
Are you trying to draw a 2D picture with isocharge lines, or whatever those are called? How large is N?
Thomas
Yes I am. N is anywhere between 1 and ~1000.
David Rutten
Depending on the range of influence of each charge, and the amount of accuracy you need, you might be able to get away with a "splatting" technique. Build a big 2-dimensional array of the field strengths at all points (pixels), then add the charges one by one, iterating not over the entire array but only over the (circular) region where the charge's influence is larger than some epsilon.
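
As a rough sketch of that splatting idea (assuming a simple |q|/r falloff; the Charge struct, grid layout and cutoff formula are assumptions, not from the discussion):

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Charge { double x, y, q; };   // illustrative charge record (position in pixels)

    // Accumulate field strength into a width*height pixel grid, touching only
    // the circular neighbourhood where each charge still contributes > epsilon.
    void splatCharges(std::vector<double>& field, int width, int height,
                      const std::vector<Charge>& charges, double epsilon)
    {
        for (const Charge& c : charges)
        {
            double radius = std::fabs(c.q) / epsilon;   // cutoff for the assumed |q|/r falloff
            int x0 = std::max(0,          (int)std::floor(c.x - radius));
            int x1 = std::min(width  - 1, (int)std::ceil (c.x + radius));
            int y0 = std::max(0,          (int)std::floor(c.y - radius));
            int y1 = std::min(height - 1, (int)std::ceil (c.y + radius));

            for (int y = y0; y <= y1; ++y)
                for (int x = x0; x <= x1; ++x)
                {
                    double dx = x - c.x, dy = y - c.y;
                    double r  = std::sqrt(dx * dx + dy * dy);
                    if (r < radius)
                        field[y * width + x] += c.q / std::max(r, 1e-6);
                }
        }
    }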
Thomas
I'm already box-marching my way across the space, so I think I have the number of evaluations down to a minimum. But it's still too slow for a real-time interface (10fps on a large set).
David Rutten
Sounds like a good job for a GPU too.
Thomas
+2  A: 

The magic word is "root solver"; a mathematical root is a value where the function equals zero. By adding/subtracting the threshold so that the crossing you want becomes a zero, you can use root finders.

If you have a clue what function you are interpolating, you can set up a very fast root finder. If you don't have a clue, as your post suggests ("undefined"), the best method is Brent's method, a combination of the secant method and bisection, or the secant method alone. Wikipedia has entries for these methods.
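
As an illustration, a bare secant iteration on the threshold-shifted function could look like this (the starting samples, iteration cap and tolerance are placeholders; a robust version would bracket the root and fall back to bisection, which is essentially what Brent's method adds):

    #include <cmath>
    #include <functional>

    // Secant method: find x with f(x) == threshold from two starting samples.
    // No derivative needed; each step costs a single extra function evaluation.
    double secant(const std::function<double(double)>& f, double threshold,
                  double a, double b, int maxIterations = 20, double tolerance = 1e-9)
    {
        double fa = f(a) - threshold;            // shift so the crossing becomes a root
        double fb = f(b) - threshold;
        for (int i = 0; i < maxIterations; ++i)
        {
            if (std::fabs(fb - fa) < 1e-15)      // secant is horizontal: give up
                break;
            double c  = b - fb * (b - a) / (fb - fa);   // intersect the secant with zero
            double fc = f(c) - threshold;
            a = b;  fa = fb;
            b = c;  fb = fc;
            if (std::fabs(fb) < tolerance)
                break;
        }
        return b;
    }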

Contrary to your opinion, it is not a good idea to use more complicated functions. The main performance hurdle is function evaluations, which increase when you use more points, compute derivatives, or use more complex interpolation functions.

The Newton-Raphson method is VERY BAD if you are near a maximum/minimum/inflection point, because the near-zero derivative sends you far away from the point, and it has some other problems as well. Do not use it until you know what you are doing.

Thorsten S.
@Thorsten, thank you. I'll read up on root solvers. As I recall, we're already using Brent logic elsewhere in our applications, so I should be able to poke one of the C++ heads in the office for information.
David Rutten