Does anyone know what an inverse mapping function is in image processing? The paper I am reading describes image processing functions that "take input coordinates as arguments instead of pixel coordinates, allowing the result to be distorted by an arbitrary inverse mapping function."
An inverse mapping function maps each output pixel back to the region of the input image it comes from. In other words, given a rectangular output pixel, the function tells you which (possibly distorted) area of the input to sample.
There are two ways to warp an image [15]. The first, called forward mapping, scans through the source image pixel by pixel, and copies them to the appropriate place in the destination image. The second, reverse mapping, goes through the destination image pixel by pixel, and samples the correct pixel from the source image. The most important feature of inverse mapping is that every pixel in the destination image gets set to something appropriate. In the forward mapping case, some pixels in the destination might not get painted, and would have to be interpolated. We calculate the image deformation as a reverse mapping. The problem can be stated "Which pixel coordinate in the source image do we sample for each pixel in the destination image?"
That's an excerpt from this paper:
http://www.cs.princeton.edu/courses/archive/fall00/cs426/papers/beier92.pdf (pdf)
http://www.hammerhead.com/thad/morph.html (html)
The paper is about morphing, but the discussion of how to do the morphing should clear up the "forward / reverse mapping" issue.
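To make that concrete, here is a minimal sketch of reverse (inverse) mapping in Python/NumPy. The particular warp (a rotation about the image center) and the nearest-neighbor sampling are my own choices for illustration; the function name is hypothetical and not from the paper.

    import numpy as np

    def inverse_map_rotate(src, angle_rad):
        """Warp src by scanning the *destination* image and, for each
        destination pixel, sampling the source through the inverse
        transform (here: a rotation about the image center)."""
        h, w = src.shape[:2]
        dst = np.zeros_like(src)
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        cos_a, sin_a = np.cos(angle_rad), np.sin(angle_rad)
        for y in range(h):
            for x in range(w):
                # Inverse mapping: rotate the destination coordinate
                # by -angle to find where it came from in the source.
                sx = cos_a * (x - cx) + sin_a * (y - cy) + cx
                sy = -sin_a * (x - cx) + cos_a * (y - cy) + cy
                sxi, syi = int(round(sx)), int(round(sy))
                # Only sample coordinates that land inside the source.
                if 0 <= sxi < w and 0 <= syi < h:
                    dst[y, x] = src[syi, sxi]
        return dst

    # Example: rotate a small synthetic grayscale image by 30 degrees.
    img = np.arange(100, dtype=np.uint8).reshape(10, 10)
    out = inverse_map_rotate(img, np.pi / 6)

Because the scan runs over the destination, every output pixel is written exactly once, which is the property the excerpt highlights. A forward-mapping version would instead scatter source pixels into the destination and could leave holes that need interpolating afterwards.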