I am looking for papers/algorithms for merging projected textures onto geometry. To be more specific: given a set of fully calibrated cameras/photographs and a geometry, how can we define a metric for choosing which photograph should be used to texture a given patch of the geometry?

I can think of a few attributes one may seek to optimize: minimizing the angle between the surface normal and the camera's viewing direction, minimizing the distance of the camera from the surface, and maximizing some measure of sharpness.

The question is: how do these criteria get combined, and are there well-established existing solutions?
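One common way to combine such criteria is a weighted score per (patch, camera) pair. The following is only a hedged sketch: the weighting-by-product scheme and the `w_*` exponents are hypothetical tuning choices, not an established standard, and `sharpness` is assumed to come from some separate image-quality estimate.

```python
import numpy as np

def patch_score(patch_center, patch_normal, cam_pos, sharpness,
                w_angle=1.0, w_dist=1.0, w_sharp=1.0):
    """Score one camera for one surface patch; higher is better.

    The weighted-product form and the w_* exponents are hypothetical
    tuning parameters, not a published formula.
    """
    view = cam_pos - patch_center
    dist = np.linalg.norm(view)
    view = view / dist
    # Cosine of the angle between the surface normal and the view direction;
    # clamped to 0 so cameras behind the surface score zero.
    cos_angle = max(float(np.dot(patch_normal, view)), 0.0)
    # Favor head-on, nearby, sharp views.
    return (cos_angle ** w_angle) * ((1.0 / dist) ** w_dist) * (sharpness ** w_sharp)
```

For example, a camera looking straight down at a patch from close range should outscore an oblique, distant one, all else being equal.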

A:

I'm certain there are theoretical approaches that may eventually yield results. But I'd like to recommend a more direct method:

If you have a GPU available, and have some DirectX or OpenGL shader experience (or GPU programming experience), it would be relatively straightforward to 'splat' each texture onto the model and check the result.

Use your eyes initially, then build a simple metric that makes a quick judgment correlating well enough with what your eye prefers. (For example, since sharpness is likely to be a desired feature of a good texture, take a 2D FFT of the output and of the mapped portion of the input: the mapping with the highest frequency content and the least loss may be your best selection.)
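As a minimal sketch of the FFT idea, the fraction of spectral energy above some cutoff frequency can serve as a crude sharpness proxy. The `cutoff` value here is an arbitrary assumption you would tune by eye:

```python
import numpy as np

def high_freq_energy(img, cutoff=0.25):
    """Fraction of the image's spectral power above `cutoff`
    (in normalized frequency, 0..0.5). Blurrier images score lower.
    The cutoff is a hypothetical tuning parameter.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(spectrum) ** 2
    h, w = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    # Radial frequency of each spectral bin.
    r = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    return power[r > cutoff].sum() / power.sum()
```

Comparing this score between the rendered output and the source photo's mapped region gives one number per candidate mapping.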

Sometimes the easiest way ('try them all and test', AKA 'brute-force') can be the best, especially if you have some GPU horsepower available. That is, don't try to develop a theoretical predictor of success (which can be a royal pain to develop and debug), but rather generate all possible results and compare them to see which is the best.
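The brute-force loop itself is trivial; all the work lives in the scoring callback. A sketch, where `score(patch, cam)` is a hypothetical hook that splats camera `cam`'s photo onto patch `patch`, renders the result, and returns a quality number (higher is better):

```python
import numpy as np

def choose_textures(score, n_patches, n_cameras):
    """Brute force: score every (patch, camera) pairing and keep the
    best camera per patch. `score` is a user-supplied callback; in a
    real pipeline it would splat and render on the GPU.
    """
    table = np.array([[score(p, c) for c in range(n_cameras)]
                      for p in range(n_patches)])
    return table.argmax(axis=1)  # winning camera index for each patch
```

For P patches and C cameras this is P x C renders, which is exactly the kind of embarrassingly parallel workload a GPU eats for breakfast.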

After all, even if you do develop an a priori method based on predictions, you'll still have to apply the projection and check the result to ensure it works. And since you need to code that test anyway, why code anything else?

BobC