views: 137
answers: 2

I have a computer vision setup with two cameras. One of these cameras is a time-of-flight camera that gives me the depth of the scene at every pixel. The other is a standard camera giving me a colour image of the scene.

We would like to use the depth information to remove some areas from the colour image. We plan on object, person and hand tracking in the colour image and want to remove far-away background pixels with the help of the time-of-flight camera. It is not yet certain whether the cameras can be aligned in a parallel setup.

We could use OpenCV or MATLAB for the calculations.
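
To make the goal concrete, this is roughly the masking step we have in mind - a minimal sketch in Python with OpenCV. It assumes the depth image has already been registered to the colour image, so the same pixel indices refer to the same scene point; the file names and the 1.5 m cut-off are just placeholders:

```python
import cv2
import numpy as np

# Placeholder inputs; depth is assumed to be registered to the colour image
# already (same resolution, same viewpoint) and stored in millimetres.
color = cv2.imread("color.png")                        # H x W x 3, uint8
depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED)  # H x W, uint16 (mm)

MAX_DEPTH_MM = 1500  # arbitrary cut-off: drop everything farther than 1.5 m

# Keep only pixels that have a valid reading and are closer than the cut-off.
mask = (depth > 0) & (depth < MAX_DEPTH_MM)

# Zero out the background in the colour image.
foreground = color.copy()
foreground[~mask] = 0

cv2.imwrite("foreground.png", foreground)
```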

I have read a lot about rectification, epipolar geometry, etc., but I still have trouble seeing the steps I have to take to calculate the correspondence for every pixel.

What approach would you use, and which functions can be used? Into which steps would you divide the problem? Is there a tutorial or sample code available somewhere?

Update: We plan on doing an automatic calibration using known markers placed in the scene.
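
Here is a rough outline of how such a marker-based calibration could look in OpenCV (Python). It assumes a planar checkerboard target visible to both cameras, and that the time-of-flight camera also delivers an amplitude/intensity image in which the corners can be detected; the board size, square size, number of views and file names are all placeholders:

```python
import cv2
import numpy as np

BOARD = (9, 6)    # inner corners of the (hypothetical) checkerboard
SQUARE = 0.025    # square size in metres
N_VIEWS = 15      # placeholder number of calibration image pairs

# 3D template of the board corners in the board's own coordinate frame.
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_pts, pts_tof, pts_color = [], [], []
for i in range(N_VIEWS):
    img_c = cv2.imread(f"color_{i}.png", cv2.IMREAD_GRAYSCALE)
    img_t = cv2.imread(f"tof_amplitude_{i}.png", cv2.IMREAD_GRAYSCALE)
    ok_c, corners_c = cv2.findChessboardCorners(img_c, BOARD)
    ok_t, corners_t = cv2.findChessboardCorners(img_t, BOARD)
    if ok_c and ok_t:
        obj_pts.append(objp)
        pts_color.append(corners_c)
        pts_tof.append(corners_t)

size_c = img_c.shape[::-1]
size_t = img_t.shape[::-1]

# Intrinsics of each camera separately...
_, K_c, d_c, _, _ = cv2.calibrateCamera(obj_pts, pts_color, size_c, None, None)
_, K_t, d_t, _, _ = cv2.calibrateCamera(obj_pts, pts_tof, size_t, None, None)

# ...then the rigid transform between them: R, T map points from the
# ToF camera frame (first point set) into the colour camera frame (second).
ret, K_t, d_t, K_c, d_c, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, pts_tof, pts_color, K_t, d_t, K_c, d_c, size_t,
    flags=cv2.CALIB_FIX_INTRINSIC)
```

With CALIB_FIX_INTRINSIC, stereoCalibrate keeps the per-camera intrinsics and only estimates the rotation R and translation T from the ToF camera to the colour camera, which is what we would need to map depth pixels into the colour image.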

A: 

Maybe this article can help you:

http://www.cse.iitb.ac.in/~sharat/icvgip.org/icvgip00/V-53.pdf

Tony
+3  A: 

If you want robust correspondences, you should consider SIFT. There are several implementations in MATLAB - I use the Vedaldi-Fulkerson VLFeat library.

If you really need fast performance (and I think you don't), you should think about using OpenCV's SURF detector.
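
For reference, OpenCV also ships a SIFT implementation (in the main module from version 4.4 on, in xfeatures2d before that). A minimal matching sketch in Python - the file names are placeholders, and the ToF amplitude image is treated as an ordinary grey image:

```python
import cv2

# Placeholder file names.
img_a = cv2.imread("color.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("tof_amplitude.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute descriptors in both images.
sift = cv2.SIFT_create()
kp_a, des_a = sift.detectAndCompute(img_a, None)
kp_b, des_b = sift.detectAndCompute(img_b, None)

# Brute-force matching with Lowe's ratio test to keep distinctive matches.
bf = cv2.BFMatcher()
matches = bf.knnMatch(des_a, des_b, k=2)
good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

# Each surviving match links a pixel in image A to a pixel in image B.
pairs = [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in good]
print(f"{len(pairs)} putative correspondences")
```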

If you have any other questions, do ask. This other answer of mine might be useful.

PS: By correspondences, I'm assuming you want to find the coordinates of the projections of the same 3D point in both your images - i.e. the coordinates (i,j) of a pixel u_A in Image A and u_B in Image B which are projections of the same point in 3D.
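
Since your ToF camera already gives you the depth at every pixel, there is a more direct way to get such per-pixel correspondences than image matching: back-project each ToF pixel to a 3D point and reproject it into the colour image using the calibration results. A minimal sketch, assuming undistorted images and placeholder names K_tof, K_color for the intrinsic matrices and (R, T) for the transform from the ToF frame to the colour frame:

```python
import numpy as np

def tof_pixel_to_color_pixel(u, v, depth_m, K_tof, K_color, R, T):
    """Map a ToF pixel (u, v) with depth `depth_m` (metres) to colour-image
    coordinates, assuming undistorted images and known calibration."""
    # Back-project to a 3D point in the ToF camera frame (pinhole model).
    x = (u - K_tof[0, 2]) / K_tof[0, 0] * depth_m
    y = (v - K_tof[1, 2]) / K_tof[1, 1] * depth_m
    p_tof = np.array([x, y, depth_m])

    # Transform into the colour camera frame and project with its intrinsics.
    p_col = R @ p_tof + T.ravel()
    u_c = K_color[0, 0] * p_col[0] / p_col[2] + K_color[0, 2]
    v_c = K_color[1, 1] * p_col[1] / p_col[2] + K_color[1, 2]
    return u_c, v_c
```

Doing this for every ToF pixel gives you depth values in colour-image coordinates, which you can then threshold to mask out the far background.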

Jacob