
In MATLAB I have calculated the fundamental matrix (of two images) using the normalized eight-point algorithm. From that I need to triangulate the corresponding image points in 3D space. From what I understand, to do this I would need the rotation and translation of the cameras. The easiest way, of course, would be to calibrate the cameras first and then take the images, but this is too constraining for my application, as it would require an extra step.

So that leaves me with auto (self) camera calibration. I have seen mention of bundle adjustment; however, in An Invitation to 3D Vision it seems to require an initial translation and rotation, which makes me think that a calibrated camera is needed or my understanding is falling short.

So my question is: how can I automatically extract the rotation/translation so that I can reproject/triangulate the image points into 3D space? Any MATLAB code or pseudocode would be fantastic.

A: 

If your 3D-space can be chosen arbitrarily you could set your first camera matrix as

P = [I | 0]

No translation, no rotation. That leaves you with a coordinate system defined by camera 1. Then it should not be too hard to calibrate the second camera.
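For a purely projective reconstruction you don't even need to calibrate the second camera: with P1 = [I | 0], a valid second camera can be read off the fundamental matrix as P2 = [[e']×F | e'], where e' is the epipole in image 2 (the left null vector of F). Here is a NumPy sketch of that idea (Python standing in for MATLAB as runnable pseudocode; the function names are my own):

```python
import numpy as np

def skew(v):
    """3x3 skew-symmetric matrix such that skew(v) @ x == np.cross(v, x)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0.0]])

def cameras_from_F(F):
    """Canonical projective camera pair from a fundamental matrix:
    P1 = [I | 0],  P2 = [[e']_x F | e'],
    where e' is the left null vector of F (the epipole in image 2).
    See Hartley & Zisserman, Sec. 9.5."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    U, S, Vt = np.linalg.svd(F)
    e2 = U[:, -1]                      # left singular vector for sigma = 0
    P2 = np.hstack([skew(e2) @ F, e2.reshape(3, 1)])
    return P1, P2
```

The pair (P1, P2) reproduces F up to scale, which is all a projective reconstruction requires; the ambiguity is only removed once calibration information is added.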

kigurai
Would you use an algorithm like Bundle Adjustment from there?
srand
If possible, I think I would try to use some sort of calibration object to find the rotation of the second camera given the first ideal camera. Otherwise, I guess bundle adjustment would be a good place to start.
kigurai
To perform the calibration, some sort of calibration object, like a checkerboard, needs to be used. It can still be done with a non-calibration object, but the results aren't as good unless one can extract a high number of *accurate* correspondences. From further reading, bundle adjustment is an algorithm that improves calibration results once they exist.
srand
P1 = K * [eye(3), [0 0 0]']; P2 = K * [R, t]; — here P1 and P2 are the camera matrices and K is the calibration matrix (MATLAB notation).
srand
+2  A: 

Peter Kovesi's MATLAB code should be very helpful to you, I think:

http://www.csse.uwa.edu.au/~pk/research/matlabfns/

Peter has posted a number of fundamental matrix solutions. The original algorithms are described in Hartley and Zisserman's book:

http://www.amazon.com/exec/obidos/tg/detail/-/0521540518/qid=1126195435/sr=8-1/ref=pd_bbs_1/103-8055115-0657421?v=glance&s=books&n=507846

Also, while you are at it, don't forget to see the fundamental matrix song:

http://danielwedge.com/fmatrix/

One fine composition, in my honest opinion!

Egon
+2  A: 

You can use the fundamental matrix to recover the camera matrices and triangulate the 3D points from their images. However, you must be aware that the reconstruction you will obtain will be a projective reconstruction, not a Euclidean one. This is useful if your goal is to measure projective invariants of the original scene, such as the cross-ratio or line intersections, but it won't be enough to measure angles and distances (you will have to calibrate the cameras for that).

If you have access to Hartley and Zisserman's textbook, you can check section 9.5.3 where you will find what you need to go from the fundamental matrix to a pair of camera matrices that will allow you to compute a projective reconstruction (I believe the same content appears in section 6.4 of Yi Ma's book). Since the source code for the book's algorithms is available online, you may want to check the functions vgg_P_from_F, vgg_X_from_xP_lin, and vgg_X_from_xP_nonlin.
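To show what the linear step in those vgg functions amounts to, here is a NumPy sketch of DLT triangulation in the style of vgg_X_from_xP_lin (Python standing in for MATLAB; the function name is mine):

```python
import numpy as np

def triangulate_linear(P1, P2, x1, x2):
    """DLT triangulation (Hartley & Zisserman, Sec. 12.2): each image point
    x = (u, v) and its camera P contribute the two rows u*P[2] - P[0] and
    v*P[2] - P[1]; the homogeneous 3D point is the null vector of the
    stacked 4x4 system, taken from the SVD."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize (finite points only)
```

With exact projections the linear solution is already exact; with noisy correspondences it is only a starting point, which is why the book follows it with nonlinear refinement (vgg_X_from_xP_nonlin).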

jmbr
I ended up using Algorithm 12.1, "The optimal triangulation method", from the Hartley/Zisserman book, and calibrated the cameras with the Camera Calibration Toolbox for Matlab.
srand