tags:
views: 20
answers: 1

Hi,

Using stereovision, I am producing depthmaps representing the 3D environment as viewed from a camera. There is one depthmap per "keyframe", associated with a camera position. The goal is to translate those 2D depthmaps into 3D space (and later merge them to reconstruct the whole environment).

What would be the most efficient way to translate those depthmaps into 3D? Each depthmap is 752×480 pixels, so the number of triangles can grow quite fast. I would like an automatic system to manage the level of detail of the objects.
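For the back-projection step, a minimal sketch of turning a depth map into camera-space 3D points, and then placing that keyframe in world space, could look like the following. This assumes a pinhole camera model; the intrinsics `fx`, `fy`, `cx`, `cy` and the pose `R`, `t` below are placeholder values, not real calibration data.

```python
import numpy as np

def depthmap_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, metric depth along the optical
    axis) into an (H*W) x 3 array of camera-space points using the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Placeholder intrinsics for a 752x480 sensor; use your calibration values.
fx = fy = 400.0
cx, cy = 752 / 2.0, 480 / 2.0
depth = np.full((480, 752), 2.0)      # synthetic depthmap: flat wall 2 m away
pts = depthmap_to_points(depth, fx, fy, cx, cy)

# Placing the keyframe in world space is then just a rigid transform,
# which gives exactly the free translation/rotation asked about.
R = np.eye(3)                          # keyframe rotation (placeholder)
t = np.array([0.0, 0.0, 1.0])          # keyframe translation (placeholder)
world = pts @ R.T + t
```

The resulting point array (or a mesh built over it) can then be fed to something like Ogre's ManualObject, with the keyframe pose applied on the scene node rather than baked into the vertices.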

My team uses Ogre3D, so it would be great to find a solution with it. What I am looking for is very similar to what Terrain does, except that I want to be able to put the resulting objects wherever I want (translation, rotation), and I think Terrain can't do that.

I am quite new to Ogre3D, so please forgive me if there is a straightforward solution I should know about. If a tool other than Ogre3D is more appropriate for my problem, I'd be happy to learn about it!

+1  A: 

It's not clear what you mean by "merge depthmap with environment"?

Anyway, in your case, you seem stuck with making them 3D using terrain-heightmap techniques. Since the depthmap is screen-aligned, you could use a simple screen-space raycasting technique: write a compositor in Ogre3D that takes that depth map and transforms the pixels you want.

Translation and rotation from a depth map may be limited to x/y on screen: as with a terrain heightmap (you cannot have caves using heightmaps), you are missing a dimension.

Not directly related, but it might help: in pure screen space there is a technique called "position reconstruction" that helps recover object world-space positions, but only if you have plenty of information about the camera used to generate the depthmap, for example: http://www.gamerendering.com/2009/12/07/position-reconstruction/
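As a rough CPU-side illustration of the linked position-reconstruction idea (the real thing runs per-pixel in a shader; the projection parameters and test point below are arbitrary examples): a depth-buffer value plus the inverse view-projection matrix is enough to unproject back to world space.

```python
import numpy as np

def reconstruct_position(ndc_xy, depth_ndc, inv_view_proj):
    """Recover a world-space position from normalized device coordinates
    and a depth value: unproject the clip-space point through the inverse
    view-projection matrix, then divide by w."""
    clip = np.array([ndc_xy[0], ndc_xy[1], depth_ndc, 1.0])
    world = inv_view_proj @ clip
    return world[:3] / world[3]

def perspective(fov_y, aspect, near, far):
    """OpenGL-style perspective projection matrix."""
    f = 1.0 / np.tan(fov_y / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2.0 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

# Roundtrip check: project a known point, then reconstruct it.
# (View matrix is identity here, so world space == camera space.)
proj = perspective(np.radians(60), 752 / 480, 0.1, 100.0)
p_world = np.array([0.3, -0.2, -5.0, 1.0])    # in front of the camera (-z)
clip = proj @ p_world
ndc = clip[:3] / clip[3]
recovered = reconstruct_position(ndc[:2], ndc[2], np.linalg.inv(proj))
```

The catch, as noted above, is that you need the exact projection the depthmap was generated with; for a real stereo camera that means the calibrated intrinsics.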

Tuan Kuranes
Thanks a lot for your answer. The links you provided are very interesting. However, there is one thing I didn't make clear: the depthmaps are computed from a real-world camera, but the reconstructed environment should be explorable in all directions. The result of the operation should be a deformed grid hanging in space, which could be rotated around, for example. I am looking for a 3D representation, not only screen rendering, so I don't think I can use shaders for that...
Jim
Then you're stuck with heightmap-like reconstruction. (You may want to try a point cloud, but it is often even slower; check http://www.visual-experiments.com/.) 3D object reconstruction from depth maps is a topic in image analysis, so you might want to look there (the OpenCV library is used by projects like http://sourceforge.net/projects/reconststereo/).
Tuan Kuranes
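A heightmap-style triangulation of a depth map, with a subsampling step as a crude level-of-detail knob, could be sketched like this (the `step` values are arbitrary examples, not a recommendation):

```python
import numpy as np

def grid_mesh_indices(h, w, step=4):
    """Triangulate an h x w depth map as a regular grid (heightmap-style),
    keeping only every `step`-th row and column. step=1 is full resolution;
    larger steps cut the triangle count roughly by step**2."""
    rows = np.arange(0, h, step)
    cols = np.arange(0, w, step)
    nr, nc = len(rows), len(cols)
    tris = []
    for i in range(nr - 1):
        for j in range(nc - 1):
            a = i * nc + j        # top-left vertex of this grid cell
            b = a + 1             # top-right
            c = a + nc            # bottom-left
            d = c + 1             # bottom-right
            tris.append((a, c, b))   # two triangles per cell
            tris.append((b, c, d))
    return np.array(tris), rows, cols

# At step=1 a 752x480 map yields 2 * 479 * 751 = 719,458 triangles,
# which is why some decimation is needed; step=8 keeps about 11k.
tris, rows, cols = grid_mesh_indices(480, 752, step=8)
```

Vertex positions for the retained `rows`/`cols` samples would come from back-projecting the corresponding depth values, and a proper LOD system would pick `step` per object from viewing distance, or use Ogre's built-in mesh LOD generation instead of this uniform decimation.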
Thank you! Thanks to the visual-experiments.com website I have found http://grail.cs.washington.edu/software/cmvs/ which handles a few of the problems I am facing. I have some reading to do now!
Jim