
I'd like to create a simple 2D map of a room by taking pictures (including the ceiling) in all directions (360°, e.g. movie frames), recognizing the walls by edge detection, removing other unwanted objects, stitching the images together at the right positions (cf. walls, panorama), and finally creating an approximate 2D map (viewed from above). Determining the scale would be another parameter that might be useful.

I have some ideas of my own at the moment, e.g. using the Sobel operator, but it would be interesting to know whether somebody out there knows of a project or some software (GPL or freeware preferred) that already does this, as I'm still looking for examples that might help me.
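
Roughly, what I have in mind for the edge-detection step is something along these lines (OpenCV; the file names and the threshold value are just placeholders, not a finished tool):

    # Sobel-based edge map of a single room frame (sketch only).
    import cv2
    import numpy as np

    img = cv2.imread("room_frame.jpg", cv2.IMREAD_GRAYSCALE)
    img = cv2.GaussianBlur(img, (5, 5), 0)          # reduce noise before differentiating

    # horizontal and vertical gradients
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)

    # gradient magnitude, scaled to 0..255
    mag = np.sqrt(gx ** 2 + gy ** 2)
    mag = np.uint8(255 * mag / mag.max())

    # keep only strong edges (walls, door frames, ...); the threshold is ad hoc
    _, edges = cv2.threshold(mag, 60, 255, cv2.THRESH_BINARY)
    cv2.imwrite("edges.png", edges)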

Thanks.

+1  A: 

There are two ways I can see this (kinda) working:

1) Figure out the distance between the wall and the point the photo was taken from. The simple, consistent way to do this would be to use a laser or similar external measuring device. If you were to rely on the device alone, you would need to take into account the height of the person taking the photo, the angle of the camera, and of course the lens characteristics of the device itself (i.e. focal length distortion).

2) Create a real-world control object that the device can use as a "baseline". Make an obnoxiously bright red cube that is known to be 10x10x10cm; place it in the corner of a room; take a photo / video of the wall from corner to corner; use image detection to find the wall boundaries and object recognition for the cube; figuring out the walls' dimensions is simple maths from then on (a rough calculation is sketched below). Depending on the lens, this method may be less sensitive to focal length distortion and the like, but for an accurate reading I would've thought you'd still need to take the lens' characteristics into account.
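
To illustrate approach 2 with a back-of-the-envelope calculation: once the cube and the wall corners have been detected (the pixel values below are made up), the known cube size gives a pixels-per-centimetre ratio that converts the wall's pixel width into real units:

    # Rough scale calculation for the reference-cube approach.
    # All pixel measurements are made-up placeholders; detecting the cube and
    # the wall corners is assumed to have been done already.
    CUBE_SIZE_CM = 10.0

    cube_width_px = 84      # apparent width of the 10 cm cube in the image
    wall_width_px = 2310    # distance between the detected wall corners

    px_per_cm = cube_width_px / CUBE_SIZE_CM
    wall_width_cm = wall_width_px / px_per_cm
    print(f"Estimated wall width: {wall_width_cm:.0f} cm")   # ~275 cm here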

Good luck with this, it's no small undertaking. Image detection for the walls alone will be a challenge. :)

MatW
A: 

I had a "some what" similar project for an internship and in the end the internship was never finished because we all ran out of time (3 months, and no money for anything).

But anyway, in the end what we were going to do was take a large sheet of grid paper and point the camera at it, so we could map out the lens' characteristics and use that to correct the pictures that came out of the camera.
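
That grid-paper idea is essentially what OpenCV's standard checkerboard calibration does; a rough sketch (the 9x6 inner-corner pattern and the file names are assumptions):

    # Sketch of lens calibration with a printed checkerboard, which is roughly
    # the grid-paper idea. The 9x6 pattern and the "calib_*.jpg" /
    # "room_wall.jpg" file names are assumptions.
    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)                    # inner corners per row / column
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for path in glob.glob("calib_*.jpg"):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    _, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points,
                                           gray.shape[::-1], None, None)

    # undistort a room photo with the recovered camera matrix / distortion coeffs
    room = cv2.imread("room_wall.jpg")
    cv2.imwrite("room_wall_undistorted.jpg", cv2.undistort(room, K, dist))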

Secondly, we were going to put a well-known/understood object next to or on each wall and use it to interpolate the size (length/height) of each wall.

Now, this method would require more time to set up, because you would need to place the object on each wall and then take a picture of each wall with a bit of overlap so that later you could stitch them together. The user would then tell the program where each corner/edge was located, so the program would know the general shape of the room, and it would use the object placed on the wall to interpolate the length/height of each wall.

Now keep in mind this would be a relatively manual process, but with well-designed software it could be relatively quick. By this I mean the slowest manual steps would be placing the object on each wall and then taking the "panorama" of the room. Once the pictures have been uploaded to the computer, it could take care of pre-processing, stitching them together, and then pop up a picture with some "line tools" you could use to tell the program where the corners etc. were, and then it would do the calculation/adjustment/sizing/etc...
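
For the stitching part, OpenCV's high-level Stitcher class is enough to prototype with, assuming the shots overlap reasonably (the file names below are placeholders):

    # Prototype of the stitching step with OpenCV's high-level Stitcher.
    # The "wall_*.jpg" file pattern is a placeholder; the shots need to overlap.
    import glob
    import cv2

    images = [cv2.imread(p) for p in sorted(glob.glob("wall_*.jpg"))]

    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(images)

    if status == cv2.Stitcher_OK:
        cv2.imwrite("room_panorama.jpg", panorama)
    else:
        print("Stitching failed, status code:", status)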

Pharaun
A: 

It sounds like an interesting problem. Here is a link to a Microsoft Research paper that might be relevant.

Photo Tourism: Exploring Photo Collections in 3D

In particular, look at the section on Image Based Modelling (3.1).

OlduwanSteve