views: 219
answers: 2

Hi,

I am trying to stitch two images together.
To do so, I extracted SIFT features and found matches between the two images using this C implementation:

http://web.engr.oregonstate.edu/~hess/index.html

After that, I estimated the homography matrix from the matched points using this library:

http://www.ics.forth.gr/~lourakis/homest/

But when I use this homography matrix in the "cvWarpPerspective" function, parts of the image end up outside the viewable area (negative coordinates).

To solve this, I tried to compute the bounding box first by mapping the four corners of the image through the homography matrix, and then to translate the initial image before warping it. But this changed the warping result.
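
Concretely, the corner-mapping step is essentially this (simplified; H is the 3x3 CvMat homography and w, h are the source image dimensions):

    CvPoint2D32f src_c[4] = { {0, 0}, {w - 1, 0}, {w - 1, h - 1}, {0, h - 1} };
    CvPoint2D32f dst_c[4];
    CvMat src_pts = cvMat(1, 4, CV_32FC2, src_c);
    CvMat dst_pts = cvMat(1, 4, CV_32FC2, dst_c);
    cvPerspectiveTransform(&src_pts, &dst_pts, H);   /* warped corner positions */
    /* min/max over dst_c[i].x and dst_c[i].y give the bounding box; a negative
       minimum is exactly the part that falls outside the viewable area */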

Is there any way to warp an image while keeping it entirely within the viewable area?

I would appreciate any help. Thanks in advance...

A: 

I think you're on the right track. You need to account for the translation you introduced when you moved the image.
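
Roughly, the idea is to fold that translation into the homography itself and warp into a canvas big enough for the whole bounding box. A minimal sketch with the old C API (the helper name and the min_x/min_y/max_x/max_y values, which you get from mapping the corners through H as you already do, are just for illustration):

    #include <opencv/cv.h>
    #include <math.h>

    /* Shift the homography by (-min_x, -min_y) so the warped image lands at
       non-negative coordinates, then warp into a canvas large enough to hold
       the whole bounding box. */
    IplImage* warp_to_canvas(const IplImage* src, const CvMat* H,
                             double min_x, double min_y,
                             double max_x, double max_y)
    {
        CvMat* T  = cvCreateMat(3, 3, CV_64FC1);
        CvMat* TH = cvCreateMat(3, 3, CV_64FC1);
        cvSetIdentity(T, cvRealScalar(1));
        cvmSet(T, 0, 2, -min_x);            /* cancel the negative part of */
        cvmSet(T, 1, 2, -min_y);            /* the warped bounding box     */
        cvMatMul(T, H, TH);                 /* warped point = T * (H * p)  */

        CvSize canvas = cvSize((int)ceil(max_x - min_x), (int)ceil(max_y - min_y));
        IplImage* dst = cvCreateImage(canvas, src->depth, src->nChannels);
        cvWarpPerspective(src, dst, TH,
                          CV_INTER_LINEAR | CV_WARP_FILL_OUTLIERS, cvScalarAll(0));

        cvReleaseMat(&T);
        cvReleaseMat(&TH);
        return dst;
    }

The same (-min_x, -min_y) offset then has to be applied when you place the other image on the canvas, otherwise the two images will no longer line up.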

Another way is to pad the source image around the edges. Depending on how much the perspective changes, you may need to pad quite a bit. Also, the padding has to be done before feature matching and before estimating the warping matrix. Obviously, you will pay in computation for working with a bigger image.
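
A rough sketch of the padding step with the C API, assuming `src` is the IplImage you already loaded (the pad amount is just a placeholder; you would tune it to how far the warp pushes things out):

    int pad = 200;                          /* illustrative amount only */
    IplImage* padded = cvCreateImage(
        cvSize(src->width + 2 * pad, src->height + 2 * pad),
        src->depth, src->nChannels);
    cvCopyMakeBorder(src, padded, cvPoint(pad, pad),
                     IPL_BORDER_CONSTANT, cvScalarAll(0));
    /* run SIFT extraction, matching and homography estimation on `padded`,
       not on `src` */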

sjchoi
A: 

As an exercise, I tried the same thing a while ago and ran into the same problem. I solved it by calculating the bounding box first, as you describe, and then I wrote my own warping function. Warping is very simple, but you have to do the bilinear interpolation (lerp) yourself. Since some pixel-wise weighting is needed anyway for good results (multiple pixels from different images can land on the same output pixel and have to be blended), I did not feel bad about abandoning cvWarpPerspective.
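
Purely as an illustration, that kind of hand-rolled inverse warp with bilinear interpolation looks roughly like this (no blending/weighting here; assumes 8-bit images, that H maps source pixels to canvas pixels, i.e. it already contains any bounding-box translation, and that dst has been created at the canvas size):

    #include <opencv/cv.h>
    #include <math.h>

    void warp_bilinear(const IplImage* src, IplImage* dst, const CvMat* H)
    {
        CvMat* Hinv = cvCreateMat(3, 3, CV_64FC1);
        cvInvert(H, Hinv, CV_LU);           /* map canvas coords back into src */
        cvZero(dst);                        /* clear the canvas                */

        for (int y = 0; y < dst->height; y++) {
            for (int x = 0; x < dst->width; x++) {
                /* apply Hinv to (x, y, 1) and divide by the projective w */
                double sx = cvmGet(Hinv,0,0)*x + cvmGet(Hinv,0,1)*y + cvmGet(Hinv,0,2);
                double sy = cvmGet(Hinv,1,0)*x + cvmGet(Hinv,1,1)*y + cvmGet(Hinv,1,2);
                double sw = cvmGet(Hinv,2,0)*x + cvmGet(Hinv,2,1)*y + cvmGet(Hinv,2,2);
                sx /= sw;  sy /= sw;

                int x0 = (int)floor(sx), y0 = (int)floor(sy);
                if (x0 < 0 || y0 < 0 || x0 + 1 >= src->width || y0 + 1 >= src->height)
                    continue;               /* falls outside the source image */

                double fx = sx - x0, fy = sy - y0;
                for (int c = 0; c < src->nChannels; c++) {
                    double p00 = CV_IMAGE_ELEM(src, uchar, y0,     x0*src->nChannels + c);
                    double p10 = CV_IMAGE_ELEM(src, uchar, y0,     (x0+1)*src->nChannels + c);
                    double p01 = CV_IMAGE_ELEM(src, uchar, y0 + 1, x0*src->nChannels + c);
                    double p11 = CV_IMAGE_ELEM(src, uchar, y0 + 1, (x0+1)*src->nChannels + c);
                    double v = (1 - fy) * ((1 - fx) * p00 + fx * p10)
                             +      fy  * ((1 - fx) * p01 + fx * p11);
                    CV_IMAGE_ELEM(dst, uchar, y, x*dst->nChannels + c) = (uchar)(v + 0.5);
                }
            }
        }
        cvReleaseMat(&Hinv);
    }

For the blending itself you would accumulate weighted contributions per output pixel instead of writing a single source's value directly.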

zerm