How do you detect the location of an image within a larger image? I have an unmodified copy of the image. This image is then changed to an arbitrary resolution and placed randomly within a much larger image which is of an arbitrary size. No other transformations are conducted on the resulting image. Python code would be ideal, and it would probably require libgd. If you know of a good approach to this problem you'll get a +1.

+2  A: 

http://en.wikipedia.org/wiki/Autocorrelation is my first instinct.

Ofir
From the OP: "This image is then changed to an arbitrary resolution" ... this breaks correlation algorithms. Correlation can deal with amplitude scaling but not axis scaling.
Mike D.
+2  A: 

You probably want cross-correlation. (Autocorrelation is correlating a signal with itself; cross-correlation is correlating two different signals.)

What correlation does for you, over simply checking for exact matches, is that it will tell you where the best matches are and how good they are. The flip side is that, for a 2-D picture, a naive implementation is roughly O(N^2 * M^2) for an N×N image and an M×M template (FFT-based implementations are much faster), and it's not that simple an algorithm. But it's magic once you get it to work.
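To make that concrete, here is a minimal sketch of such a search, assuming OpenCV and grayscale images rather than libgd; the file names are just placeholders, and plain correlation like this assumes the copy has not been rescaled.

    # Normalized cross-correlation of a template against a larger image.
    import cv2

    def find_by_cross_correlation(large_path, template_path):
        large = cv2.imread(large_path, cv2.IMREAD_GRAYSCALE)
        template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)

        # Correlate the template against every possible placement in the image.
        response = cv2.matchTemplate(large, template, cv2.TM_CCORR_NORMED)

        # The peak of the response map is the best candidate location.
        _, best_score, _, best_loc = cv2.minMaxLoc(response)
        return best_loc, best_score   # (x, y) of the top-left corner, similarity

    location, score = find_by_cross_correlation("large.png", "original.png")
    print("best match at", location, "with score", score)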

EDIT: Aargh, you specified an arbitrary resize. That's going to break any correlation-based algorithm. Sorry, you're outside my experience now and SO won't let me delete this answer.

Mike D.
You could always try cross correlation at multiple resolutions...
Mark E
He said "arbitrary". Trying this at multiple resolutions (since scale factors are rarely integers) would require an absurd amount of time: you'd have to do it for something like every integer image width and height. I've tried (one-dimensional) cross-correlation with noise-like waveforms (with a strong autocorrelation) with a Doppler shift applied to the input signal. The correlation disappears *fast*; stretch or contract the signal by a couple samples and the correlation peak falls into the noise. The standard approach there is to make a bank of correlators. 1024 of them is not unusual.
Mike D.
+3  A: 

There is a quick and dirty solution: simply slide a window over the target image, compute some measure of similarity at each location, and pick the location with the highest similarity. Then compare that similarity to a threshold: if the score is above the threshold, you conclude the image is there and that's its location; if it's below the threshold, the image isn't there.

As a similarity measure, you can use normalized correlation or sum of squared differences (aka L2 norm). As people mentioned, this will not deal with scale changes. So you also rescale your original image multiple times and repeat the process above with each scaled version. Depending on the size of your input image and the range of possible scales, this may be good enough, and it's easy to implement.
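A rough sketch of that rescale-and-slide idea, again assuming OpenCV; the scale range, step and acceptance threshold below are arbitrary guesses that would need tuning:

    # Multi-scale template matching: rescale the original image over a
    # range of scales and run normalized correlation at each scale.
    import cv2
    import numpy as np

    def multi_scale_match(large_path, template_path,
                          scales=np.linspace(0.2, 2.0, 37), threshold=0.8):
        large = cv2.imread(large_path, cv2.IMREAD_GRAYSCALE)
        template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)

        best = None   # (score, (x, y), scale)
        for s in scales:
            resized = cv2.resize(template, None, fx=s, fy=s,
                                 interpolation=cv2.INTER_AREA)
            # Skip scales where the template no longer fits inside the image.
            if resized.shape[0] > large.shape[0] or resized.shape[1] > large.shape[1]:
                continue
            response = cv2.matchTemplate(large, resized, cv2.TM_CCOEFF_NORMED)
            _, score, _, loc = cv2.minMaxLoc(response)
            if best is None or score > best[0]:
                best = (score, loc, s)

        if best is None or best[0] < threshold:
            return None   # probably not present
        return best       # (similarity, top-left corner, matching scale)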

A proper solution is to use affine invariants. Try looking up "wide-baseline stereo matching"; people have studied this problem in that context. The methods used generally go something like this:

Preprocessing of the original image

  • Run an "interest point detector". This will find a few points in the image which are easily localizable, e.g. corners. There are many detectors; one called "Harris-Affine" works well and is quite popular (so implementations probably exist). Another option is the Difference-of-Gaussians (DoG) detector; it was developed for SIFT and also works well.
  • At each interest point, extract a small sub-image (e.g. 30x30 pixels)
  • For each sub-image, compute a "descriptor", some representation of the image content in that window. Again, many descriptors exist. Things to look at are how well the descriptor describes the image content (you want two descriptors to match only if the underlying image patches are similar) and how invariant it is (you want it to stay the same even after scaling). In your case, I'd recommend SIFT. It is not as invariant as some other descriptors, but it copes with scale well, and in your case scale is the only thing that changes.

At the end of this stage, you will have a set of descriptors.
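As a small sketch of this preprocessing stage, assuming OpenCV's SIFT implementation (cv2.SIFT_create in recent OpenCV releases; older versions ship it in opencv-contrib-python) and a placeholder file name:

    # Detect DoG interest points and compute a SIFT descriptor at each one.
    import cv2

    def describe(image_path):
        image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        sift = cv2.SIFT_create()
        keypoints, descriptors = sift.detectAndCompute(image, None)
        return keypoints, descriptors

    # Descriptors of the unmodified original image:
    original_keypoints, original_descriptors = describe("original.png")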

Testing (with the new test image).

  • First, you run the same interest point detector as in step 1 and get a set of interest points. You compute the same descriptor for each point, as above. Now you have a set of descriptors for the target image as well.
  • Next, you look for matches. Ideally, to each descriptor from your original image, there will be some pretty similar descriptor in the target image. (Since the target image is larger, there will also be "leftover" descriptors, i.e. points that don't correspond to anything in the original image.) So if enough of the original descriptors match with enough similarity, then you know the target is there. Moreover, since the descriptors are location-specific, you will also know where in the target image the original image is.
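A sketch of that matching stage, again assuming OpenCV's SIFT; the ratio-test constant, minimum match count and RANSAC threshold are guesses:

    # Match SIFT descriptors between the original and the target image,
    # then recover the location (and scale) from the matched point pairs.
    import cv2
    import numpy as np

    def locate(original_path, target_path, min_matches=10):
        original = cv2.imread(original_path, cv2.IMREAD_GRAYSCALE)
        target = cv2.imread(target_path, cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, desc1 = sift.detectAndCompute(original, None)
        kp2, desc2 = sift.detectAndCompute(target, None)

        # For each original descriptor, find its two nearest neighbours in
        # the target and keep it only if the best is clearly better than
        # the runner-up (Lowe's ratio test).
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(desc1, desc2, k=2)
                if m.distance < 0.75 * n.distance]

        if len(good) < min_matches:
            return None   # too few agreeing descriptors: probably not present

        # The matched coordinates give the location; a RANSAC homography
        # also recovers the scaling.
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H   # maps original-image coordinates into the target image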
+2  A: 

Take a look at Scale-Invariant Feature Transforms; there are many different flavors that may be more or less tailored to the type of images you happen to be working with.

awesomo
+1  A: 

Reduce the colors in both images, turn the edges into vectors, then find similar vectors within both images. This will probably allow you to find both location and scale.

I have absolutely no idea how to code this all up, though.
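Purely as an assumption-heavy starting point (and not necessarily what was meant above): Canny edges plus OpenCV's scale-invariant contour matching loosely approximate "turning the edges into vectors and finding similar vectors". This assumes OpenCV 4's two-value findContours return, and the thresholds are guesses:

    # Compare edge contours of the original against contours found in the
    # larger image; cv2.matchShapes compares Hu-moment shape signatures and
    # is scale-invariant, so a good match hints at both location and scale.
    import cv2

    def contours_of(image_path):
        image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        edges = cv2.Canny(image, 50, 150)   # thresholds are guesses
        found, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                    cv2.CHAIN_APPROX_SIMPLE)
        # Drop tiny contours; they match almost anything.
        return [c for c in found if cv2.contourArea(c) > 20]

    def match_edge_contours(original_path, target_path):
        original_contours = contours_of(original_path)
        target_contours = contours_of(target_path)
        if not original_contours or not target_contours:
            return []

        matches = []
        for c1 in original_contours:
            # Lower matchShapes scores mean more similar shapes.
            best = min(target_contours,
                       key=lambda c2: cv2.matchShapes(c1, c2,
                                                      cv2.CONTOURS_MATCH_I1, 0))
            x, y, w, h = cv2.boundingRect(best)
            matches.append(((x, y), (w, h)))
        return matches   # candidate locations and sizes in the larger image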

Ignacio Vazquez-Abrams