TinEye, the "reverse image search engine", lets you upload or link to an image; it then searches through the billion-plus images it has crawled and returns links to copies of that same image that it has found.

However, it isn't doing a naive checksum or anything like that. It is often able to find versions of the image at both higher and lower resolutions, and at larger and smaller sizes, than the original you supply. That's a great use for the service, because I often find an image and want the highest-resolution version of it possible.

Not only that, but I've had it find images of the same image set, where the people in the image are in a different position but the background largely stays the same.

What type of algorithm could TinEye be using that would allow it to compare an image with others of various sizes and compression ratios and yet still accurately figure out that they are the "same" image or set?

A: 

They may well be doing a Fourier transform to characterize the complexity of the image, along with a histogram to characterize the chromatic distribution, paired with a region-categorization algorithm to make sure that similarly complex and similarly colored images don't get wrongly paired. I don't know if that's what they're using, but it seems like it would do the trick.
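
As an illustration of the histogram half of that idea, here is a minimal sketch that compares two images by the overlap of their coarse color histograms; the library choices (Pillow, NumPy) and the bin count are my own assumptions, not anything TinEye is known to use:

    # Sketch: compare images by chromatic distribution (color histogram).
    import numpy as np
    from PIL import Image

    def color_histogram(path, bins=8):
        """Coarse RGB histogram, normalized so images of any size compare."""
        img = np.asarray(Image.open(path).convert("RGB"))
        hist, _ = np.histogramdd(img.reshape(-1, 3),
                                 bins=(bins, bins, bins),
                                 range=((0, 256),) * 3)
        return hist.ravel() / hist.sum()

    def histogram_similarity(path_a, path_b):
        """Histogram intersection: 1.0 means identical distributions."""
        a, b = color_histogram(path_a), color_histogram(path_b)
        return float(np.minimum(a, b).sum())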

McWafflestix
Fourier transforms would most likely be useless, since natural images (i.e. photos) basically have the same frequency content (i.e. the same magnitude spectrum; only the phase differs). The rest sounds reasonable, though.
kigurai
+6  A: 

It's probably based on improvements to feature-extraction algorithms that take advantage of scale-invariant features.

Take a look at

or, if you are REALLY interested, you can shell out some 70 bucks (or at least look at the Google preview) for
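
To get a concrete feel for scale-invariant feature matching, here is a minimal sketch using OpenCV's SIFT implementation (available in the main opencv-python module since version 4.4, after the patent expired); it illustrates the general technique only and is not a claim about TinEye's pipeline:

    # Sketch: match two images by counting good SIFT correspondences.
    import cv2

    def count_sift_matches(path_a, path_b, ratio=0.75):
        img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
        img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
        sift = cv2.SIFT_create()
        _, desc_a = sift.detectAndCompute(img_a, None)
        _, desc_b = sift.detectAndCompute(img_b, None)
        # Lowe's ratio test: keep a match only if it is clearly better
        # than the second-best candidate for the same keypoint.
        matches = cv2.BFMatcher().knnMatch(desc_a, desc_b, k=2)
        good = [m for m, n in matches if m.distance < ratio * n.distance]
        return len(good)  # more matches -> more likely the same scene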

Vinko Vrsalovic
+1, SIFT is used quite often.
Igor Krivokon
Yep, SIFT; it's under heavy patent protection, though. Be aware of that for commercial development.
jilles de wit
+17  A: 

These algorithms are usually fingerprint-based. A fingerprint is a reasonably small data structure, something like a long hash code. However, the goals of a fingerprint function are the opposite of the goals of a hash function: a good hash function should generate very different codes for very similar (but not equal) objects, while the fingerprint function should, on the contrary, generate the same fingerprint for similar images.

Just to give you an example, here is a (not particularly good) fingerprint function: resize the picture to a 32x32 square, then normalize and quantize the colors, reducing their number to something like 256. Then you have a 1024-byte fingerprint for the image. Just keep a table of fingerprint => [list of image URLs]. When you need to look up images similar to a given image, just calculate its fingerprint value and find the corresponding image list. Easy.
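
A minimal sketch of that toy fingerprint, assuming Pillow and NumPy and a simple 3-3-2 bit color quantization (the normalization step is omitted for brevity):

    # Sketch: the naive 32x32 / 256-color fingerprint described above.
    from collections import defaultdict
    import numpy as np
    from PIL import Image

    def fingerprint(path):
        """32x32 thumbnail, colors quantized to 256 levels -> 1024 bytes."""
        pixels = np.asarray(Image.open(path).convert("RGB").resize((32, 32)))
        # Quantize each pixel to one byte: 3 bits red, 3 green, 2 blue.
        r, g, b = pixels[..., 0] >> 5, pixels[..., 1] >> 5, pixels[..., 2] >> 6
        return ((r << 5) | (g << 2) | b).astype(np.uint8).tobytes()

    index = defaultdict(list)  # fingerprint -> [list of image URLs]

    def add_image(url, path):
        index[fingerprint(path)].append(url)

    def lookup(path):
        return index.get(fingerprint(path), [])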

What is not easy: to be useful in practice, the fingerprint function needs to be robust against crops, affine transforms, contrast changes, etc. The construction of good fingerprint functions is a separate research topic. Quite often they are hand-tuned and use a lot of heuristics (i.e. knowledge about typical photo contents, about the image format / additional data in EXIF, etc.).

Another variation is to use more than one fingerprint function, apply each of them, and combine the results. Actually, it's similar to finding similar texts: instead of a "bag of words", the image similarity search uses a "bag of fingerprints" and finds how many elements from one bag also appear in the other. How to make this search efficient is another topic.
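
A sketch of that "bag of fingerprints" comparison, where each image is represented by a set of local fingerprints produced by whatever fingerprint function you choose (the Jaccard-overlap scoring here is my own choice, for illustration):

    # Sketch: score image similarity by overlap of fingerprint bags,
    # analogous to bag-of-words similarity for text.
    def bag_similarity(fps_a, fps_b):
        """Jaccard overlap between two bags (sets) of fingerprints."""
        a, b = set(fps_a), set(fps_b)
        return len(a & b) / len(a | b) if (a | b) else 0.0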

Now, regarding articles/papers: I couldn't find a good article that gives an overview of the different methods. Most of the public articles I know discuss a specific improvement to a specific method. I can recommend checking these:

"Content Fingerprinting Using Wavelets". This article is about audio fingerprinting using wavelets, but the same method can be adapted for image fingerprinting.

"Permutation Grouping: Intelligent Hash Function Design for Audio & Image Retrieval". Info on locality-sensitive hashes.

"Bundling Features for Large Scale Partial-Duplicate Web Image Search". A very good article that talks about SIFT and about bundling features for efficiency. It also has a nice bibliography at the end.

Igor Krivokon
@Igor, good answer! Do you have any links or resources you can provide on fingerprint algorithms other than the one you mentioned in your post?
Simucal
+1. Do you have any references to relevant papers? I've found a few but I don't know how good they are.
Vinko Vrsalovic
@Simucal: I will update my answer with some links to articles.
Igor Krivokon
@Vinko: actually, the book that you mentioned seems to be good (but I haven't read it).
Igor Krivokon
A: 

Check out this blog post (not mine) for a very understandable description of an algorithm that seems to get good results for how simple it is. It basically partitions the two pictures into a very coarse grid, sorts the grid cells by their red:blue and green:blue ratios, and checks whether the two sort orders are the same. This naturally works for color images only.
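
Here is a rough sketch of that scheme as described; the grid size, the use of rank order rather than raw ratios, and the Pillow/NumPy choices are all my reading of the post, not the blog author's exact code:

    # Sketch: coarse-grid red:blue and green:blue rank signature.
    import numpy as np
    from PIL import Image

    def cell_rank_signature(path, grid=4):
        img = np.asarray(Image.open(path).convert("RGB")
                         .resize((grid * 16, grid * 16)), dtype=float) + 1.0
        cells = img.reshape(grid, 16, grid, 16, 3).mean(axis=(1, 3))
        rb = (cells[..., 0] / cells[..., 2]).ravel()  # red:blue per cell
        gb = (cells[..., 1] / cells[..., 2]).ravel()  # green:blue per cell
        # Comparing cell *orderings* rather than raw ratios makes the
        # signature insensitive to resolution and overall brightness.
        return tuple(np.argsort(rb)), tuple(np.argsort(gb))

    def probably_same(path_a, path_b):
        return cell_rank_signature(path_a) == cell_rank_signature(path_b)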

The pros most likely get better results using vastly more advanced algorithms. As mentioned in the comments on that blog, a leading approach seems to be wavelets.

John Y
That's an interesting blog post, but I wouldn't take the algorithm it uses as advice on the topic.
Igor Krivokon
+4  A: 

The Hough transform is a very old feature-extraction algorithm that you might find interesting. I doubt it's what TinEye uses, but it's a good, simple starting place for learning about feature extraction.
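
For reference, here is a minimal Hough-transform example using OpenCV (the file name and thresholds are arbitrary placeholders); it simply detects straight lines in an edge map, which is the classic use of the transform:

    # Sketch: detect lines with the Hough transform (OpenCV).
    import cv2
    import numpy as np

    img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    edges = cv2.Canny(img, 50, 150)
    # Each result is (rho, theta) in Hough space; a peak there means a
    # line supported by many edge pixels in the image.
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 200)
    print(0 if lines is None else len(lines), "lines found")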

There are also slides from a neat talk by some University of Toronto folks about their work at astrometry.net. They developed an algorithm for matching telescope images of the night sky to locations in star catalogs in order to identify the features in the image. It's a more specific problem than the one TinEye tries to solve, but I'd expect that a lot of the basic ideas they talk about are applicable to the more general problem.

Boojum
A: 

What about resizing the pictures to a standard small size and checking SSIM or luma-only PSNR values? That's what I would do.
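
A quick sketch of that approach using luma-only PSNR (SSIM would need an extra library such as scikit-image); the thumbnail size and the "same image" threshold are arbitrary choices of mine:

    # Sketch: downscale both images and compare luma-only PSNR.
    import numpy as np
    from PIL import Image

    def luma_psnr(path_a, path_b, size=(64, 64)):
        a = np.asarray(Image.open(path_a).convert("L").resize(size), float)
        b = np.asarray(Image.open(path_b).convert("L").resize(size), float)
        mse = np.mean((a - b) ** 2)
        if mse == 0:
            return float("inf")  # identical thumbnails
        return 10 * np.log10(255.0 ** 2 / mse)  # PSNR in decibels

    # e.g. treat luma_psnr(a, b) > 30 dB as "probably the same picture"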

Camilo Martin
+2  A: 

http://tineye.com/faq#how

Based on this, Igor Krivokon's answer seems to be on the mark.

j_random_hacker
Why the downvote? The question was "What algorithm does TinEye use?" I just mentioned that the TinEye FAQ backs up Igor's answer.
j_random_hacker
A: 

I'd go for the SIFT algorithm. I used it during my graduation project to recognize cars from different angles. However, I wasn't able to get results that were all that good!

Pieter Hoogestijn