Does anyone have an idea of what sort of algorithm Google might be using to find similar images?
Currently, Google Image Search provides these filtering options:
- Image size
- Face detection
- Continuous tone ("Photo") vs. smooth shading ("Clipart") vs. bitonal ("Line drawing")
- Color histogram (a rough sketch of this kind of comparison follows below)
These options can be seen on the image search results page.
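For the color-histogram option, here is a minimal sketch of what a histogram-based comparison could look like. It assumes Pillow and NumPy; the histogram-intersection metric and the file names are my own choices for illustration, not anything known about Google's actual method.

```python
# Minimal sketch: compare two images by their normalized RGB color histograms.
# Assumes Pillow and NumPy; the metric (histogram intersection) is illustrative.
from PIL import Image
import numpy as np

def color_histogram(path, bins=8):
    """Normalized 3-D color histogram with `bins` bins per RGB channel."""
    pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins), range=((0, 256),) * 3)
    return hist.flatten() / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical color distributions."""
    return float(np.minimum(h1, h2).sum())

# Hypothetical usage:
# score = histogram_intersection(color_histogram("a.jpg"), color_histogram("b.jpg"))
```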
I don't know about faces, but see at least:
- http://www.incm.cnrs-mrs.fr/LaurentPerrinet/Publications/Perrinet08spie
- http://stackoverflow.com/questions/1927660/compare-two-images-the-python-linux-way
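The second link is about comparing two images pixel by pixel. As a rough sketch of that kind of baseline (assuming Pillow, and that both images have the same dimensions), a root-mean-square pixel difference can be computed like this; 0 means identical, larger means more different:

```python
# Rough sketch of a pixel-level comparison (RMS difference), assuming both
# images are the same size. This is a common baseline, not Google's method.
import math
from PIL import Image, ImageChops

def rms_difference(path_a, path_b):
    a = Image.open(path_a).convert("RGB")
    b = Image.open(path_b).convert("RGB")
    diff = ImageChops.difference(a, b)        # per-pixel absolute difference
    hist = diff.histogram()                   # 256 bins per channel, concatenated
    total = sum(count * (idx % 256) ** 2 for idx, count in enumerate(hist))
    return math.sqrt(total / (a.size[0] * a.size[1] * 3))
```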
I have heard that one should use this when comparing images (I mean: build the probability model, calculate the probabilities, and use this):
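The measure being referred to is missing above; purely as an illustration of "build a probability model, compute the probabilities, compare them", here is a Kullback-Leibler divergence between normalized grey-level histograms. The choice of KL divergence, and of a 64-bin grey-level histogram as the probability model, are my assumptions, not necessarily what the author meant.

```python
# Illustrative only: treat a normalized grey-level histogram as a probability
# model and compare two images with Kullback-Leibler divergence (an assumption;
# the measure referenced in the answer above is not shown).
from PIL import Image
import numpy as np

def grey_distribution(path, bins=64):
    """Probability model: smoothed, normalized grey-level histogram."""
    img = np.asarray(Image.open(path).convert("L"))
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return (hist + 1.0) / (hist.sum() + bins)   # Laplace smoothing avoids log(0)

def kl_divergence(p, q):
    """D(p || q) >= 0; equals 0 only when the two distributions are identical."""
    return float(np.sum(p * np.log(p / q)))
```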
Or it might even be one of those PCFG approaches that MIT people tend to use in robotics. One I read used a PCFG model made of basic shapes (that you can rotate magically) and searched for the best match with
I'm not sure this has much to do with image processing. When I ask for "similar images" of the Eiffel Tower, I get a bunch of photos of Paris Hilton and street maps of Paris. Curiously, all of these images have the word "Paris" in the file name.
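To make that observation concrete: if the "similarity" were driven by text in the file name rather than by the pixels, even a toy keyword overlap would group such results together. The file names below are made up for the example.

```python
# Toy illustration of file-name based "similarity": Jaccard overlap of tokens.
import re

def name_tokens(filename):
    return set(re.split(r"[^a-z]+", filename.lower())) - {""}

def name_overlap(a, b):
    """Jaccard overlap of file-name tokens, in [0, 1]."""
    ta, tb = name_tokens(a), name_tokens(b)
    return len(ta & tb) / len(ta | tb)

# name_overlap("eiffel_tower_paris.jpg", "paris_hilton_01.jpg") -> 0.4
# (the shared tokens are "paris" and the "jpg" extension)
```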