I'm trying to calculate the similarity (read: Levenshtein distance) of two images, using Python 2.6 and PIL.
I plan to use the python-levenshtein library for fast comparison.
Main question:
What is a good strategy for comparing images? My idea is something like this (see the sketch after the list):
- Convert to RGB (transparent -> white) (or maybe convert to monochrome?)
- Scale up the smaller one to the larger one's size
- Convert each channel (= the only channel, if converted to monochrome) to a sequence (item value = color value of the pixel)
- Calculate the Levenshtein distance between the two sequences
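In code, I imagine it would look roughly like this (untested sketch, assuming classic PIL plus the python-Levenshtein package; the helper names are just mine):

```
from PIL import Image
import Levenshtein

def to_grayscale_on_white(img):
    """Composite any transparency onto white, then reduce to a single 8-bit channel."""
    if img.mode in ('RGBA', 'LA') or (img.mode == 'P' and 'transparency' in img.info):
        img = img.convert('RGBA')
        background = Image.new('RGBA', img.size, (255, 255, 255, 255))
        background.paste(img, mask=img.split()[-1])  # use the alpha band as the paste mask
        img = background
    return img.convert('L')

def flatten_to_string(img):
    """One character per pixel, so python-Levenshtein can consume it as a sequence."""
    return ''.join(chr(p) for p in img.getdata())

def image_distance(path_a, path_b):
    a = to_grayscale_on_white(Image.open(path_a))
    b = to_grayscale_on_white(Image.open(path_b))
    # scale the smaller image up to the larger one's size
    if a.size[0] * a.size[1] < b.size[0] * b.size[1]:
        a = a.resize(b.size)
    else:
        b = b.resize(a.size)
    return Levenshtein.distance(flatten_to_string(a), flatten_to_string(b))
```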
Of course, this will not handle cases like mirrored images, cropped images, etc. But for basic comparison, this should be useful.
Is there a better strategy documented somewhere?
EDIT: Aaron H is right about the speed issue. Calculating the Levenshtein distance takes practically forever for images bigger than a few hundred by a few hundred pixels. However, the difference between the results after downscaling to 100x100 and 200x200 is less than 1% in my example, so it might be wise to set a maximum image size of ~100px or so...
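Capping the size is one call per image with PIL (Image.ANTIALIAS is the classic PIL constant; newer Pillow calls it Image.LANCZOS):

```
SIZE = (100, 100)  # small enough that Levenshtein stays fast, close enough to the full-size result
a = a.resize(SIZE, Image.ANTIALIAS)
b = b.resize(SIZE, Image.ANTIALIAS)
```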
EDIT: Thanks PreludeAndFugue, that question is what I was looking for.
By the way, it seems the Levenshtein distance calculation can be optimized, but it is giving me some really bad results, perhaps because there are lots of redundant elements in the backgrounds. Got to look at some other algorithms.
EDIT: Root mean square deviation and peak signal-to-noise ratio seem to be two more options that are not very hard to implement and seemingly not very CPU-expensive. However, it seems I'm going to need some kind of context analysis for recognizing shapes, etc.
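For reference, both metrics come out to a few lines with NumPy, assuming the two images have already been converted to 'L' mode and resized to the same dimensions (untested sketch, helper name is mine):

```
import numpy

def rms_and_psnr(img_a, img_b):
    a = numpy.asarray(img_a, dtype=numpy.float64)
    b = numpy.asarray(img_b, dtype=numpy.float64)
    rms = numpy.sqrt(numpy.mean((a - b) ** 2))  # root mean square deviation
    psnr = 20 * numpy.log10(255.0 / rms) if rms > 0 else float('inf')  # PSNR with an 8-bit peak of 255
    return rms, psnr
```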
Anyway, thanks for all the links, and also for pointing out the direction towards NumPy/SciPy.