I'm trying to calculate the similarity (read: Levenshtein distance) of two images, using Python 2.6 and PIL.
I plan to use the python-Levenshtein library for fast comparison.
Main question:
What is a good strategy for comparing images? My idea is roughly this: downscale both images to a small common size, take their raw pixel data as strings, and compute the Levenshtein distance between those strings (see the sketch below).
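A minimal sketch of that idea, assuming PIL and the python-Levenshtein module; the greyscale conversion and the 100x100 target size are my own choices for illustration:

    # Sketch only: greyscale conversion and the 100x100 size are illustrative choices.
    import Levenshtein          # from the python-Levenshtein package
    from PIL import Image

    def image_similarity(path_a, path_b, size=(100, 100)):
        """Return a similarity score in [0, 1] based on Levenshtein.ratio()."""
        def pixel_string(path):
            img = Image.open(path).convert('L')   # greyscale
            img = img.resize(size)                # same dimensions for both images
            return img.tostring()                 # raw pixel bytes as a string (PIL / Python 2)
        return Levenshtein.ratio(pixel_string(path_a), pixel_string(path_b))

    print image_similarity('a.jpg', 'b.jpg')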
Of course, this will not handle cases like mirrored images, cropped images, etc. But for basic comparison, this should be useful.
Is there a better strategy documented somewhere?
EDIT: Aaron H is right about the speed issue. Calculating the Levenshtein distance takes practically forever for images bigger than a few hundred by a few hundred pixels. However, the difference between the results after downscaling to 100x100 and to 200x200 is less than 1% in my example, so it might be wise to set a maximum image size of ~100px or so...
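One way to apply such a cap is PIL's thumbnail(), which downscales in place and preserves the aspect ratio (the 100px limit below is just the figure from my tests):

    from PIL import Image

    MAX_SIZE = (100, 100)  # rough cap; tune against your own speed/accuracy trade-off

    def load_capped(path):
        """Open an image as greyscale, never larger than MAX_SIZE in either dimension."""
        img = Image.open(path).convert('L')
        img.thumbnail(MAX_SIZE, Image.ANTIALIAS)  # in-place downscale, keeps aspect ratio
        return img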
EDIT: Thanks PreludeAndFugue, that question is what I was looking for.
By the way, it seems the Levenshtein distance can be optimized, but it is giving me some really bad results, perhaps because there are lots of redundant elements in the backgrounds. I've got to look at some other algorithms.
EDIT: Root mean square deviation and peak signal-to-noise ratio seem to be two more options that are not very hard to implement and apparently not very CPU-expensive. However, it seems I'm going to need some kind of context analysis for recognizing shapes, etc.
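A quick sketch of both measures using NumPy (greyscale, a common 100x100 size, and the 8-bit peak value of 255 are my assumptions):

    import numpy as np
    from PIL import Image

    def rmsd_and_psnr(path_a, path_b, size=(100, 100)):
        """Root mean square deviation and peak signal-to-noise ratio of two images."""
        a = np.asarray(Image.open(path_a).convert('L').resize(size), dtype=float)
        b = np.asarray(Image.open(path_b).convert('L').resize(size), dtype=float)
        mse = np.mean((a - b) ** 2)
        rmsd = np.sqrt(mse)
        psnr = float('inf') if mse == 0 else 20 * np.log10(255.0 / rmsd)  # 255 = 8-bit peak
        return rmsd, psnr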
Anyway, thanks for all the links, and for pointing me towards NumPy/SciPy.
Check out imgSeek:
imgSeek is a collection of free open source visual similarity projects. The query (image you are looking for) can be expressed either as a rough sketch painted by the user or as another image you supply (or an image in your collection). The searching algorithm makes use of multiresolution wavelet decomposition of the query and database images.