Hello!

My task is pretty simple (well, it SOUNDS simple): given two different image files (in whatever format I choose), I need to write a program to predict the chance that one is an illegal copy of the other. The author of the copy may do things like rotating the image, making a negative, or adding trivial details (as well as changing its dimensions).

Do you know of any algorithm (AI, I guess) that can do this kind of job?

Thanks a bunch!

+2  A: 

How do you determine which one is the original?

John
Should be a comment.
Akusete
+14  A: 

Read the paper: Porikli, Fatih, Oncel Tuzel, and Peter Meer. “Covariance Tracking Using Model Update Based on Means on Riemannian Manifolds”. (2006) IEEE Computer Vision and Pattern Recognition.

I was successfully able to detect overlapping regions in images captured from adjacent webcams using the technique presented in this paper. My covariance matrix was composed of Sobel, Canny and SUSAN aspect/edge detection outputs, as well as the original greyscale pixels.
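For anyone who wants a feel for the idea before tackling the paper, here is a rough Python sketch of a region covariance descriptor. It is not the method from the paper (which compares covariances with a proper Riemannian metric and does tracking on top); the feature set, the placeholder filenames and the crude Frobenius-norm comparison are all assumptions of mine:

    import cv2
    import numpy as np

    def region_covariance(gray):
        """Covariance of per-pixel features over an image region (a rough sketch only)."""
        g = gray.astype(np.float32)
        ix = cv2.Sobel(g, cv2.CV_32F, 1, 0)          # horizontal gradient
        iy = cv2.Sobel(g, cv2.CV_32F, 0, 1)          # vertical gradient
        edges = cv2.Canny(gray, 100, 200).astype(np.float32)
        h, w = g.shape
        xs, ys = np.meshgrid(np.arange(w), np.arange(h))
        # one column of features per pixel: position, intensity, gradients, edge response
        feats = np.stack([xs.ravel(), ys.ravel(), g.ravel(),
                          ix.ravel(), iy.ravel(), edges.ravel()], axis=0)
        return np.cov(feats)

    a = cv2.imread("image_a.png", cv2.IMREAD_GRAYSCALE)   # placeholder filenames
    b = cv2.imread("image_b.png", cv2.IMREAD_GRAYSCALE)
    # crude comparison: Frobenius distance between the two 6x6 covariance matrices
    print(np.linalg.norm(region_covariance(a) - region_covariance(b)))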

Nick
The link seems broken.
Satoru.Logic
@Satoru Logic: google search shows hits on the paper: http://www.google.com/search?q=Porikli,+Fatih,+Oncel+Tuzel,+and+Peter+Meer.+%E2%80%9CCovariance+Tracking+Using+Model+Update+Based+on+Means+on+Riemannian+Manifolds%E2%80%9D.+(2006)+IEEE+Computer+Vision+and+Pattern+Recognition.
Nick
+7  A: 

It is indeed much less simple than it seems :-) Nick's suggestion is a good one.

To get started, keep in mind that any worthwhile comparison method will essentially work by converting the images into a different form -- a form which makes it easier to pick similar features out. Usually, this stuff doesn't make for very light reading ...


One of the simplest examples I can think of is simply using the color space of each image. If two images have highly similar color distributions, then you can be reasonably sure that they show the same thing. At least, you can have enough certainty to flag it, or do more testing. Comparing images in color space will also resist things such as rotation, scaling, and some cropping. It won't, of course, resist heavy modification of the image or heavy recoloring (and even a simple hue shift will be somewhat tricky).

http://en.wikipedia.org/wiki/RGB_color_space
http://upvector.com/index.php?section=tutorials&subsection=tutorials/colorspace
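To make the colour-distribution idea concrete, here is a rough Python/OpenCV sketch; the bin count, the correlation metric and the filenames are arbitrary assumptions, not a recommendation:

    import cv2

    def colour_histogram(path, bins=32):
        """Normalised 3D colour histogram of an image; independent of resolution."""
        img = cv2.imread(path)                       # BGR image
        hist = cv2.calcHist([img], [0, 1, 2], None,
                            [bins, bins, bins], [0, 256, 0, 256, 0, 256])
        return cv2.normalize(hist, hist).flatten()

    h1 = colour_histogram("original.jpg")
    h2 = colour_histogram("suspect.jpg")
    # correlation close to 1.0 suggests very similar colour distributions
    print(cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL))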


Another example involves something called the Hough Transform. This transform essentially decomposes an image into a set of lines. You can then take some of the 'strongest' lines in each image and see if they line up. You can do some extra work to try and compensate for rotation and scaling too -- and in this case, since comparing a few lines is MUCH less computational work than doing the same to entire images -- it won't be so bad.

http://homepages.inf.ed.ac.uk/amos/hough.html
http://rkb.home.cern.ch/rkb/AN16pp/node122.html
http://en.wikipedia.org/wiki/Hough_transform
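A rough Python/OpenCV sketch of the Hough-line idea; the Canny/Hough thresholds and the way the angles are compared are only guesses and would need tuning for real use:

    import cv2
    import numpy as np

    def dominant_angles(path, n=10):
        """Angles (theta) of the strongest straight lines found by the Hough transform."""
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        edges = cv2.Canny(gray, 50, 150)
        lines = cv2.HoughLines(edges, 1, np.pi / 180, 150)   # returned strongest-first
        if lines is None:
            return np.array([])
        return np.sort(lines[:n, 0, 1])

    a, b = dominant_angles("original.jpg"), dominant_angles("suspect.jpg")
    # crude comparison of the dominant line angles; a rotated copy would show the same
    # set of angles shifted by a constant offset, which you could search for separately
    if len(a) and len(a) == len(b):
        print(np.mean(np.abs(a - b)))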

shea241
+2  A: 

If you're willing to consider a different approach altogether to detecting illegal copies of your images, you could consider watermarking. (from 1.4)

...inserts copyright information into the digital object without the loss of quality. Whenever the copyright of a digital object is in question, this information is extracted to identify the rightful owner. It is also possible to encode the identity of the original buyer along with the identity of the copyright holder, which allows tracing of any unauthorized copies.

While it's also a complex field, there are techniques that allow the watermark information to persist through gross image alteration: (from 1.9)

... any signal transform of reasonable strength cannot remove the watermark. Hence a pirate willing to remove the watermark will not succeed unless they debase the document too much to be of commercial interest.

Of course, the FAQ calls implementing this approach "...very challenging", but if you succeed with it you get high confidence about whether the image is a copy or not, rather than a percentage likelihood.
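Not from the FAQ, but just to make the embed/extract idea concrete, here is a toy least-significant-bit watermark in Python (Pillow + NumPy). Be aware that this naive version is exactly the kind of mark that does NOT survive recompression or editing; the robust schemes the FAQ describes hide the mark in transform domains instead. The filenames and message are placeholders:

    import numpy as np
    from PIL import Image

    def embed(path_in, path_out, message):
        """Hide the message bits in the least significant bits of the pixel data."""
        img = np.array(Image.open(path_in).convert("RGB"))
        bits = np.unpackbits(np.frombuffer(message.encode(), dtype=np.uint8))
        flat = img.reshape(-1)                       # flat view over all channels
        flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
        Image.fromarray(img).save(path_out)          # must be a lossless format

    def extract(path, length):
        """Read `length` characters back out of the LSBs."""
        flat = np.array(Image.open(path).convert("RGB")).reshape(-1)
        return np.packbits(flat[:length * 8] & 1).tobytes().decode()

    embed("original.png", "marked.png", "owner:1234")
    print(extract("marked.png", len("owner:1234")))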

JeffH
Any more information on how watermarking persists after heavy editing? Sounds very interesting.
Tom Gullen
+3  A: 

This is just a suggestion, it might not work and I'm prepared to be called on this.

This will generate false positives, but hopefully not false negatives.

  1. Resize both of the images so that they are the same size (I assume that the width-to-height ratios are the same in both images).

  2. Compress a bitmap of both images with a lossless compression algorithm (e.g. gzip).

  3. Find pairs of files that have similar file sizes. For instance, you could just sort every pair of files you have by how similar the file sizes are and retrieve the top X.

As I said, this will definitely generate false positives, but hopefully not false negatives. You can implement this in five minutes (a quick sketch is below), whereas the Porikli et al. approach would probably require extensive work.
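A quick Python sketch of the recipe above (Pillow for the resizing, gzip for the lossless compression; the target size, the filenames and the 5% size tolerance are arbitrary assumptions):

    import gzip
    from PIL import Image

    def compressed_size(path, size=(256, 256)):
        """Size of the image's raw pixels after lossless compression (steps 1 and 2)."""
        img = Image.open(path).convert("RGB").resize(size)
        return len(gzip.compress(img.tobytes()))

    sizes = {p: compressed_size(p) for p in ["a.jpg", "b.jpg", "c.jpg"]}
    # step 3: flag pairs whose compressed sizes are within a few percent of each other
    paths = sorted(sizes)
    for i, p in enumerate(paths):
        for q in paths[i + 1:]:
            ratio = min(sizes[p], sizes[q]) / max(sizes[p], sizes[q])
            if ratio > 0.95:
                print("possible match:", p, q)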

Ow01
I like this solution a lot: easy to implement, and I believe it will yield a better-than-random identification rate.
Tom Gullen
This is a question: Does it work if the copy has been saved with a different resolution?
belisarius
+1  A: 

I believe if you're willing to apply the approach to every possible orientation and to negative versions, a good start to image recognition (with good reliability) is to use eigenfaces: http://en.wikipedia.org/wiki/Eigenface

Another idea would be to transform both images into vectors of their components. A good way to do this is to create a vector that operates in x*y dimensions (x being the width of your image and y being the height), with the value for each dimension applying to the (x,y) pixel value. Then run a variant of K-Nearest Neighbours with two categories: match and no match. If it's sufficiently close to the original image it will fit in the match category, if not then it won't.

K-Nearest Neighbours (KNN) is described here, and there are other good explanations of it on the web too: http://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm

The benefit of KNN is that the more variants you're comparing to the original image, the more accurate the algorithm becomes. The downside is you need a catalogue of images to train the system first.
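A minimal Python sketch of the pixel-vector + KNN idea using scikit-learn. It assumes you already have a labelled training set of 'match' / 'no match' images (which is the hard part) and that everything is resized to the same dimensions; the filenames and the working resolution are placeholders:

    import numpy as np
    from PIL import Image
    from sklearn.neighbors import KNeighborsClassifier

    def to_vector(path, size=(64, 64)):
        """Flatten a greyscale image into one point in a 64*64-dimensional space."""
        img = Image.open(path).convert("L").resize(size)
        return np.asarray(img, dtype=np.float32).ravel()

    # hypothetical training data: known copies of the original plus unrelated images
    train_paths = ["copy1.png", "copy2.png", "other1.png", "other2.png"]
    labels = ["match", "match", "no match", "no match"]

    knn = KNeighborsClassifier(n_neighbors=3)
    knn.fit(np.array([to_vector(p) for p in train_paths]), labels)
    print(knn.predict([to_vector("suspect.png")]))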

Nick Udell
A good idea, but only if faces are present in the data. Also, it identifies people, not situations. Therefore a professional actor who features in multiple publications would generate a lot of false positives.
Tom Gullen
Unless I misunderstand your intention of use
Tom Gullen
Actually I believe the algorithm works regardless of the subject, so if you were comparing trees it would also be useful. It just happens to be called Eigenfaces because it's classically associated with facial recognition. As long as the item to be searched for possesses the same overall features as the item you're comparing it with, it should still work.
Nick Udell
Too long to add to the previous comment: also, Eigenfaces compare the entire image, not just the faces in it. The examples on Wikipedia only use cropped faces because the traditional application is facial recognition, for which only the face is useful. If your actor appeared in different positions, the image would be flagged as different.
Nick Udell
Eigenfaces work well on faces because pictures of different facial expressions are very similar, and can be approximated as a low-dimensional linear subspace of the picture space. I can't see how this approximation could work for general affine transformations for any kind of picture.
nikie
My mistake, I must have misunderstood the way eigenfaces worked. I still hold that applying a high-dimensional mapping and K-Nearest Neighbours may help, depending on his specific application.
Nick Udell
I doubt applying KNN *directly* on the raw pixel values would help much, either. Small translations/rotations typically lead to huge differences in the raw pixels values, especially if the picture contains sharp contrasts or thin lines. So arbitrarily transformed versions of the same picture aren't really close to each other in that space (they don't fall into clusters), and KNN won't work very well. I guess it could work well on image histograms or some other transform-invariant representation of the image, though.
nikie
+45  A: 

These are simply ideas I've had while thinking about the problem; I've never tried them, but I like thinking about problems like this!

Before you begin

Consider normalising the pictures: if one is a higher resolution than the other, it may be that one of them is a compressed version of the other, so scaling the resolution down might provide more accurate results.

Consider scanning various prospective areas of the image that could represent zoomed portions of it, at various positions and rotations. It starts getting tricky if one of the images is a skewed version of another; these are the sorts of limitations you should identify and compromise on.

Matlab is an excellent tool for testing and evaluating images.

Testing the algorithms

You should test against (at the minimum) a large, human-analysed set of test data where matches are known beforehand. If, for example, your test data contains 1,000 images and 5% of them match, you now have a reasonably reliable benchmark. An algorithm that reports 10% positives is not as good as one that reports 4% positives on that data. However, one algorithm may find all the matches but also have a large 20% false-positive rate, so there are several ways to rate your algorithms.

The test data should attempt to be designed to cover as many types of dynamics as possible that you would expect to find in the real world.

It is important to note that, to be useful, each algorithm must perform better than random guessing; otherwise it is useless to us!

You can then apply your software to the real world in a controlled way and start to analyse the results it produces. This is the sort of software project that can go on ad infinitum; there are always tweaks and improvements you can make. It is important to bear that in mind when designing it, as it is easy to fall into the trap of the never-ending project.
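To make the benchmarking concrete, here is a small Python sketch of scoring one algorithm against a labelled test set; the toy data is made up, and 'predicted' would come from whatever algorithm you are rating:

    def score(predicted, actual):
        """predicted/actual are lists of booleans, one per candidate image pair."""
        tp = sum(p and a for p, a in zip(predicted, actual))        # correctly flagged
        fp = sum(p and not a for p, a in zip(predicted, actual))    # false alarms
        fn = sum(not p and a for p, a in zip(predicted, actual))    # missed copies
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return precision, recall

    # toy example: 10 pairs, 2 real matches, the algorithm flags 3 pairs
    actual    = [True, True, False, False, False, False, False, False, False, False]
    predicted = [True, False, True, True, False, False, False, False, False, False]
    # prints roughly (0.33, 0.5): a third of the flags are real, half the copies were found
    print(score(predicted, actual))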

Colour Buckets

With two pictures, scan each pixel and count the colours. For example you might have the 'buckets':

white
red
blue
green
black

(Obviously you would have a higher resolution of counters.) Every time you find a 'red' pixel, you increment the red counter. Each bucket can be representative of a spectrum of colours: the higher the resolution, the more accurate, but you should experiment with an acceptable difference rate.

Once you have your totals, compare them to the totals for a second image. You might find that each image has a fairly unique footprint, enough to identify matches.
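A rough Python sketch of the bucket idea (Pillow plus a plain dictionary; cutting each channel down to 4 levels gives 64 buckets, and that resolution is exactly the parameter to experiment with; the filenames are placeholders):

    from collections import Counter
    from PIL import Image

    def colour_buckets(path, levels=4):
        """Count pixels per coarse colour bucket, normalised so image size doesn't matter."""
        img = Image.open(path).convert("RGB")
        step = 256 // levels
        counts = Counter((r // step, g // step, b // step) for r, g, b in img.getdata())
        total = sum(counts.values())
        return {bucket: n / total for bucket, n in counts.items()}

    def difference(a, b):
        """Sum of absolute differences over all buckets (0 means identical footprints)."""
        return sum(abs(a.get(k, 0) - b.get(k, 0)) for k in set(a) | set(b))

    print(difference(colour_buckets("original.jpg"), colour_buckets("suspect.jpg")))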

Edge detection

How about using edge detection?

[image: example of an edge-detected picture]

With two similar pictures edge detection should provide you with a usable and fairly reliable unique footprint.

Take both pictures, and apply edge detection. Maybe measure the average thickness of the edges and then calculate the probability the image could be scaled, and rescale if necessary. Below is an example of an applied Gabor Filter (a type of edge detection) in various rotations.

[image: a Gabor filter applied at various rotations]

Compare the pictures pixel for pixel, and count the matches and the non-matches. If they are within a certain threshold of error, you have a match. Otherwise, you could try reducing the resolution up to a certain point and see if the probability of a match improves.
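A hedged Python/OpenCV sketch of that comparison, using Canny rather than a Gabor bank simply because it is a single call; the thresholds, the working resolution and the 90% cut-off are guesses:

    import cv2
    import numpy as np

    def edge_map(path, size=(256, 256)):
        """Binary edge 'footprint' of an image at a fixed resolution."""
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        gray = cv2.resize(gray, size)
        return cv2.Canny(gray, 50, 150) > 0

    a = edge_map("original.jpg")
    b = edge_map("suspect.jpg")
    # pixel-for-pixel agreement between the two edge maps
    agreement = np.mean(a == b)
    print("match" if agreement > 0.9 else "no match", agreement)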

Regions of Interest

Some images may have distinctive segments/regions of interest. These regions probably contrast highly with the rest of the image, and are a good item to search for in your other images to find matches. Take this image for example:

[image: photograph of a construction site with a worker in blue]

The construction worker in blue is a region of interest and can be used as a search object. There are probably several ways you could extract properties/data from this region of interest and use them to search your data set.

If you have more than 2 regions of interest, you can measure the distances between them. Take this simplified example:

[image: simplified example showing three regions of interest]

We have 3 clear regions of interest. The distance between region 1 and 2 may be 200 pixels, between 1 and 3 400 pixels, and 2 and 3 200 pixels.

Search other images for similar regions of interest, normalise the distance values and see if you have potential matches. This technique could work well for rotated and scaled images. The more regions of interest you have, the more the probability of a match increases as each distance measurement matches.

It is important to think about the context of your data set. If for example your data set is modern art, then regions of interest would work quite well, as regions of interest were probably designed to be a fundamental part of the final image. If however you are dealing with images of construction sites, regions of interest may be interpreted by the illegal copier as ugly and may be cropped/edited out liberally. Keep in mind common features of your dataset, and attempt to exploit that knowledge.
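A rough Python sketch of the distance-between-regions idea. It assumes you already have region centroids from some detector; only the normalisation and the comparison are shown, and the example coordinates are made up:

    import itertools
    import numpy as np

    def normalised_distances(centroids):
        """Pairwise distances between regions of interest, divided by the largest,
        so the signature is unchanged by uniform scaling."""
        pts = np.array(centroids, dtype=float)
        d = sorted(np.linalg.norm(p - q) for p, q in itertools.combinations(pts, 2))
        return np.array(d) / d[-1]

    # hypothetical region centroids found in two images (pixel coordinates)
    image1 = [(120, 80), (320, 80), (220, 440)]
    image2 = [(60, 40), (160, 40), (110, 220)]       # same layout at half the scale

    a, b = normalised_distances(image1), normalised_distances(image2)
    print(np.max(np.abs(a - b)) < 0.05)              # True: the layouts match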

Morphing

Morphing two images is the process of turning one image into the other through a set of steps:

[image: example of a morph sequence between two images]

Note, this is different to fading one image into another!

There are many software packages that can morph images. It's traditionally used as a transitional effect: the two images don't usually morph into something halfway; one extreme morphs into the other extreme as the final result.

Why could this be useful? Depending on the morphing algorithm you use, there may be a relationship between the similarity of the images and some parameters of the morphing algorithm.

In a grossly oversimplified example, one algorithm might execute faster when there are fewer changes to be made. We then know there is a higher probability that the two images share properties with each other.

This technique could work well for rotated, distorted, skewed and zoomed images: all types of copies. Again, this is just an idea I have had; it's not based on any published research as far as I am aware (I haven't looked hard, though), so it may be a lot of work for you with limited/no results.

Zipping

Ow's answer in this question is excellent; I remember reading about these sorts of techniques while studying AI. They are quite effective at comparing corpus lexicons.

One interesting optimisation when comparing corpora is that you can remove words considered too common, for example 'the', 'a', 'and', etc. These words dilute the result; we want to work out how different the two corpora are, so they can be removed before processing. Perhaps there are similar common signals in images that could be stripped before compression? It might be worth looking into.

Compression ratio is a very quick and reasonably effective way of determining how similar two sets of data are. Reading up on how compression works will give you a good idea of why this can be so effective. For a fast-to-release algorithm this would probably be a good starting point.
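One standard way to turn compression into a similarity score is the normalised compression distance: compress each image alone and the two concatenated, and see how little the second one adds. A small Python sketch with zlib (values near 0 suggest near-identical data, values near 1 suggest unrelated data; the fixed working size and filenames are assumptions):

    import zlib
    from PIL import Image

    def raw_pixels(path, size=(256, 256)):
        """Decoded pixel data at a fixed size, so the file format doesn't matter."""
        return Image.open(path).convert("L").resize(size).tobytes()

    def ncd(x, y):
        """Normalised compression distance between two byte strings."""
        cx, cy, cxy = len(zlib.compress(x)), len(zlib.compress(y)), len(zlib.compress(x + y))
        return (cxy - min(cx, cy)) / max(cx, cy)

    print(ncd(raw_pixels("original.jpg"), raw_pixels("suspect.jpg")))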

Transparency

Again, I am unsure how transparency data is stored for certain image types (GIF, PNG, etc.), but it will be extractable and would serve as an effective simplified cut-out to compare with your data set's transparency.

Inverting Signals

An image is just a signal. If you play a noise from a speaker, and you play the opposite noise in another speaker in perfect sync at the exact same volume, they cancel each other out.

[image: two inverted signals cancelling each other out]

Invert one of the images, and add it onto your other image. Scale it and loop through positions repeatedly until you find a resulting image where enough of the pixels are white (or black? I'll refer to it as a neutral canvas) to give you a positive match, or a partial match.

However, consider two images that are equal, except one of them has a brighten effect applied to it:

[image: the same image with and without a brighten effect]

Inverting one of them, then adding it to the other, will not result in the neutral canvas we are aiming for. However, when comparing the pixels from both original images, we can definitely see a clear relationship between the two.

I haven't studied colour for some years now, and am unsure if the colour spectrum is on a linear scale, but if you determined the average factor of colour difference between the two pictures, you could use this value to normalise the data before processing with this technique.
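A small Python/NumPy sketch of the 'invert and add' idea: adding the inverse of one image to the other is the same thing as looking at 255 minus their difference, and the crude mean shift stands in for the colour normalisation mentioned above. The working size, the 10-level tolerance and the 90% cut-off are assumptions:

    import numpy as np
    from PIL import Image

    def load(path, size=(256, 256)):
        return np.asarray(Image.open(path).convert("L").resize(size), dtype=np.float32)

    a, b = load("original.jpg"), load("suspect.jpg")

    # crude normalisation for a global brightness change: remove the mean offset
    b = b - (b.mean() - a.mean())

    # "invert one image and add it to the other": identical images give a flat
    # canvas of 255, so measure how far each pixel is from that neutral value
    canvas = (255.0 - b) + a
    neutral = np.mean(np.abs(canvas - 255.0) < 10)   # fraction of near-neutral pixels
    print("likely match" if neutral > 0.9 else "no match", neutral)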

Tree Data structures

At first these don't seem to fit for the problem, but I think they could work.

You could think about extracting certain properties of an image (for example colour bins) and generating a Huffman tree or similar data structure. You might be able to compare two trees for similarity. This wouldn't work well for photographic data with a large spectrum of colour, but for cartoons or other reduced-colour-set images it might work.

This probably wouldn't work, but it's an idea. The trie data structure is great at storing lexicons, for example a dictionary; it's a prefix tree. Perhaps it's possible to build an image equivalent of a lexicon (again, I can only think of colours) to construct a trie. If you reduced, say, a 300x300 image into 5x5 squares, then decomposed each square into a sequence of colours, you could construct a trie from the resulting data. If a 2x2 square (to keep the example short) contains:

FFFFFF|000000|FDFD44|FFFFFF

We have a fairly unique trie code that extends 24 levels; increasing/decreasing the levels (i.e. reducing/increasing the size of our sub-square) may yield more accurate results.

Comparing tries should be reasonably easy, and could possibly provide effective results.
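A toy Python version of the trie idea, using nested dictionaries as the prefix tree; the block size, the working resolution and the 'count shared blocks' comparison are all assumptions, just to show the shape of it:

    from PIL import Image

    def block_codes(path, size=(300, 300), block=5):
        """Cut the image into block x block squares and turn each into a colour string."""
        img = Image.open(path).convert("RGB").resize(size)
        px = img.load()
        codes = []
        for y in range(0, size[1], block):
            for x in range(0, size[0], block):
                code = "".join("%02X%02X%02X" % px[x + dx, y + dy]
                               for dy in range(block) for dx in range(block))
                codes.append(code)
        return codes

    def build_trie(codes):
        """Insert every block string into a prefix tree of nested dictionaries."""
        root = {}
        for code in codes:
            node = root
            for ch in code:
                node = node.setdefault(ch, {})
            node["$"] = True                          # mark the end of one block string
        return root

    def contains(trie, code):
        node = trie
        for ch in code:
            if ch not in node:
                return False
            node = node[ch]
        return "$" in node

    trie = build_trie(block_codes("original.jpg"))
    shared = sum(contains(trie, c) for c in block_codes("suspect.jpg"))
    print(shared, "blocks in common")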

More ideas

I stumbled across an interesting paper brief about the classification of satellite imagery; it outlines:

Texture measures considered are: cooccurrence matrices, gray-level differences, texture-tone analysis, features derived from the Fourier spectrum, and Gabor filters. Some Fourier features and some Gabor filters were found to be good choices, in particular when a single frequency band was used for classification.

It may be worth investigating those measurements in more detail, although some of them may not be relevant to your data set.

Other things to consider

There are probably a lot of papers on this sort of thing, so reading some of them should help, although they can be very technical. It is an extremely difficult area in computing, with many fruitless hours of work spent by many people attempting to do similar things. Keeping it simple and building upon those ideas would be the best way to go. Creating an algorithm with a better-than-random match rate is a reasonably difficult challenge in itself, and improving on that really does start to get quite hard to achieve.

Each method would probably need to be tested and tweaked thoroughly. Any information you have about the type of picture you will be checking would also be useful. For example, many advertisements have text in them, so text recognition would be an easy and probably very reliable way of finding matches, especially when combined with other solutions. As mentioned earlier, attempt to exploit the common properties of your data set.

Combining alternative measurements and techniques, each of which can have a weighted vote (dependent on its effectiveness), would be one way you could create a system that generates more accurate results.

If you employ multiple algorithms, it would be of interest to study their properties, strengths and weaknesses: as mentioned at the beginning of this answer, one algorithm may find all the positives but have a 20% false-positive rate, while another algorithm may be effective at eliminating the false positives the first one returns.

Be careful not to fall into the trap of attempting to complete the never-ending project. Good luck!

Tom Gullen
Awesome response. Kudos for a well thought out and enlightening answer.
Andrew Hubbs
Thank you! I hope to expand on it tomorrow; I have some more ideas I'd like to think about and look up.
Tom Gullen
Hi Tom - do you know of any open-source edge detection libraries, pref in java?
Richard
Hi Richard, no, sorry, but I'm sure there are some out there. Search Google for "Java Gabor Filters" or "Java Edge Detection" and I'm sure you will come across one or two.
Tom Gullen
you can look at this library, it has java bindings: http://phash.org/
Alexander Kjäll
+2  A: 

You will need to use a watermarking scheme to embed a code into the image. To take a step back, as opposed to some of the low-level approaches (edge detection etc) suggested by some folks, a watermarking method is superior because:

It is resistant to Signal processing attacks ► Signal enhancement – sharpening, contrast, etc. ► Filtering – median, low pass, high pass, etc. ► Additive noise – Gaussian, uniform, etc. ► Lossy compression – JPEG, MPEG, etc.

It is resistant to Geometric attacks ► Affine transforms ► Data reduction – cropping, clipping, etc. ► Random local distortions ► Warping

Do some research on watermarking algorithms and you will be on the right path to solving your problem. (Note: you can benchmark your method using the STIRMARK dataset; it is an accepted standard for this type of application.)

nav
+4  A: 

An idea:

  1. use keypoint detection to find scale- and transform-invariant descriptors of some points in the image (e.g. SIFT, SURF, GLOH, or LESH).
  2. try to align keypoints with similar descriptors from both images (like in panorama stitching), allow for some image transforms if necessary (e.g. scale & rotate, or elastic stretching).
  3. if many keypoints align well (exists such a transform, that keypoint alignment error is low; or transformation "energy" is low, etc.), you likely have similar images.

Step 2 is not trivial. In particular, you may need to use a smart algorithm to find the most similar keypoint in the other image. Point descriptors are usually very high-dimensional (a hundred or so parameters), and there are many points to look through. kd-trees may be useful here; hash lookups don't work well. A rough sketch of the whole pipeline is below.

Variants:

  • Detect edges or other features instead of points.
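A hedged Python sketch of steps 1-3 using OpenCV's ORB keypoints (chosen only because it ships with every OpenCV build; SIFT would work similarly with an L2 matcher). The ratio-test constant and the 'enough good matches' threshold are conventional guesses, and the filenames are placeholders:

    import cv2

    def good_matches(path_a, path_b):
        """Detect keypoints in both images and keep the distinctive matches (steps 1-2)."""
        a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
        b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
        orb = cv2.ORB_create(nfeatures=1000)
        ka, da = orb.detectAndCompute(a, None)
        kb, db = orb.detectAndCompute(b, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        good = []
        for pair in matcher.knnMatch(da, db, k=2):
            # Lowe's ratio test: keep a match only if it clearly beats the runner-up
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
                good.append(pair[0])
        return good

    matches = good_matches("original.jpg", "suspect.jpg")
    # step 3: many well-matched keypoints suggests one image is derived from the other
    print("likely copy" if len(matches) > 50 else "probably unrelated", len(matches))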
jetxee
I think that's the right approach, too. Just a detail: SIFT, SURF, GLOH aren't keypoint detectors. They're keypoint descriptors. Common keypoint detectors are (scale invariant) DoG, Harris or Eigenvalue detectors.
nikie
Thanks nikie. I edited the answer.
jetxee