Task: I have a camera mounted at the end of our assembly line, which captures images of produced items. Let's say, for example, that we produce tickets (with some text and pictures on them). Every produced ticket is photographed and saved to disk as an image. Now I would like to check these saved images for anomalies, i.e. compare them to a template image that is known to be OK. So if there is a problem with a ticket on our assembly line (a missing picture, a stain, ...), my application should find it, because its image differs too much from my template.

Question: What is the easiest way to compare pictures and find differences between them? Do I need to write my own methods, or can I use existing ones? It would be great if I could just set a tolerance value (i.e. images may differ by up to 1%), put both images into a function and get a return value of true or false :)

Tools: C# or VB.NET, Emgu.CV (.NET wrapper for OpenCV) or something similar

+1  A: 

I don't know the details, but I do know that in industrial situations where high throughput is essential, this is sometimes done using neural nets. They turn millions of bits (camera pixels) into one (good or bad). Maybe this will help you in your search.

Rene
+2  A: 

I don't know much about OpenCV, but a bit about image processing.

The way to go depends on the frequency at which new pictures are taken. A simple approach would be to calculate a difference picture between your 'good' template and the image of your actual product.

If the images are 100% identical, the resulting image should be empty. If there are residual pixels, you can count them and take the count as a measure of deviation from the norm.

However, you will have to match the orientation (and probably the scale) of one of the images to align their borders, otherwise this approach will not work.

If you have timing constraints, you might want to reduce the information in your images prior to processing them (for example, using edge detection and/or converting them to grayscale, or even to a monochromatic bitmap if your product's features are distinct enough).
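The difference-picture idea above can be sketched in plain C# like this (a deliberately naive GetPixel version for clarity; all names and the thresholds are illustrative, and production code would use Bitmap.LockBits or OpenCV's absolute-difference routines instead):

```csharp
using System;
using System.Drawing;

class DifferenceCheck
{
    // Compares an aligned, equally-sized pair of images by counting
    // "residual" pixels, i.e. pixels where any channel deviates by more
    // than channelThreshold (e.g. 10 out of 255). Returns true if the
    // fraction of residual pixels stays within maxDifferingFraction.
    public static bool MatchesTemplate(Bitmap template, Bitmap actual,
                                       int channelThreshold, double maxDifferingFraction)
    {
        if (template.Width != actual.Width || template.Height != actual.Height)
            return false; // images must already be aligned and equally sized

        int differing = 0;
        for (int x = 0; x < template.Width; x++)
        {
            for (int y = 0; y < template.Height; y++)
            {
                Color t = template.GetPixel(x, y);
                Color a = actual.GetPixel(x, y);
                if (Math.Abs(t.R - a.R) > channelThreshold ||
                    Math.Abs(t.G - a.G) > channelThreshold ||
                    Math.Abs(t.B - a.B) > channelThreshold)
                    differing++;
            }
        }
        double fraction = (double)differing / (template.Width * template.Height);
        return fraction <= maxDifferingFraction;
    }
}
```

With, say, `MatchesTemplate(template, photo, 10, 0.01)` you get the tolerance-based true/false the question asks for, provided the images are already registered.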

sum1stolemyname
+1  A: 

There are surely applications and libraries out there that already do what you are attempting, but I don't know of any offhand. Obviously, one could hash the two images and compare the hashes, but that expects the images to be identical and doesn't leave any leeway for lighting differences or the like.

Assuming that you have controlled for the objects in the images being oriented and positioned identically, one thing you could do is march through the pixels of each image and get the HSV values of each, like so:

Color color1 = Image1.GetPixel(i, j);
Color color2 = Image2.GetPixel(i, j);
float hue1    = color1.GetHue();
float sat1    = color1.GetSaturation();
float bright1 = color1.GetBrightness();
float hue2    = color2.GetHue();
float sat2    = color2.GetSaturation();
float bright2 = color2.GetBrightness();

and do some comparisons with those values. That would allow you to compare them, I think, with more reliability than using the RGB values, particularly since you want to include some tolerances in your comparison.


Edit:

Just for fun, I wrote a little sample app that used my idea above. Essentially it totaled up the number of pixels whose H, S and V values differed by some amount (I picked 0.1 as my value) and then dropped out of the comparison loops if the H, S, or V counter exceeded 38400 pixels, i.e. 2% of a 1600x1200 image (0.02 * 1600 * 1200). In the worst case, it took about 2 seconds to compare two identical images. When I compared images where one had been altered enough to exceed that 2% value, it generally took a fraction of a second.

Obviously, this would likely be too slow if there were lots of images being produced per second, but I thought it was interesting anyway.
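The comparison loop described above might look something like this (names, the 0.1 tolerance and the 2% budget are illustrative, not the actual sample app):

```csharp
using System;
using System.Drawing;

class HsvCompare
{
    // Counts pixels whose H, S or V differ by more than channelTolerance
    // and bails out of the loops early once any counter exceeds the pixel
    // budget (e.g. maxFraction = 0.02 for 2% of the image).
    public static bool WithinTolerance(Bitmap img1, Bitmap img2,
                                       float channelTolerance, double maxFraction)
    {
        int budget = (int)(maxFraction * img1.Width * img1.Height);
        int hueCount = 0, satCount = 0, brightCount = 0;

        for (int i = 0; i < img1.Width; i++)
        {
            for (int j = 0; j < img1.Height; j++)
            {
                Color c1 = img1.GetPixel(i, j);
                Color c2 = img2.GetPixel(i, j);
                // Note: GetHue() returns 0..360 while saturation and
                // brightness are 0..1, so a single tolerance value treats
                // hue far more strictly than the other two channels.
                if (Math.Abs(c1.GetHue() - c2.GetHue()) > channelTolerance) hueCount++;
                if (Math.Abs(c1.GetSaturation() - c2.GetSaturation()) > channelTolerance) satCount++;
                if (Math.Abs(c1.GetBrightness() - c2.GetBrightness()) > channelTolerance) brightCount++;

                if (hueCount > budget || satCount > budget || brightCount > budget)
                    return false; // early exit: already over the limit
            }
        }
        return true;
    }
}
```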

itsmatt
Hashing is a nice idea, but pixel-by-pixel analysis of the full image will not yield optimal performance because of the sheer number of pixels per image (think of 1600*1200 bytes, roughly 1.8 MB, in greyscale).
sum1stolemyname
Seems like it depends on the speed of the algorithm used, as any algorithm expected to recognize differences of 1-2% would have to iterate over the whole image anyway. Perhaps do a quick subtraction of the images, then sum the remainders. That would be about as fast as you could make it while still examining the whole image.
tloflin
+1  A: 

I'm not an expert in the field, but it sounds like you need something like this:

http://en.wikipedia.org/wiki/Template_matching

And it appears OpenCV has support for template matching:
http://nashruddin.com/template-matching-in-opencv-with-example.html
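To illustrate what template matching does under the hood, here is a minimal brute-force sketch in plain C#: slide the template over the image and score every position with a sum of squared differences (the same idea as OpenCV's cvMatchTemplate with CV_TM_SQDIFF, which of course does this far faster). All names here are illustrative:

```csharp
using System;

class TemplateMatch
{
    // Grayscale images are passed as 2-D byte arrays [row, col].
    // Returns the best (lowest) squared-difference score and the
    // top-left position where the template fits best; 0 = exact match.
    public static double BestScore(byte[,] image, byte[,] template,
                                   out int bestRow, out int bestCol)
    {
        int ih = image.GetLength(0), iw = image.GetLength(1);
        int th = template.GetLength(0), tw = template.GetLength(1);
        double best = double.MaxValue;
        bestRow = bestCol = 0;

        for (int r = 0; r <= ih - th; r++)
        {
            for (int c = 0; c <= iw - tw; c++)
            {
                double score = 0;
                for (int y = 0; y < th; y++)
                    for (int x = 0; x < tw; x++)
                    {
                        double d = image[r + y, c + x] - template[y, x];
                        score += d * d; // sum of squared differences
                    }
                if (score < best) { best = score; bestRow = r; bestCol = c; }
            }
        }
        return best;
    }
}
```

A low best score at the expected location means the ticket matches the template; a high score (or a match in the wrong place) flags an anomaly.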

Ivan
+1  A: 

I'd recommend looking at the AForge Imaging library, as it has a lot of really useful functions for this type of work.

There are several methods you could use:

  1. Simple subtraction (template image - current) and see how many pixels are different. You'd probably want to threshold the results, i.e. only count pixels that differ by 10 or more (for instance).
  2. If the tickets can move about in the field of view, then item 1 isn't going to work unless you can locate the ticket first. If, for instance, the ticket is white on a black background, you could threshold the image, and that would give you a good idea of where the ticket is.
  3. Another technique I've used before is "Model Finding" or "Pattern Matching", but I only know of a commercial library, the Matrox Imaging Library (MIL), that contains these functions, as they aren't trivial.

Also, you need to make sure you know which parts of the ticket are more important. For instance, I'd guess that a missing logo or watermark is a big problem, but some areas may contain variable text, such as a serial number, so you'd expect them to differ. Basically, you might need to treat some areas of the image differently from others.
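The "treat some areas differently" idea can be sketched by handing the comparison a list of rectangles to skip (the ignore-region approach and all names here are illustrative; again a naive GetPixel version for readability):

```csharp
using System;
using System.Collections.Generic;
using System.Drawing;

class RegionAwareCompare
{
    // Counts differing pixels, skipping any pixel that falls inside an
    // ignore rectangle (serial numbers, date stamps, other variable text).
    // threshold is the per-channel deviation allowed, e.g. 10 out of 255.
    public static int CountDifferences(Bitmap template, Bitmap actual,
                                       IList<Rectangle> ignoreRegions, int threshold)
    {
        int differing = 0;
        for (int x = 0; x < template.Width; x++)
        {
            for (int y = 0; y < template.Height; y++)
            {
                bool ignored = false;
                foreach (Rectangle r in ignoreRegions)
                    if (r.Contains(x, y)) { ignored = true; break; }
                if (ignored) continue; // variable area: differences expected

                Color t = template.GetPixel(x, y);
                Color a = actual.GetPixel(x, y);
                if (Math.Abs(t.R - a.R) > threshold ||
                    Math.Abs(t.G - a.G) > threshold ||
                    Math.Abs(t.B - a.B) > threshold)
                    differing++;
            }
        }
        return differing;
    }
}
```

A natural extension is a weight map instead of a binary ignore list, so a deviation in the logo area counts for more than one in a background region.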

Matt Warren