views:

667

answers:

4

I am trying to make a DIY touchscreen and would like to enter it into the local science fair, but I want to focus on the programming aspect of multi-touch. My problem is that I have never worked with analyzing images (from a USB web-cam).
I would like to do this project in C# if possible (or C++ if worst comes to worst).
I need to analyze a black picture (from a USB web-cam) and then detect when white blotches come into view. How would I go about doing this? Is there a known method for detecting the change between frames versus analyzing every pixel? If so, a pointer to where this is described would be nice :)
Also, how would I get the input from the web-cam via USB? Where can I get the libraries / DLLs?
I have seen some programs that work with this, but they convert the images, which takes up time and processor speed... Is there a way to use the raw input image/data?
HELP?

+1  A: 

As a place to start with the web cam, I would look here:

http://www.hanselman.com/blog/CapturingVideoAWebCameraUsingWIANotPossible.aspx

You'll have a few links to follow, but I'm suggesting you start here because this is the article where Scott talks about the different challenges and common questions, and provides links to more info.

David Stratton
+2  A: 

What you want is "blob detection". Here is a good thread about a blob library.

There is a heap of multi-touch / computer-vision libraries and software out there already. The best resource for this kind of thing is wiki.nuigroup.com, especially the Frameworks and Libraries section. Currently there is not too much C# info there, but if you do find something out, make sure you put it on that wiki for everyone.

There is also the NuiGroup forum's C# (.NET/Silverlight/WPF) section, which should help you out a lot. There is some great help to be had in that community.
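To make "blob detection" concrete, here is a minimal sketch of the core idea (written in Python for brevity, since the technique itself is language-independent and translates directly to C#): threshold the grayscale frame, then flood-fill to group connected bright pixels into blobs. This is a from-scratch illustration, not code from any of the libraries mentioned above.

```python
from collections import deque

def find_blobs(image, threshold=128):
    """Label connected bright regions in a 2D grayscale image.

    `image` is a list of rows of brightness values. Returns a list of
    blobs, each a list of (row, col) pixel coordinates. Uses
    4-connectivity and a BFS flood fill.
    """
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not seen[r][c]:
                # New blob: flood-fill every connected bright pixel.
                blob, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                blobs.append(blob)
    return blobs
```

A real library will do this much faster (and on raw camera buffers), but the logic is the same.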

TandemAdam
+3  A: 

Once you have your input (assuming a perfectly dark background and perfectly bright contrast), you will have a matrix of values similar to this (grayscale):

0 0 0 0 0   0   0   0 0 0 0 0
0 0 0 0 0   0 255   0 0 0 0 0
0 0 0 0 0 255 255 255 0 0 0 0
0 0 0 0 0   0 255   0 0 0 0 0
0 0 0 0 0   0   0   0 0 0 0 0

Your job would be to segment out the region with the right color values (255, the maximum for 8-bit grayscale) and to determine its position. To get movement (assuming you are only tracking one object) you would have to compare the blob's next position with its previous one.
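The "determine position" and "compare positions" steps can be sketched as follows (Python used for brevity; the arithmetic carries over to C# unchanged). The usual position for a segmented blob is its centroid, and per-frame movement is just the difference between centroids. This is an illustrative sketch, assuming the blob is given as a list of pixel coordinates.

```python
def centroid(blob):
    """Average pixel position of a blob given as a list of (row, col)."""
    n = len(blob)
    return (sum(r for r, _ in blob) / n, sum(c for _, c in blob) / n)

def movement(prev_blob, curr_blob):
    """Displacement (d_row, d_col) of a single tracked blob between frames."""
    (pr, pc), (cr, cc) = centroid(prev_blob), centroid(curr_blob)
    return (cr - pr, cc - pc)
```

For the plus-shaped blob in the matrix above, the centroid lands on the center pixel, row 2, column 6.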

In the real world, especially with a webcam, you will never get a solid dark background with good contrast. The webcam is low resolution, the lighting is never perfect, and noise is added by the lens and by CCD defects / color approximation.

Additionally, you may run into problems with tracking the blob's movement (a blob may be misdetected elsewhere on the surface). When you attempt to track two blobs you will run into more issues.

Some of these issues include:

  1. Blob collision (how do you know which blob goes where)
  2. Blob cross-overs (did the blobs switch sides, or did each blob reverse its previous direction of travel?)
  3. Blob combining (where two blobs become one)
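One common (if naive) way to attack the assignment problem is greedy nearest-neighbour matching between the blob centroids of consecutive frames. The sketch below (Python for brevity; hypothetical function names) deliberately exhibits the failure modes listed above: a fast cross-over or a merge will confuse it, which is why real trackers add velocity prediction or more global assignment.

```python
def match_blobs(prev_centroids, curr_centroids, max_dist=20.0):
    """Greedily pair current blobs with previous ones by distance.

    Returns {prev_index: curr_index}. Previous blobs left unmatched may
    have disappeared or merged; unmatched current blobs are new touches.
    """
    # All candidate pairs, cheapest (closest) first.
    pairs = sorted(
        ((abs(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2) ** 0.5, i, j)
        for i, p in enumerate(prev_centroids)
        for j, c in enumerate(curr_centroids)
    )
    assignment, used_prev, used_curr = {}, set(), set()
    for dist, i, j in pairs:
        if dist <= max_dist and i not in used_prev and j not in used_curr:
            assignment[i] = j
            used_prev.add(i)
            used_curr.add(j)
    return assignment
```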

To grab the camera using C# you may want to check this out. WIA is not the quickest method for taking pictures, but it is a lot easier to deal with. My suggestion, if you are still interested in doing this, is to draw two images in Photoshop and track the markers. It's not as exciting, but it will help you tackle the problem more easily and relax the problem description.

monksy
A: 

To get images from the webcam, the AForge framework is dead easy to use! Check out the motion-detection sample app for code-harvesting purposes :)
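Motion detection also answers the original "detecting the change vs. analyzing every pixel" question: instead of scanning one frame from scratch, you difference consecutive frames and only look at pixels whose brightness changed. A minimal sketch of that idea (Python for brevity; AForge's actual implementation is more sophisticated):

```python
def changed_pixels(prev_frame, curr_frame, threshold=30):
    """Frame differencing: coordinates whose brightness changed noticeably.

    Both frames are 2D lists of grayscale values of the same size.
    Pixels with |difference| <= threshold are treated as camera noise.
    """
    return [
        (r, c)
        for r, row in enumerate(curr_frame)
        for c, value in enumerate(row)
        if abs(value - prev_frame[r][c]) > threshold
    ]
```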

Kurru