Hi, I am doing a project in C# called User-Initiated Real-Time Object Tracking. What I want is to take input from a web camera into a PictureBox (done using DShowNET), then draw a rubber-band rectangle on the video with the mouse (say around a person's face/eyes/nose/whole body; I think I am going to scope it down to the face), and then track the area enclosed by that rectangle.
I am currently going through the DShowNET BitmapMixer sample in the samples folder to learn how to draw on the video (no success yet; however, I have done the rubber-band rectangle itself, so it is now a matter of making it work on top of the video). For context, a simplified sketch of the rubber-band part is below.
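Roughly, the rubber-band selection works along these lines on a plain PictureBox (this is just the mouse/paint handling, not yet overlaid on the video; `picVideo` is the PictureBox that will eventually show the stream):

```csharp
// These fields and handlers live inside the Form class; the handlers are
// wired to picVideo's MouseDown/MouseMove/MouseUp/Paint events.
private Point _dragStart;
private Rectangle _selection = Rectangle.Empty;
private bool _dragging;

private void picVideo_MouseDown(object sender, MouseEventArgs e)
{
    _dragStart = e.Location;
    _dragging = true;
}

private void picVideo_MouseMove(object sender, MouseEventArgs e)
{
    if (!_dragging) return;
    // Normalise so the rectangle is valid whichever way the mouse is dragged.
    _selection = new Rectangle(
        Math.Min(_dragStart.X, e.X),
        Math.Min(_dragStart.Y, e.Y),
        Math.Abs(e.X - _dragStart.X),
        Math.Abs(e.Y - _dragStart.Y));
    picVideo.Invalidate();
}

private void picVideo_MouseUp(object sender, MouseEventArgs e)
{
    _dragging = false;
}

private void picVideo_Paint(object sender, PaintEventArgs e)
{
    // The rectangle stays visible until it is cleared (e.g. by a button).
    if (_selection.Width > 0 && _selection.Height > 0)
        e.Graphics.DrawRectangle(Pens.Red, _selection);
}
```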
My main issue is tracking whatever is enclosed within the rubber-band rectangle (after the rectangle is drawn it stays visible unless erased with a button command). Someone suggested I look into face and eye detection, but I don't really do detection in my project. Well, I might be wrong.
The way I think of it is this: if I consider the area outside the rectangle to be the background, extract the colour histogram of the area within the rectangle (the foreground), and then check where that histogram shows up in the subsequent frames, I should be able to track successfully (I don't actually know how to achieve this in code). Is this correct?
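Very roughly, what I have in mind is something like the sketch below. I am assuming each frame can be grabbed as a System.Drawing.Bitmap (e.g. via a sample grabber); the class and method names here are made up just to illustrate the idea of comparing colour histograms around the previous position:

```csharp
using System;
using System.Drawing;

static class HistogramTracker
{
    const int Bins = 8; // coarse 8x8x8 RGB histogram

    // Build a normalised colour histogram of the given region of a frame.
    public static double[] ComputeHistogram(Bitmap frame, Rectangle roi)
    {
        var hist = new double[Bins * Bins * Bins];
        for (int y = roi.Top; y < roi.Bottom; y++)
            for (int x = roi.Left; x < roi.Right; x++)
            {
                Color c = frame.GetPixel(x, y); // slow; LockBits would be better
                int idx = (c.R * Bins / 256) * Bins * Bins
                        + (c.G * Bins / 256) * Bins
                        + (c.B * Bins / 256);
                hist[idx]++;
            }
        double total = roi.Width * roi.Height;
        for (int i = 0; i < hist.Length; i++) hist[i] /= total;
        return hist;
    }

    // Sum of absolute differences between two normalised histograms (0 = identical).
    static double Distance(double[] a, double[] b)
    {
        double d = 0;
        for (int i = 0; i < a.Length; i++) d += Math.Abs(a[i] - b[i]);
        return d;
    }

    // Search a small neighbourhood around the previous position for the window
    // whose histogram best matches the target (foreground) histogram.
    public static Rectangle FindBestMatch(Bitmap frame, double[] target,
                                          Rectangle previous,
                                          int searchRadius = 20, int step = 4)
    {
        Rectangle best = previous;
        double bestDist = double.MaxValue;
        for (int dy = -searchRadius; dy <= searchRadius; dy += step)
            for (int dx = -searchRadius; dx <= searchRadius; dx += step)
            {
                var candidate = new Rectangle(previous.X + dx, previous.Y + dy,
                                              previous.Width, previous.Height);
                if (candidate.Left < 0 || candidate.Top < 0 ||
                    candidate.Right > frame.Width || candidate.Bottom > frame.Height)
                    continue;
                double dist = Distance(target, ComputeHistogram(frame, candidate));
                if (dist < bestDist) { bestDist = dist; best = candidate; }
            }
        return best;
    }
}
```

The idea would be: when the user finishes drawing, compute the target histogram once from the drawn rectangle; then, for every new frame, call FindBestMatch and move the rectangle to the returned position. Is this a sensible direction, or is there a better-established way?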
By the way, I consider the tracker to be the rectangle itself, which should stay visible as long as the video is streaming and move along with the person in the video. To start off, I am experimenting with all of this on a saved video file.
Any ideas on how to do the tracking? Does it matter which part of the person I track, e.g. the face versus the whole body?
Thank you for your time.