As in my previous questions, I am trying to build a simple eye tracker. I decided to start with a Linux version (I run Ubuntu).

To complete this task, one must organize screencasting and webcam capture so that frames from the two streams match each other exactly and both streams contain the same total number of frames.

The screencasting fps depends entirely on the camera's fps, so each time we get an image from the webcam we could, in principle, grab a screen frame and be happy. However, all the tools for fast screencasting, such as ffmpeg, return an .avi file as the result and require the fps to be known before they start.

On the other hand, tools like Java's Robot or ImageMagick seem to need around 20 ms to return a .jpg screenshot, which is rather slow for this task. But they can be invoked right after each webcam frame is grabbed, which provides the needed synchronization.

So the sub-questions are:

  1. Does a USB camera's frame rate vary during a single session?
  2. Are there any tools that provide fast, frame-by-frame screencasting?
  3. Is there any way to make ffmpeg push a new frame to the .avi file only when the program requests it?

For this task I can use either C++ or Java.

I am actually an interface designer, not a driver programmer, and this task seems pretty low-level. I would be grateful for any suggestions and tips!

+1  A: 

Use the cvGetCaptureProperty(CvCapture* capture, int property_id) function from OpenCV's HighGUI with property_id = CV_CAP_PROP_FPS to determine the frames per second captured by your webcam.

Example use:

CvCapture *capture = 0;
double fps = 0.0;

// Open the default camera and query its reported frame rate.
capture = cvCaptureFromCAM( 0 );
fps = cvGetCaptureProperty( capture, CV_CAP_PROP_FPS );
metaliving
Good idea, but how would you recommend organizing simultaneous, exactly-matching capture in this case? Should I use two threads? Or just initialize the ffmpeg capturer right after the first frame from the webcam is received, once we know the fps? Any suggestions are welcome, thanks!
lyuba
Not sure about ffmpeg, but you can use the `cvCreateVideoWriter()` function to create an .avi file for the video feed, extracting each frame in a loop with the `cvGrabFrame()` function and writing it to the video file with `cvWriteFrame()`. In the same loop, you can extract an image from the screen and write it to another video file. I'm confident that if you set the FPS of your screencaster equal to the camera's, you will achieve synchronization.
metaliving
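The loop described in the comment above could be sketched as follows. This is untested; the fps and frame sizes are assumed values, and `grab_screen_frame()` is a hypothetical helper (OpenCV does not provide screen capture, so the stub below would have to be replaced with a platform call such as XGetImage on X11):

```cpp
#include <opencv/cv.h>
#include <opencv/highgui.h>

// Hypothetical stub: returns a blank image in place of a real screen grab.
IplImage *grab_screen_frame(CvSize size) {
    IplImage *img = cvCreateImage(size, IPL_DEPTH_8U, 3);
    cvZero(img);  // a real implementation would fill this with screen pixels
    return img;
}

int main() {
    CvCapture *capture = cvCaptureFromCAM(0);   // default webcam
    if (!capture) return 1;

    double fps = 15.0;                          // assumed camera rate
    CvSize cam_size = cvSize(640, 480);         // assumed webcam frame size
    CvSize scr_size = cvSize(1024, 768);        // assumed screen size

    CvVideoWriter *cam_writer = cvCreateVideoWriter(
        "webcam.avi", CV_FOURCC('M', 'J', 'P', 'G'), fps, cam_size, 1);
    CvVideoWriter *scr_writer = cvCreateVideoWriter(
        "screen.avi", CV_FOURCC('M', 'J', 'P', 'G'), fps, scr_size, 1);

    for (int i = 0; i < 300; ++i) {
        IplImage *cam = cvQueryFrame(capture);  // blocks at the camera's rate
        if (!cam) break;
        cvWriteFrame(cam_writer, cam);          // one webcam frame...
        IplImage *scr = grab_screen_frame(scr_size);
        cvWriteFrame(scr_writer, scr);          // ...paired with one screen frame
        cvReleaseImage(&scr);
    }

    cvReleaseVideoWriter(&cam_writer);
    cvReleaseVideoWriter(&scr_writer);
    cvReleaseCapture(&capture);
    return 0;
}
```

Because exactly one screen frame is written per webcam frame, both files always contain the same number of frames, even if the camera's actual rate drifts from the nominal fps.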
Metaliving, I've tried your method, but unfortunately the CV_CAP_PROP_FPS property works only on Windows, according to the OpenCV documentation. The error is: HIGHGUI ERROR: V4L2: getting property #5 is not supported. I work under Ubuntu and also want my project to be cross-platform, so I have to look for another way to do it. By the way, your original code had an error: it should be cvGetCaptureProperty (not cvCaptureProperty). Anyway, thank you for your comments!
lyuba