Hi, I'm using the OpenCV wrapper Emgu CV, and I'm trying to implement a motion tracker using optical flow, but I can't figure out how to combine the horizontal and vertical information returned by the optical flow algorithm:

// flowx/flowy receive the per-pixel horizontal and vertical displacement
flowx = new Image<Gray, float>(size);
flowy = new Image<Gray, float>(size);

OpticalFlow.LK(currImg, prevImg, new Size(15, 15), flowx, flowy);

My problem is that I don't know how to combine the horizontal and vertical movement information in order to build a tracker of moving objects. Should I build a new image from them?

By the way, is there an easy way to display the flow information on the current frame?

Thanks in advance.

A: 

There are several well-known optical flow algorithms. One that may work well for you is Lucas-Kanade; you can find a MATLAB source here.

Gilad
Hi, thanks for your reply, but my problem is not the implementation of the optical flow algorithms, since those are already in the Emgu CV framework. My problem lies in the fact that I don't know what to do with the results of the optical flow functions.
Staticsoul
+2  A: 

Here is the function I have defined in my YouTube head movement tracker video tutorial. You can find the full source code attached to the video.

void ComputeDenseOpticalFlow()
    {
        // Compute dense optical flow using the Horn-Schunck algorithm
        velx = new Image<Gray, float>(faceGrayImage.Size);
        vely = new Image<Gray, float>(faceNextGrayImage.Size);

        OpticalFlow.HS(faceGrayImage, faceNextGrayImage, true, velx, vely, 0.1d, new MCvTermCriteria(100));            

        #region Dense Optical Flow Drawing
        Size winSize = new Size(10, 10);
        vectorFieldX = (int)Math.Round((double)faceGrayImage.Width / winSize.Width);
        vectorFieldY = (int)Math.Round((double)faceGrayImage.Height / winSize.Height);
        sumVectorFieldX = 0f;
        sumVectorFieldY = 0f;
        vectorField = new PointF[vectorFieldX][];
        for (int i = 0; i < vectorFieldX; i++)
        {
            vectorField[i] = new PointF[vectorFieldY];
            for (int j = 0; j < vectorFieldY; j++)
            {
                // The Image<,> indexer is [row, column], so sample at (j * winSize.Height, i * winSize.Width)
                Gray velx_gray = velx[j * winSize.Height, i * winSize.Width];
                float velx_float = (float)velx_gray.Intensity;
                Gray vely_gray = vely[j * winSize.Height, i * winSize.Width];
                float vely_float = (float)vely_gray.Intensity;
                sumVectorFieldX += velx_float;
                sumVectorFieldY += vely_float;
                vectorField[i][j] = new PointF(velx_float, vely_float);

                Cross2DF cr = new Cross2DF(
                    new PointF((i*winSize.Width) +trackingArea.X,
                               (j*winSize.Height)+trackingArea.Y),
                               1, 1);
                opticalFlowFrame.Draw(cr, new Bgr(Color.Red), 1);

                LineSegment2D ci = new LineSegment2D(
                    new Point((i*winSize.Width)+trackingArea.X,
                              (j * winSize.Height)+trackingArea.Y), 
                    new Point((int)((i * winSize.Width)  + trackingArea.X + velx_float),
                              (int)((j * winSize.Height) + trackingArea.Y + vely_float)));
                opticalFlowFrame.Draw(ci, new Bgr(Color.Yellow), 1);

            }
        }
        #endregion
    }
Luca Del Tongo
+2  A: 

Optical flow visualization. The common approach is to use a color-coded 2D flow field. That means we display the flow as an image, where the pixel intensity corresponds to the magnitude of the flow at that pixel, while the hue reflects the direction of the flow. Look at Fig. 2 in [Baker et al., 2009]. Another way is to draw the flow vectors on a grid over the first image (say, every 10 pixels), which is what the code in the previous answer does.
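
As a rough illustration of the color-coded visualization described above (not part of the original answer), here is a sketch that builds an HSV image from the flowx/flowy results in the question. The function name FlowToColor and the maxMagnitude normalization parameter are invented for the example; the per-pixel loop is slow but keeps the mapping explicit.

using System;
using Emgu.CV;
using Emgu.CV.Structure;

// Hue encodes the flow direction, brightness encodes the flow magnitude.
static Image<Hsv, byte> FlowToColor(Image<Gray, float> flowx,
                                    Image<Gray, float> flowy,
                                    double maxMagnitude)
{
    var hsv = new Image<Hsv, byte>(flowx.Size);
    for (int y = 0; y < flowx.Height; y++)
    {
        for (int x = 0; x < flowx.Width; x++)
        {
            double fx = flowx[y, x].Intensity;                  // horizontal component
            double fy = flowy[y, x].Intensity;                  // vertical component
            double magnitude = Math.Sqrt(fx * fx + fy * fy);
            double angle = Math.Atan2(fy, fx);                  // -pi .. pi

            hsv[y, x] = new Hsv(
                (angle + Math.PI) / (2 * Math.PI) * 179,        // OpenCV byte hue range is 0..179
                255,                                            // full saturation
                Math.Min(magnitude / maxMagnitude, 1.0) * 255); // brightness = relative magnitude
        }
    }
    return hsv;
}

// Usage: convert to BGR before displaying, e.g.
// imageBox.Image = FlowToColor(flowx, flowy, 10.0).Convert<Bgr, byte>();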

Combining x and y. It is not clear what you mean here. The pixel (x, y) in the first image moves to (x + flowx(x,y), y + flowy(x,y)) in the second one. So, to track an object, you take its position in the first image and add the flow values at that position to get its position in the second one.
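
To make that last step concrete, here is a minimal sketch (not part of the original answer) that advances a single tracked point by the flow at its location. It assumes the flowx/flowy images from the question; the TrackPoint name is invented for the example.

using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

// Move a point from the previous frame to its estimated position in the current frame.
static PointF TrackPoint(PointF p, Image<Gray, float> flowx, Image<Gray, float> flowy)
{
    int x = (int)p.X;
    int y = (int)p.Y;                           // the Image<,> indexer is [row, column]
    float dx = (float)flowx[y, x].Intensity;    // horizontal displacement at p
    float dy = (float)flowy[y, x].Intensity;    // vertical displacement at p
    return new PointF(p.X + dx, p.Y + dy);      // estimated position in the next frame
}

Calling this every frame on the object's centroid (or on each corner of its bounding box) gives a simple flow-based tracker.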

overrider