Is it possible to detect and track a road for an autonomous vehicle using the Hough Transform? If so, are there any algorithms that already implement this? I'd love a link to one, as I haven't really been able to find any that aren't way over my head.

In particular, I'm looking for algorithms that use the vanishing point of two straight lines to determine the vehicle's heading. However, if there are other, simpler algorithms that do the job, I'm willing to take a look at them as well.

+1  A: 

Yes, you could do this, but it might not work at the quality you want if it's all you do. The task isn't simple; there's no "simple" algorithm that just does "road" or "heading" detection from images. There are, however, existing implementations in a number of languages; here's one in C++.

One thing you should consider is that roads aren't always straight, so the vanishing point could be around a turn in the road...

The Stanford DARPA Grand Challenge and DARPA Urban Challenge vehicles used color-based detection to find drivable surface (e.g. road), and then some sort of edge detection and line-forming algorithm (it's unclear whether it was Hough Transform based) to build a forward-looking estimate of road direction. I believe they used some kind of system to detect the vanishing point, and they definitely accounted for turns in the road.
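Just to illustrate the color-segmentation idea (this is not Stanford's actual method, and the thresholds below are completely made up): a crude classifier might call a pixel "road" when it is roughly gray and of medium brightness.

```python
# Toy color-based road segmentation: a pixel is "road" if its RGB
# channels are close together (gray-ish) and its brightness is in a
# middle band.  All thresholds are invented for illustration; a real
# system would learn them from labeled imagery.

def is_road(rgb, spread_max=30, lo=60, hi=180):
    r, g, b = rgb
    spread = max(r, g, b) - min(r, g, b)   # how far from gray?
    mean = (r + g + b) // 3                # rough brightness
    return spread <= spread_max and lo <= mean <= hi

# A 2x2 "frame": green vegetation on top, gray asphalt below.
frame = [
    [(40, 160, 60), (45, 150, 55)],       # not road
    [(110, 112, 108), (120, 119, 121)],   # road
]
mask = [[is_road(px) for px in row] for row in frame]
# mask == [[False, False], [True, True]]
```

In practice a single global threshold fails under shadows and lighting changes, which is presumably why real systems fall back to texture cues.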

What you really need is to detect the road edges, turn those into lines (not necessarily straight), and then find their convergence points. This assumes a number of other hard tasks can be solved: (1) your imagery is of appropriate quality; (2) you can detect the road, or at a minimum its edges; (3) you can process the imagery fast enough to keep up with the vehicle's movement.
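The convergence (vanishing) point itself is cheap once you have two edge lines: it's just their intersection, which falls out of a couple of lines of algebra. A sketch (the pixel coordinates below are invented, two lane edges converging toward the top of a 640x480 frame):

```python
# Vanishing point of two non-parallel image lines = their intersection.
# Each line is given by two points; (a, b, c) are the coefficients of
# the implicit line equation a*x + b*y + c = 0.

def line_through(p, q):
    (x1, y1), (x2, y2) = p, q
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def intersection(l1, l2):
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    d = a1 * b2 - a2 * b1
    if abs(d) < 1e-9:
        return None  # parallel lines: no finite vanishing point
    return ((b1 * c2 - b2 * c1) / d, (a2 * c1 - a1 * c2) / d)

# Hypothetical lane edges (image coordinates, y grows downward):
left = line_through((100, 480), (300, 240))
right = line_through((540, 480), (340, 240))
vp = intersection(left, right)   # -> (320.0, 216.0)
```

The horizontal offset of the vanishing point from the image center is what you'd feed into a heading estimate.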

If all you're doing is analyzing some existing video, I'd start with a very basic approach:

  1. Detect the road surface in the video. This is a segmentation task: find all pixels in the image which are road. It helps to segment into three classes: road, not-road, and sky.
  2. Find the horizon (this is roughly where your road/not-road and sky classes meet).
  3. Use a simple edge detector (say, a Sobel edge detector) to find the edges between road and not-road.
  4. Apply a Hough Transform to the Sobel edges to draw "lines" for the edges of the road.
  5. Find where the road-lines meet at the horizon.
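Steps 3 and 4 can be sketched from scratch to show the idea (in practice you'd use an image library rather than hand-rolling this; the image below is a tiny synthetic one, and the threshold is arbitrary):

```python
import math

# Step 3: Sobel gradient magnitude on a grayscale image (lists of ints),
# keeping pixel coordinates whose |gx| + |gy| clears a threshold.
def sobel_edges(img, thresh=200):
    h, w = len(img), len(img[0])
    edges = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            if abs(gx) + abs(gy) >= thresh:
                edges.append((x, y))
    return edges

# Step 4: Hough voting.  Each edge pixel votes for every line
# x*cos(theta) + y*sin(theta) = rho that passes through it; the bin
# with the most votes is the dominant straight edge.
def hough_strongest_line(edges, n_theta=180):
    votes = {}
    for x, y in edges:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            votes[(t, rho)] = votes.get((t, rho), 0) + 1
    return max(votes, key=votes.get)

# Points on the diagonal y = x vote into theta = 135 degrees, rho = 0:
t, rho = hough_strongest_line([(10, 10), (20, 20), (30, 30)])
```

This is O(edges x angles), which is why step 1's segmentation matters: restricting the Hough vote to the road region keeps it fast enough for video.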
Mark E
I'm trying things one at a time. The goal isn't really to make an actual vehicle, just a program that can take a video, detect roads, and report the heading. I think that goal is relatively easier. Is the DARPA implementation based on colour segmentation, i.e. do they assume that the road is going to be a very different colour than the environment?
saad
@saad: I haven't seen the code or read the papers, but my best guess is that they start with color and, if that's insufficient, do some kind of texture detection; wavelets might be appropriate for the texture.
Mark E