Revisiting Lane Detection using OpenCV

Date Fri 16 August 2013
 

I originally tried to get lane detection working for my autonomous car project a little over a year ago. I ended up getting some rough code working, but it was really only useful on ideal roads with perfectly painted lines. At the time I didn't know a whole lot about computer vision, so I ended up ditching the computer vision part of the project to focus on other areas.

So now I'm back with a new project that requires computer vision. The goal of this new project (which will be another post later on) is to track objects at relatively long distances of 60 meters or more. The list of off-the-shelf sensors that can do this is pretty slim, with the best being LIDAR sensors that will run you ~$60K. I plan instead to use a CV algorithm to track objects and a laser range finder on a two-axis servo mount to locate them. The crux of this project is, again, the CV algorithm. To start off, I'm getting my feet wet by revisiting the lane finding algorithm.

I'm not going at it alone, and instead decided to pick up a few books on OpenCV. "Learning OpenCV" has some now-outdated code but also has some great explanations of how the different CV algorithms work. I've been using it as my theory book, the one I reference when trying to figure out how a particular CV function works. The "OpenCV Cookbook" has been good for examples of how to actually write the code. I'm borrowing a bit of the code from the line detection chapter of that book to do lane detection.

I'll walk through the code step by step with images of each function as we go.

To start off I'm reading in the image to process from the terminal:

    int main(int argc, char* argv[]) {
        // Read input image
        Mat image = cv::imread(argv[1]);
        if (!image.data)
            return 0;


Original Image


This is the standard test image that I've been working with since the autonomous car project. I think it gives a best-case scenario for what a road should look like.

From the original I apply two main filters, the first being a Hough transform. The Hough transform takes a binary map as its input. One way to produce a binary map is with the Canny algorithm. Canny runs a gradient over the image to find sharp changes in pixel intensity; these are likely contours in the image. The output is a binary map that shows you where the contours of the image are located.

    // Canny algorithm
    Mat contours;
    Canny(image,contours,50,350);
    Mat contoursInv;
    threshold(contours,contoursInv,128,255,THRESH_BINARY_INV);


Contour Image


Next is to actually apply the Hough transform. I'll save the detailed explanations for the texts and just show the part that's important for understanding my code. The Hough transform represents lines using two parameters, rho and theta. Rho is the distance from the origin (the upper left corner) to the line, and theta is the angle of the perpendicular dropped from the origin to the line. These two parameters can be useful later on if you want to filter lines out based on their angle or location in the image.

    std::vector<Vec2f> lines;
    if (houghVote < 1 || lines.size() > 2){ // we lost all lines, reset
        houghVote = 200;
    }
    else { houghVote += 25; } // bump the threshold back up for the next frame
    while (lines.size() < 5 && houghVote > 0){
        HoughLines(contours,lines,1,PI/180, houghVote);
        houghVote -= 5;
    }
    std::cout << houghVote << "\n";
    Mat result(contours.rows,contours.cols,CV_8U,Scalar(255));
    image.copyTo(result);

One of the troubles I ran into was how to set the value for the minimum number of points passing through a line (houghVote). The best way I found was to set up a feedback loop, starting with a high number of required points. The houghVote decreases until it finds enough lines. For the next frame, in the case of a video, we increment houghVote by 25 to make sure not to miss any new lines that might appear.

    // Draw the lines
    std::vector<Vec2f>::const_iterator it= lines.begin();
    Mat hough(image.size(),CV_8U,Scalar(0));
    while (it!=lines.end()) {

        float rho= (*it)[0];   // first element is distance rho
        float theta= (*it)[1]; // second element is angle theta

        //if (theta < PI/20. || theta > 19.*PI/20.) { // filter on the theta angle to keep mostly vertical lines (theta near 0 or PI)

            // point of intersection of the line with first row
            Point pt1(rho/cos(theta),0);        
            // point of intersection of the line with last row
            Point pt2((rho-result.rows*sin(theta))/cos(theta),result.rows);
            // draw a white line
            line( result, pt1, pt2, Scalar(255), 8); 
            line( hough, pt1, pt2, Scalar(255), 8);
        //}

        //std::cout << "line: (" << rho << "," << theta << ")\n"; 
        ++it;
    }

Below is the result of the Hough transform.


Hough Image


After that, in a separate copy of the image, I run a probabilistic Hough transform, which is pretty much the same as the regular Hough transform except that it finds the endpoints of each line segment. To do this I first create a LineFinder instance (from the OpenCV Cookbook) and set the minimum line length, gap, and vote. Then the contour image is sent through the probabilistic transform.

    // Create LineFinder instance
    LineFinder ld;

    // Set probabilistic Hough parameters
    ld.setLineLengthAndGap(60,10);
    ld.setMinVote(4);

    // Detect lines
    std::vector<Vec4i> li= ld.findLines(contours);
    Mat houghP(image.size(),CV_8U,Scalar(0));
    ld.drawDetectedLines(houghP);


Probabilistic Hough Image


Now I have both a regular Hough transform image and a probabilistic transform image. I noticed that both tend to do a good job finding lanes. The regular transform does not find endpoints, and the probabilistic one tends to find the lanes plus several other lines that I don't want. To solve this problem I do a bitwise AND of the two images. The two Hough images were first drawn on separate blank images and then sent to bitwise_and(), which outputs only the lines that appear in both images. The result is the final processed image below.

Processed Image


And finally here is a video of the algorithm running.

The algorithm seems to do an okay job of finding the lanes in the video. One problem is that it also picks up lines from other things like power lines and the horizon. To fix this I'm working on adding a region of interest to the camera image, which I can use to only process the area where there is likely to be a lane. Another thing I'd like to add is a filter that removes vertical and horizontal lines using the theta angle calculated from the Hough transform.
