Lane Detection for Self-Driving Cars

Aniketkhosa
May 6, 2021

This blog describes a computer vision solution for lane detection in self-driving cars, which I prepared for my final year project. You can find all the code for this project on my GitHub.

About

People can find lane lines on the road fairly easily, even in a wide variety of conditions. Computers, on the other hand, do not find this easy. Shadows, glare, small changes in the color of the road, slight obstruction of the line: all things that people can generally still handle, but that a computer may struggle with mightily.

Identifying lanes on the road is a task performed by every human driver to keep their vehicle within lane constraints, keeping traffic smooth and minimising the chances of collision with cars in nearby lanes. It is just as critical a task for an autonomous vehicle. It turns out that recognising lane markings on roads is possible using well known computer vision techniques. We will cover how to use various techniques to identify and draw the inside of a lane, compute lane curvature, and even estimate the vehicle’s position relative to the center of the lane.

To detect and draw a polygon that takes the shape of the lane the car is currently in, we build a pipeline consisting of the following steps:

  • Computation of camera calibration matrix and distortion coefficients from a set of chessboard images
  • Distortion removal on images
  • Application of color and gradient thresholds to focus on lane lines
  • Production of a bird’s eye view image via perspective transform
  • Use of sliding windows to find hot lane line pixels
  • Fitting of second degree polynomials to identify left and right lines composing the lane
  • Computation of lane curvature and deviation from lane center
  • Warping and drawing of lane boundaries on image as well as lane curvature information

1. Camera Calibration & Image Distortion Removal

Image distortion occurs when a camera maps 3D objects in the real world onto a 2D image. This transformation isn’t always perfect, and distortion can change the apparent size, shape, or position of an object. We need to correct this distortion so the camera gives an accurate view of the scene. This is done by computing a camera calibration matrix from several pictures of a chessboard taken with the camera, using OpenCV’s cv2.calibrateCamera() function.

To compute the camera calibration matrix and distortion coefficients, we use multiple pictures of a chessboard on a flat surface, all taken by the same camera. OpenCV provides a convenient function, findChessboardCorners, that identifies the points where black and white squares intersect; from these correspondences the distortion parameters can be recovered. The image below shows the identified chessboard corners traced on a sample image:

Fig 1. Chessboard corners traced on a sample image

Fig 2. Distorted vs. undistorted chessboard image
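
As a rough sketch of this step, the calibration takes only a few lines of OpenCV. The 9x6 inner-corner count and the camera_cal/ image folder below are assumptions for illustration, not values taken from the project:

```python
import glob
import cv2
import numpy as np

# One set of 3D reference points (x, y, 0) for the chessboard's inner corners
nx, ny = 9, 6  # assumed inner-corner counts; adjust to your chessboard
objp = np.zeros((nx * ny, 3), np.float32)
objp[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2)

objpoints = []  # 3D points in real-world space
imgpoints = []  # 2D points in the image plane

for fname in glob.glob('camera_cal/*.jpg'):  # assumed folder of chessboard shots
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (nx, ny), None)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

# mtx is the camera matrix, dist the distortion coefficients
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
```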

2. Apply a distortion correction to raw images.

The calibration data collected in step 1 can now be used to apply distortion correction to raw images. An example is shown in Fig 3. The effect is harder to see on a road image than on a chessboard image, but if you look closely at the right edge for comparison, it becomes more obvious: the white car and the trees are slightly cropped after the distortion correction is applied.

Fig 3. Before and after results of un-distorting an example image
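
Once mtx and dist are available from the calibration above, undistorting a frame is a one-liner. A minimal sketch:

```python
import cv2

# Remove lens distortion using the calibration computed in step 1
def undistort(img, mtx, dist):
    return cv2.undistort(img, mtx, dist, None, mtx)
```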

3. Use color transforms, gradients, etc., to create a thresholded binary image.

The idea behind this step is to create an image processing pipeline in which the lane lines can be clearly identified by the algorithm. There are a number of ways to get there by playing around with different gradients, thresholds, and color spaces. After experimenting with several of these techniques on different images, I settled on the following combination: S channel thresholds in the HLS color space and V channel thresholds in the HSV color space, along with gradients, to detect the lane lines. An example of a final binary thresholded image is shown in Fig 4, where the lane lines are clearly visible.

Fig 4. Before and after results of applying gradients and thresholds to generate a binary thresholded image
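
A sketch of this combined thresholding step is below. The specific threshold ranges are illustrative assumptions that need tuning per camera and lighting, not the exact values used in the project:

```python
import cv2
import numpy as np

def threshold_binary(img, s_thresh=(170, 255), v_thresh=(170, 255),
                     sx_thresh=(20, 100)):
    s = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)[:, :, 2]  # S channel of HLS
    v = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 2]  # V channel of HSV
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # The x-direction gradient responds strongly to near-vertical lane edges
    sobelx = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 1, 0))
    scaled = np.uint8(255 * sobelx / np.max(sobelx))

    binary = np.zeros_like(gray)
    # Color: keep pixels that pass both the S and V channel thresholds
    binary[(s >= s_thresh[0]) & (s <= s_thresh[1])
           & (v >= v_thresh[0]) & (v <= v_thresh[1])] = 1
    # Gradient: also keep pixels with a strong x gradient
    binary[(scaled >= sx_thresh[0]) & (scaled <= sx_thresh[1])] = 1
    return binary
```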

4. Apply a perspective transform to generate a “bird’s-eye view” of the image.

Images have perspective, which causes lane lines to appear to converge in the distance even though they are parallel to each other. It is easier to measure the curvature of the lane lines once this perspective is removed. This can be achieved by transforming the image to a 2D bird’s-eye view in which the lane lines are always parallel. Since we are only interested in the lane lines, I selected four points on the original undistorted image and transformed the perspective to a bird’s-eye view, as shown in Fig 5 below.

Fig 5. Region of interest perspective warped to generate a Bird’s-eye view
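
The warp itself uses cv2.getPerspectiveTransform. The src/dst corner coordinates below are illustrative assumptions for a 1280x720 frame, not the exact points chosen in the project:

```python
import cv2
import numpy as np

def birds_eye(undistorted):
    h, w = undistorted.shape[:2]
    # Four points along the lane in the original image (src) mapped to a
    # rectangle in the warped image (dst) so the lines become parallel
    src = np.float32([[580, 460], [700, 460], [1040, 680], [260, 680]])
    dst = np.float32([[260, 0], [1040, 0], [1040, h], [260, h]])
    M = cv2.getPerspectiveTransform(src, dst)
    Minv = cv2.getPerspectiveTransform(dst, src)  # to warp back in step 7
    warped = cv2.warpPerspective(undistorted, M, (w, h), flags=cv2.INTER_LINEAR)
    return warped, M, Minv
```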

5. Detect lane pixels and fit to find the lane boundary.

There are a number of approaches to detecting the lane lines. I used convolution, which is the sum of the product of two separate signals: a window template and a vertical slice of the pixel image. I applied the convolution with a sliding window method, which maximizes the number of hot pixels in each window. The window template is slid across the image from left to right, and overlapping values are summed together, creating the convolved signal. The peak of the convolved signal is where the overlap of pixels is highest, and it is the most likely position for the lane marker. Using this method, the left and right line pixels were identified in the rectified binary image and fit with a second-degree polynomial. Example images with line pixels identified by the sliding window approach and the polynomial fit overlaid are shown in Fig 6.

Fig 6. Sliding window fit results
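
A sketch of the convolution-based sliding window search and the polynomial fit follows. The window sizes, search margin, and function names are assumptions for illustration:

```python
import numpy as np

def find_window_centroids(binary, window_width=50, window_height=80,
                          margin=100):
    window = np.ones(window_width)  # the convolution template
    h, w = binary.shape
    offset = window_width / 2
    centroids = []

    # Seed the search with column sums over the bottom quarter of the image
    l_sum = np.sum(binary[int(3 * h / 4):, :w // 2], axis=0)
    l_center = np.argmax(np.convolve(window, l_sum)) - offset
    r_sum = np.sum(binary[int(3 * h / 4):, w // 2:], axis=0)
    r_center = np.argmax(np.convolve(window, r_sum)) - offset + w // 2
    centroids.append((l_center, r_center))

    # Move up one window at a time; each peak of the convolved signal marks
    # the densest cluster of hot pixels within the search margin
    for level in range(1, h // window_height):
        layer = np.sum(binary[h - (level + 1) * window_height:
                              h - level * window_height, :], axis=0)
        conv = np.convolve(window, layer)
        l_min = int(max(l_center + offset - margin, 0))
        l_max = int(min(l_center + offset + margin, w))
        l_center = np.argmax(conv[l_min:l_max]) + l_min - offset
        r_min = int(max(r_center + offset - margin, 0))
        r_max = int(min(r_center + offset + margin, w))
        r_center = np.argmax(conv[r_min:r_max]) + r_min - offset
        centroids.append((l_center, r_center))
    return centroids

def fit_lane_polynomials(centroids, h, window_height=80):
    # Fit x = A*y^2 + B*y + C through the window centers of each line
    ys = h - (np.arange(len(centroids)) + 0.5) * window_height
    left_fit = np.polyfit(ys, [c[0] for c in centroids], 2)
    right_fit = np.polyfit(ys, [c[1] for c in centroids], 2)
    return left_fit, right_fit
```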

6. Determine the curvature of the lane and the vehicle position with respect to the center of the lane.

From the measured positions of the lane lines, I estimated how much the road is curving, along with the vehicle’s position with respect to the center of the lane. I assumed that the camera is mounted at the center of the car.
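
For a fit x = A*y^2 + B*y + C, the radius of curvature is R = (1 + (2Ay + B)^2)^(3/2) / |2A|. A sketch of the computation, where the pixel-to-meter scale factors are typical assumptions rather than values measured in the project:

```python
import numpy as np

# Assumed meters-per-pixel conversions for a 1280x720 bird's-eye image
ym_per_pix = 30 / 720   # meters per pixel along y
xm_per_pix = 3.7 / 700  # meters per pixel along x

def curvature_and_offset(left_fit, right_fit, h, w):
    y = h - 1  # evaluate at the row closest to the car

    def radius(fit):
        # Rescale pixel-space coefficients to meters, then apply
        # R = (1 + (2*A*y + B)**2)**1.5 / |2*A|
        A = fit[0] * xm_per_pix / ym_per_pix ** 2
        B = fit[1] * xm_per_pix / ym_per_pix
        return (1 + (2 * A * y * ym_per_pix + B) ** 2) ** 1.5 / abs(2 * A)

    # Camera assumed mounted at the car's center: the offset is the distance
    # between the image midpoint and the lane midpoint, converted to meters
    lane_mid = (np.polyval(left_fit, y) + np.polyval(right_fit, y)) / 2
    offset = (lane_mid - w / 2) * xm_per_pix
    return radius(left_fit), radius(right_fit), offset
```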

7. Warp the detected lane boundaries back onto the original image and display numerical estimation of lane curvature and vehicle position.

The fit from the rectified image has been warped back onto the original image and plotted to identify the lane boundaries. Fig 7 demonstrates that the lane boundaries were correctly identified and warped back onto the original image. An example image with the lane, curvature, and position from center is shown in Fig 8.

Fig 7. Lane line boundaries warped back onto original image

Fig 8. Detected lane lines overlaid on the original image, along with curvature radius and the position of the car
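
A sketch of this final overlay step, using the inverse perspective matrix Minv from step 4 (the function and variable names here are mine, not the project’s):

```python
import cv2
import numpy as np

def draw_lane(undistorted, binary_warped, left_fit, right_fit, Minv):
    h, w = binary_warped.shape[:2]
    ploty = np.linspace(0, h - 1, h)
    left_x = np.polyval(left_fit, ploty)
    right_x = np.polyval(right_fit, ploty)

    # Fill the polygon between the two fitted lines in the warped frame
    overlay = np.zeros((h, w, 3), dtype=np.uint8)
    pts_left = np.transpose(np.vstack([left_x, ploty]))
    pts_right = np.flipud(np.transpose(np.vstack([right_x, ploty])))
    pts = np.int32(np.vstack((pts_left, pts_right)))
    cv2.fillPoly(overlay, [pts], (0, 255, 0))

    # Warp the overlay back to the original perspective and blend it in
    unwarped = cv2.warpPerspective(
        overlay, Minv, (undistorted.shape[1], undistorted.shape[0]))
    return cv2.addWeighted(undistorted, 1.0, unwarped, 0.3, 0)
```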

Final Results

Video Credits: YouTube / self shot
