Author: Dhruv Shrivastava
Imagine a car that can sense its environment, navigate roads, and carry out transportation tasks without any human effort or input. Such a car is called an autonomous car, or self-driving car. Autonomous cars analyse their surroundings with cameras, radar, lidar, GPS and navigational maps, and act on that data without any human support. Few would disagree that, in the modern era, self-driving cars are a technology that will have a huge impact on people's lives. Most readers will be familiar with the work of Tesla, Google and other companies on self-driving cars, such as the Tesla Model S, whose Autopilot handles highway driving. One of the main goals of automated vehicles is to increase safety and reduce road accidents, and thus save lives. Among the many complex and challenging tasks of automated vehicles is road lane detection, or the detection of road lane boundaries.
Fig.1: Self driving car.
In this paper we will implement a lane detection technique using OpenCV, the open-source library for computer vision, machine learning and image processing. Specifically, we will implement simple lane detection, which detects straight lane lines.
What is Computer Vision?
Computer vision is a field of Artificial Intelligence that trains machines to interpret and understand the real world. Using cameras, video and deep learning models, machines can accurately classify and identify objects on their own and then make decisions based on the data they interpret. Today computer vision is used for many purposes, such as image segmentation, object detection, edge detection, facial recognition and pattern detection.
Fig.2
Lane detection Implementation:
While driving any vehicle, lane lines are an important component indicating the flow of traffic and where the vehicle should drive. It is essential to remain in a single lane and to avoid crossing lanes so that accidents don't happen. Lane detection is a good starting point for building your own self-driving car with OpenCV and Python.
Simple Lane Detection:
Simple lane detection is a lane detection technique that detects straight lane lines. We will be using the "Atom" or "Sublime" text editor, or whatever editor you like working with. The purpose is to develop a program that can identify lane lines in a picture or a video. Here's the structure of our lane detection pipeline:
Reading images.
Canny edge detection.
Region of interest.
Hough line detection.
Line filtering and averaging.
Step 1: Importing the required libraries and reading the image.
When creating an image processing pipeline, we first have to read the image we will use to test the pipeline. In OpenCV, images are read with "cv2.imread", as shown below:
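Below is a minimal sketch of this step, assuming the test image is saved as "test_image.jpg" (a placeholder name):

```python
import cv2

# Load the test image as a BGR NumPy array
image = cv2.imread('test_image.jpg')

# Display it in a window until a key is pressed
cv2.imshow('Original image', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
```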
Output:
Fig.3: Image.
Step 2: Applying edge detection on the image.
The goal of edge detection is to identify the boundaries of objects within an image. With edge detection we identify sharp changes in intensity between adjacent pixels. An image is made up of pixels, each holding a light intensity whose numeric value ranges from 0 to 255.
Fig.4: Self driving car.
As you can see in the above image, the outline of white pixels corresponds to discontinuities in brightness, and this helps us identify the edges in the image: an edge is identified by a difference in intensity between adjacent pixels, and wherever there is a sharp change in intensity there is a strong gradient. By tracing out all these pixels, we obtain the edges. In edge detection, we first have to convert our RGB colourspace image to a grayscale image.
Code:
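The following is a sketch of the edge detection step; the 5x5 Gaussian kernel and the Canny thresholds of 50 and 150 are common choices assumed here, not necessarily the exact values used in the original code.

```python
import cv2

def canny(image):
    # Convert the BGR image to grayscale so each pixel is a single intensity
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Smooth with a 5x5 Gaussian kernel to reduce noise before edge detection
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # Keep edges whose intensity gradient falls between the two thresholds
    return cv2.Canny(blur, 50, 150)

image = cv2.imread('test_image.jpg')
canny_image = canny(image)
cv2.imshow('Canny edges', canny_image)
cv2.waitKey(0)
```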
Outputs:
Fig.5: Grayscale image.
Fig.6: Blurred grayscale image.
Fig.7: Canny edge detection image.
We used a Gaussian blur to smooth the image and reduce unnecessary noise before edge detection.
Step 3: Region of Interest.
In the previous step we identified the edges in the image, and now we will isolate the region containing the lane lines we are interested in.
Fig.8: Region of interest.
We will use matplotlib to find the coordinates of the region of interest, which is a triangle, then create a completely black image and fill it with a polygon covering that region. The mask we create will have the same size as the Canny edge-detected image.
Fig.9: Masked image.
Now we will use this masked image to keep only the relevant portion of the image. We do this using binary numbers: as we know, pixel values range from 0 to 255, and each value can be represented in binary, so 0 becomes 00000000 while 255 becomes 11111111. In our masked image above, the white polygon contains pixels with intensity 11111111, whereas the black area outside the polygon contains pixels with intensity 00000000. We apply the mask to our Canny image to extract the region of interest using a bitwise AND operation, which is performed element-wise between the two images. In the black region of the mask, every pixel has the value 00000000, so ANDing it with the corresponding pixel of the edge image always gives 00000000; every intensity in that region becomes zero and the region turns completely black. Similarly, the bitwise AND is performed between the white region of the mask and the corresponding pixels of the edge image. The white region has intensity 11111111, and a bitwise AND between 11111111 and any other value returns that same value, so those pixels are left unchanged.
Code:
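Here is a sketch of the masking step, continuing from the Canny snippet above; the triangle vertices (200, height), (1100, height) and (550, 250) are assumed values that depend on the resolution of your test image.

```python
import cv2
import numpy as np

def region_of_interest(image):
    height = image.shape[0]
    # Triangular polygon roughly covering the lane area; the vertices are
    # assumed values and depend on the resolution of your test image
    polygons = np.array([[(200, height), (1100, height), (550, 250)]], dtype=np.int32)
    # Completely black mask of the same size as the edge image
    mask = np.zeros_like(image)
    # Fill the polygon with white pixels (intensity 255, i.e. all ones in binary)
    cv2.fillPoly(mask, polygons, 255)
    # Bitwise AND keeps only the edge pixels that fall inside the white polygon
    return cv2.bitwise_and(image, mask)

cropped_image = region_of_interest(canny_image)
cv2.imshow('Region of interest', cropped_image)
cv2.waitKey(0)
```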
Output:
Fig.10: Output.
Step 4: Hough line detection.
Now we will use the Hough transform to detect the straight lines in the image and thus identify the lane lines. We will use cv2.HoughLinesP(), which finds the line segments for us. We then display these lines on our real image with a function "display_line", which takes the image and the detected line segments and draws them. The output of the function will be:
Code:
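The sketch below continues from the earlier snippets; the Hough parameters (2-pixel distance resolution, 1-degree angular resolution, a 100-vote threshold, minLineLength=40 and maxLineGap=5) are common choices and an assumption here, not values taken from the original code.

```python
import cv2
import numpy as np

def display_line(image, lines):
    # Draw each detected line segment in blue on a black image of the same size
    line_image = np.zeros_like(image)
    if lines is not None:
        for line in lines:
            x1, y1, x2, y2 = line.reshape(4)
            cv2.line(line_image, (x1, y1), (x2, y2), (255, 0, 0), 10)
    return line_image

# Probabilistic Hough transform on the masked edge image: 2-pixel distance
# resolution, 1-degree angular resolution, 100-vote accumulator threshold
lines = cv2.HoughLinesP(cropped_image, 2, np.pi / 180, 100, np.array([]),
                        minLineLength=40, maxLineGap=5)
line_image = display_line(image, lines)
cv2.imshow('Hough lines', line_image)
cv2.waitKey(0)
```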
Output:
Fig.11: Hough line detection.
Step 5: Optimizing Hough line detection.
The lines that are currently displayed seem inconsistent, but we want a single consistent straight line on each side of the lane. To achieve this, we average the multiple line segments belonging to each side into a single line that traces them. We also discard lines whose slope or location falls outside a determined range. So, we calculate the slope and intercept of each line belonging to a particular lane boundary and then average those slopes and intercepts to produce one line per side.
Code:
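Here is a sketch of the averaging step, reusing display_line and the variables from the previous snippets; the choice of ending the averaged lines at 3/5 of the image height is an assumption about how far up the image they should extend.

```python
import numpy as np

def make_coordinates(image, line_parameters):
    # Turn a (slope, intercept) pair back into two end points for drawing
    slope, intercept = line_parameters
    y1 = image.shape[0]          # start at the bottom of the image
    y2 = int(y1 * 3 / 5)         # end a bit above the middle (assumed proportion)
    x1 = int((y1 - intercept) / slope)
    x2 = int((y2 - intercept) / slope)
    return np.array([x1, y1, x2, y2])

def average_slope_intercept(image, lines):
    left_fit, right_fit = [], []
    for line in lines:
        x1, y1, x2, y2 = line.reshape(4)
        # Fit a first-degree polynomial to get the segment's slope and intercept
        slope, intercept = np.polyfit((x1, x2), (y1, y2), 1)
        # Negative slope -> left lane boundary, positive slope -> right boundary
        if slope < 0:
            left_fit.append((slope, intercept))
        else:
            right_fit.append((slope, intercept))
    # Average the parameters on each side (assumes both sides produced segments)
    left_line = make_coordinates(image, np.average(left_fit, axis=0))
    right_line = make_coordinates(image, np.average(right_fit, axis=0))
    return np.array([left_line, right_line])

averaged_lines = average_slope_intercept(image, lines)
line_image = display_line(image, averaged_lines)
```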
Output:
Fig.12: Optimized Hough detected lines.
Step 6: Adding the previously detected lines to our original image.
We will do this with cv2.addWeighted(), which blends the line image with the original and thus displays our lane-detected image.
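A minimal sketch of the blending step, continuing from the previous snippets; the weights 0.8 and 1 and the scalar offset of 1 are commonly used values and are assumed here.

```python
# Blend the averaged line image onto the original image: the original keeps a
# weight of 0.8, the line image a weight of 1, with a scalar offset of 1
combo_image = cv2.addWeighted(image, 0.8, line_image, 1, 1)
cv2.imshow('Lane detection', combo_image)
cv2.waitKey(0)
```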
Final output:
Fig.13: Lane detected image.
Step 7: Finding lanes in a video.
In this step we will find the lanes in a video. The process is the same as finding lanes in an image, except that it is applied frame by frame. First we capture the video with cv2.VideoCapture(), then we use the read function to decode each video frame, and the detected lines are drawn on every frame, as in the sketch below.
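The sketch below ties the whole pipeline together for video, reusing the helper functions defined in the earlier snippets; "test_video.mp4" is a placeholder for your own road footage.

```python
import cv2
import numpy as np

# "test_video.mp4" is a placeholder file name
cap = cv2.VideoCapture('test_video.mp4')
while cap.isOpened():
    ret, frame = cap.read()              # decode the next frame
    if not ret:
        break                            # stop when the video ends
    canny_frame = canny(frame)
    cropped = region_of_interest(canny_frame)
    lines = cv2.HoughLinesP(cropped, 2, np.pi / 180, 100, np.array([]),
                            minLineLength=40, maxLineGap=5)
    averaged_lines = average_slope_intercept(frame, lines)
    line_image = display_line(frame, averaged_lines)
    combo = cv2.addWeighted(frame, 0.8, line_image, 1, 1)
    cv2.imshow('Lane detection', combo)
    if cv2.waitKey(1) & 0xFF == ord('q'):    # press q to stop early
        break
cap.release()
cv2.destroyAllWindows()
```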