Embedded computing techniques for vision-based lane change decision aid systems

Bibliographic Details
Main Author: Satzoda Ravi Kumar
Other Authors: Thambipillai Srikanthan
Format: Theses and Dissertations
Language: English
Published: 2013
Subjects:
Online Access: https://hdl.handle.net/10356/54953
Institution: Nanyang Technological University
Summary: Incorrect assessment of the positions and speeds of nearby vehicles will compromise road safety due to unsafe lane change decisions. Vision-based lane change decision aid systems (LCDAS) are being increasingly explored to facilitate the automatic assessment of the scene around the host vehicle. In this thesis, computationally efficient techniques that can operate in complex road scene environments are proposed for a vision-based LCDAS. Detecting multiple lanes is challenging, particularly because the prominence of the associated edge features varies with their perspective with respect to the camera. In this thesis, a novel method is proposed for the automatic detection of the host and neighbor lanes in the near view. The proposed method iteratively and systematically investigates gradient magnitude histograms (GMH), gradient angle histograms (GAH) and the Hough Transform (HT) to cater for the varying prominence of lane markings. Evaluation of the proposed method (referred to as the GMH-GAH-HT method) on a dataset of more than 6000 images captured in various complex conditions shows that it achieves high detection rates of 97% and 95% for the host and neighbor lanes, respectively. It was observed that lanes in the far view are fainter and smaller than those in the near view. The positions of the host and neighbor lanes in the near view are therefore relied upon when deploying the GMH-GAH-HT method to systematically ascertain the far-view lanes, and selective processing of faint edges was introduced to enhance the robustness of the proposed technique in detecting the host and neighbor lanes in the far view. A block-level HT computation process called the Additive HT (AHT) is also proposed to exploit the inherent parallelism of the HT, resulting in an order-of-magnitude speed-up. The computational complexity of combining the block-level Hough spaces is further reduced by introducing a hierarchical derivative of the AHT called the HAHT. 
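The block-level decomposition behind the AHT rests on the additive property of the Hough Transform: votes accumulated independently per image block, with block coordinates translated back to the image frame, sum to exactly the full-image Hough space, so each block can be processed in parallel. A minimal sketch of this property (the block size and accumulator layout are illustrative assumptions, not the thesis's implementation):

```python
import numpy as np

def hough_accumulator(points, h, w, n_theta=180):
    """Accumulate votes in (rho, theta) space for a set of (y, x) edge points."""
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    thetas = np.deg2rad(np.arange(n_theta))
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for y, x in points:
        # rho = x cos(theta) + y sin(theta), shifted so indices are non-negative
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    return acc

def additive_hough(edge_img, block=16):
    """Block-level HT: vote within each block independently (parallelisable),
    then add the per-block accumulators -- the additive property of the HT."""
    h, w = edge_img.shape
    total = None
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ys, xs = np.nonzero(edge_img[by:by + block, bx:bx + block])
            # translate block-local coordinates back to the image frame
            pts = list(zip(ys + by, xs + bx))
            acc = hough_accumulator(pts, h, w)
            total = acc if total is None else total + acc
    return total
```

Because integer vote accumulation commutes, summing block accumulators reproduces the monolithic Hough space bit-for-bit; the hierarchical variant (HAHT) would then merge accumulators pairwise rather than all at once.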
In addition, the proposed method for detecting multiple lanes in the far view is capable of estimating curved lanes by approximating them with a number of smaller straight-line segments. The detection of vehicles and their proximity in the region of interest (RoI) is tackled next. Two main characteristics, namely the ‘under-vehicle shadow’ and the ‘multiple edge symmetry of vehicles’, were relied upon to first establish the presence of a vehicle. Detecting the under-vehicle shadow necessitated the automatic determination of the binarization threshold under varying lighting and road conditions. The proposed linear-regression-based technique was evaluated on an exhaustive dataset to confirm that it adapts to varying illumination conditions. Selective deployment of the GMH-GAH-HT method made it possible to extract multiple edge-symmetry cues so as to further ascertain the presence of vehicles. The proposed method for vehicle detection is shown to yield a high detection rate of 95% on a test dataset with images taken under varying road, illumination and weather conditions. The lane width in the immediate vicinity of the target vehicle was then employed to estimate the proximity of the vehicle to the host vehicle. Unlike conventional methods that rely on stereo vision and 3-D models for estimating the proximity of detected vehicles, the proposed method has been shown to work with 2-D images, resulting in a notable reduction in computational complexity. Verification against ground truth confirms that the proposed method estimates the relative distance of a target vehicle from the ego vehicle with an accuracy of 1 m in the near view and 4 m in the far view. Unlike four-wheeled vehicles such as cars and lorries, motorcycles cannot be detected by relying on prominent under-vehicle shadows, clearly defined edges and symmetry signatures. This motivated the development of a novel method to detect the tyre region of motorcycles. 
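The lane-width cue for monocular proximity estimation follows from the pinhole-camera model: the apparent (pixel) width of the lane at the target vehicle's position shrinks inversely with longitudinal distance, so a single calibrated 2-D image suffices. A minimal sketch, in which the focal length and physical lane width are illustrative assumptions rather than the thesis's calibration:

```python
# Monocular distance from local lane width (pinhole model sketch).
LANE_WIDTH_M = 3.5   # assumed physical lane width in metres
FOCAL_PX = 700.0     # assumed camera focal length in pixels

def distance_from_lane_width(lane_width_px):
    """Apparent lane width (px) is inversely proportional to the
    longitudinal distance of the road point where it is measured."""
    return FOCAL_PX * LANE_WIDTH_M / lane_width_px
```

With these example values, a lane appearing 245 px wide at the target's position implies a range of about 10 m; halving the apparent width doubles the estimated distance, which is why far-view estimates are inherently coarser.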
The proposed method adapts to local illumination conditions by examining the intensities immediately adjacent to the lane markings. Systematic exploration of the areas surrounding the tyre region was also carried out to further strengthen the identification process. An early evaluation of the proposed technique shows promising results in complex scenarios. Existing LCDAS mainly focus on the blind-spot-detection aspect of the lane change decision aid process, as a comprehensive assessment of the 360-degree scene around the vehicle is still a rather complex and computationally demanding process. In this work, a two-camera system consisting of front- and rear-facing monocular cameras has been employed to establish a near-360-degree field of view. The proximity of the vehicles surrounding the host vehicle and their speeds were incorporated into a Gaussian risk function for estimating the risks posed by different vehicles in the front and rear views. A state machine was also introduced to monitor the blind-spot region by combining the risk information of vehicles in the front and rear RoIs. Finally, the proposed techniques lend themselves well to compute-efficient realizations, and simulations on real video sequences show that the integrated framework can be deployed to deterministically evaluate the risks associated with lane change maneuvers.
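The abstract does not give the exact form of the Gaussian risk function, so the following is only a plausible sketch of the idea it describes: risk decays as a Gaussian of the target vehicle's distance and is weighted upward by its closing speed, with the spread parameter and speed weighting as assumed, hypothetical values:

```python
import math

def gaussian_risk(distance_m, closing_speed_mps, sigma=15.0):
    """Illustrative Gaussian risk: highest for close, fast-approaching
    vehicles, decaying smoothly with distance. sigma and the speed
    weighting are assumptions, not the thesis's parameters."""
    proximity_risk = math.exp(-(distance_m ** 2) / (2 * sigma ** 2))
    # Weight by closing speed; receding vehicles (<= 0) add no speed term.
    speed_factor = max(closing_speed_mps, 0.0)
    return proximity_risk * (1.0 + speed_factor)
```

A blind-spot state machine would then threshold such per-vehicle risks from the front and rear RoIs to decide whether a lane change is currently advisable.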