Embedded computing techniques for vision-based lane change decision aid systems
Main Author: Satzoda Ravi Kumar
Other Authors: Thambipillai Srikanthan
Format: Theses and Dissertations
Language: English
Published: 2013
Subjects: DRNTU::Engineering::Computer science and engineering::Theory of computation::Analysis of algorithms and problem complexity; DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision; DRNTU::Engineering::Computer science and engineering::Computer systems organization::Special-purpose and application-based systems
Online Access: https://hdl.handle.net/10356/54953
Institution: Nanyang Technological University
Description:
Incorrect assessment of the positions and speeds of nearby vehicles can lead to unsafe lane change decisions that compromise road safety. Vision-based lane change decision aid systems (LCDAS) are therefore being increasingly explored to automate the assessment of the scene around the host vehicle. In this thesis, computationally efficient techniques that can operate in complex road scene environments are proposed for a vision-based LCDAS.

Detection of multiple lanes is challenging, particularly because the prominence of the associated edge features varies with their perspective relative to the camera. A novel method is proposed for the automatic detection of the host and neighbor lanes in the near view. The method iteratively examines gradient magnitude histograms (GMH), gradient angle histograms (GAH) and the Hough transform (HT) to cater for the varying prominence of lane markings. Evaluation of the proposed method (referred to as the GMH-GAH-HT method) on a dataset of more than 6000 images captured under a variety of complex conditions shows high detection rates of 97% and 95% for the host and neighbor lanes respectively.

Lanes in the far view are fainter and smaller than those in the near view. The positions of the host and neighbor lanes detected in the near view are therefore used to deploy the GMH-GAH-HT method systematically in the far view, and selective processing of faint edges is introduced to improve the robustness of far-view lane detection. A block-level HT computation process called the Additive HT (AHT) is proposed to exploit the inherent parallelism of the HT, resulting in an order-of-magnitude speed-up, while a hierarchical derivative of the AHT, the HAHT, significantly reduces the cost of combining the block-level Hough spaces. In addition, the proposed far-view method estimates curved lanes by approximating them with a series of shorter straight-line segments.

The detection of vehicles and their proximity in the region of interest (RoI) is tackled next. Two main cues, the under-vehicle shadow and the multiple edge symmetry of vehicles, are used to establish the presence of a vehicle. Detecting the under-vehicle shadow requires the binarization threshold to be determined automatically under varying lighting and road conditions; the proposed linear-regression-based technique was evaluated on an extensive dataset and shown to adapt to varying illumination. Selective deployment of the GMH-GAH-HT method then extracts multiple edge-symmetry cues to further confirm the presence of vehicles. The proposed vehicle detection method yields a high detection rate of 95% on a test dataset with images taken under varying road, illumination and weather conditions. The lane width in the immediate vicinity of the target vehicle is then used to estimate its proximity to the host vehicle. Unlike conventional methods that rely on stereo vision and 3-D models to estimate the proximity of detected vehicles, the proposed method works on 2-D images, resulting in a notable reduction in computational complexity.
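To make the GMH-GAH front end described above more concrete, the following is a minimal NumPy sketch that computes gradient magnitude and angle histograms for a grayscale region of interest and selects candidate lane-edge pixels for a subsequent Hough transform. It is an illustration under stated assumptions, not the thesis's implementation: the bin counts, the quantile-based magnitude threshold and the two-bin orientation selection in `candidate_lane_edges` are illustrative choices.

```python
import numpy as np

def gradient_histograms(gray, mag_bins=32, ang_bins=18):
    """Gradient magnitude histogram (GMH) and gradient angle histogram (GAH)
    for a grayscale region of interest; bin counts are illustrative choices."""
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0          # orientation in [0, 180)

    gmh, mag_edges = np.histogram(mag, bins=mag_bins)
    gah, ang_edges = np.histogram(ang, bins=ang_bins,
                                  range=(0.0, 180.0), weights=mag)  # strength-weighted
    return mag, ang, (gmh, mag_edges), (gah, ang_edges)


def candidate_lane_edges(gray, keep_fraction=0.05):
    """Select strong-gradient pixels whose orientations fall in the dominant
    GAH bins -- a stand-in for one pass of GMH/GAH-based edge selection
    before a Hough transform is applied to the resulting edge map."""
    mag, ang, _, (gah, ang_edges) = gradient_histograms(gray)

    # Magnitude threshold from the upper tail of the magnitude distribution.
    thresh = np.quantile(mag, 1.0 - keep_fraction)

    # Keep orientations belonging to the two strongest angle-histogram bins.
    dominant = np.zeros_like(mag, dtype=bool)
    for b in np.argsort(gah)[-2:]:
        lo, hi = ang_edges[b], ang_edges[b + 1]
        dominant |= (ang >= lo) & (ang < hi)

    return (mag >= thresh) & dominant   # boolean edge map for the HT stage
```

The resulting edge map would then be handed to the Hough transform stage; repeating the selection with progressively lower thresholds is one way the varying prominence of markings noted above could be accommodated.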
Verification against ground truth confirms that the proposed method estimates the relative distance of a target vehicle from the ego vehicle to within 1 m in the near view and 4 m in the far view. Unlike four-wheeled vehicles such as cars and lorries, motorcycles cannot be detected from prominent under-vehicle shadows, clearly defined edges and symmetry signatures. This motivated a novel method that detects the tyre region of motorcycles. The method adapts to local illumination conditions by examining the intensities immediately adjacent to the lane markings, and the areas surrounding the tyre region are systematically explored to further strengthen the identification. An early evaluation of the technique shows promising results in complex scenarios.

Existing LCDAS mainly focus on the blind-spot detection aspect of the lane change decision aid process, since comprehensive assessment of the 360-degree scene around the vehicle remains complex and computationally demanding. In this work, a two-camera system consisting of front- and rear-facing monocular cameras is employed to establish a near-360-degree field of view. The proximity of the vehicles surrounding the host vehicle and their speeds are incorporated into a Gaussian risk function that estimates the risk posed by each vehicle in the front and rear views. A state machine is also introduced to monitor the blind-spot region by combining the risk information of vehicles in the front and rear RoIs. Finally, the proposed techniques lend themselves well to compute-efficient realizations, and simulations on real video sequences show that the integrated framework can be deployed to deterministically evaluate the risks associated with lane change maneuvers.
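The abstract names a Gaussian risk function and a blind-spot state machine but does not specify them. The sketch below is a hedged illustration of how such components might be combined: risk decays with distance, grows with closing speed, and the worst per-vehicle risk across the front and rear RoIs selects the advisory state. The names, thresholds and parameters (`gaussian_risk`, `blind_spot_state`, `sigma_d`, `speed_gain`, `caution_thr`, `unsafe_thr`) are assumptions for illustration, not the formulation used in the thesis.

```python
import math
from enum import Enum

def gaussian_risk(distance_m, closing_speed_mps, sigma_d=10.0, speed_gain=0.5):
    """Illustrative Gaussian risk score in [0, 1]: risk decays with the
    distance to the target vehicle and grows with its closing speed.
    sigma_d and speed_gain are assumed tuning parameters."""
    proximity_risk = math.exp(-(distance_m ** 2) / (2.0 * sigma_d ** 2))
    speed_factor = 1.0 + speed_gain * max(closing_speed_mps, 0.0)
    return min(1.0, proximity_risk * speed_factor)


class LaneChangeAdvice(Enum):
    SAFE = 0
    CAUTION = 1
    UNSAFE = 2


def blind_spot_state(front_risks, rear_risks, caution_thr=0.3, unsafe_thr=0.7):
    """Toy state machine that fuses per-vehicle risks from the front and rear
    RoIs into a single lane-change advisory; thresholds are assumptions."""
    worst = max(list(front_risks) + list(rear_risks), default=0.0)
    if worst >= unsafe_thr:
        return LaneChangeAdvice.UNSAFE
    if worst >= caution_thr:
        return LaneChangeAdvice.CAUTION
    return LaneChangeAdvice.SAFE


# Example: one vehicle 8 m behind in the adjacent lane, closing at 3 m/s.
risk = gaussian_risk(distance_m=8.0, closing_speed_mps=3.0)
print(blind_spot_state(front_risks=[], rear_risks=[risk]))   # -> LaneChangeAdvice.UNSAFE
```

In a deployed LCDAS the shape of the risk function and the state-machine thresholds would need to be calibrated against ground-truth maneuver data rather than the nominal values used here.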
Degree: Doctor of Philosophy (SCE)
School: School of Computer Engineering
Research Centre: Centre for High Performance Embedded Systems
Thesis Type: Doctoral thesis
DOI: 10.32657/10356/54953
Extent: 236 p.
Citation: Satzoda Ravi Kumar. (2013). Embedded computing techniques for vision-based lane change decision aid systems. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/54953