Lane perception in rainy conditions for autonomous vehicles
Main Author: | Mahendran, Prabhu Shankar |
---|---|
Other Authors: | Soong Boon Hee (School of Electrical and Electronic Engineering) |
Format: | Thesis-Doctor of Philosophy |
Language: | English |
Published: | Nanyang Technological University, 2024 |
Subjects: | Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision; Engineering::Computer science and engineering::Computing methodologies::Pattern recognition; Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence |
Online Access: | https://hdl.handle.net/10356/173381 |
DOI: | 10.32657/10356/173381 |
License: | Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) |
Institution: | Nanyang Technological University |
Description:
One of the critical tasks performed by Autonomous Vehicles (AV) and Advanced Driver Assistance Systems (ADAS) is the detection and localization of lanes. To effectively automate lane keeping, route planning, and lane changing, it is essential to know the number of lanes, their orientations, and their trajectories. As a result, lane detection has attracted immense research interest in recent years. However, lane detection in rainy conditions remains underexplored. Rain particles interact with visual sensors in numerous ways, making the extraction of strong discriminative features even more difficult and hindering the detection task. De-raining has been studied intensively to combat these disruptive effects, but recent studies suggest that de-raining may destroy further discriminative features. There is also a shortage of research focusing on lane detection in rainy conditions, along with a lack of annotated lane detection datasets for rainy weather.
Motivated by these gaps, this thesis presents a novel lane detection framework that targets lane enhancement, segmentation, and detection in rainy conditions. First, a unique enhancement method, the Deep Residual Enhancement Network (DRENet), is developed to amplify lanes over the noisy signal. A multiplicative and an additive factor are learnt instead of manipulating raw pixel values directly. This enables better focus on the large, connected structures of lanes and helps to avoid non-lane regions. An auxiliary foreground segmentation task is introduced to further cement attention on lane regions. As it is challenging and costly to build an applicable lane detection dataset in rainy weather, a unique training framework is adopted that utilizes unannotated traffic scenes in rain together with an established lane detection dataset captured in non-rainy weather.
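The abstract does not give implementation details, but the core idea of predicting a multiplicative and an additive correction, rather than raw pixel values, can be sketched roughly as follows. This is a minimal PyTorch sketch under assumed layer sizes; the `EnhanceNetSketch` name, the small trunk, and the two-head layout are illustrative assumptions, not the actual DRENet architecture.

```python
# Rough sketch of residual-style enhancement with learnt multiplicative and
# additive factors, plus an auxiliary foreground head. Names, channel widths,
# and the overall layout are illustrative assumptions, not the thesis's DRENet.
import torch
import torch.nn as nn

class EnhanceNetSketch(nn.Module):
    def __init__(self, ch: int = 32):
        super().__init__()
        # Shared feature trunk over the rainy input image.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Per-pixel multiplicative factor m and additive factor a.
        self.mul_head = nn.Conv2d(ch, 3, 3, padding=1)
        self.add_head = nn.Conv2d(ch, 3, 3, padding=1)
        # Auxiliary foreground (lane-region) segmentation head.
        self.fg_head = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, x):
        f = self.trunk(x)
        m = torch.sigmoid(self.mul_head(f)) * 2.0   # scale roughly in [0, 2]
        a = torch.tanh(self.add_head(f))            # bounded additive shift
        enhanced = torch.clamp(m * x + a, 0.0, 1.0) # amplify lanes, suppress rain noise
        fg_logits = self.fg_head(f)                 # auxiliary foreground map
        return enhanced, fg_logits

if __name__ == "__main__":
    net = EnhanceNetSketch()
    img = torch.rand(1, 3, 128, 256)                # dummy rainy frame in [0, 1]
    out, fg = net(img)
    print(out.shape, fg.shape)                      # (1, 3, 128, 256) (1, 1, 128, 256)
```

Because the network only scales and shifts the input, the output stays tied to the original image structure, which is one way such a design can avoid hallucinating content at non-lane locations.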
To further strengthen lane segmentation capabilities in rain, the Joint Enhancement and Segmentation Network (JESNet) is presented next. Inspired by weight sharing in multi-task networks, JESNet enhances and segments lane regions in a single network. To improve the generalizability of the lane enhancement process, adversarial training is introduced with a specialized multi-receptive field discriminator, which audits semantic properties at different scales and depths in the lane-enhanced image without downsampling operations. Attention to global structures is induced through a unique combination of multi-scale segmentation and dilation operations. As a result, the vanishing region of the road may be implicitly learnt without a manual vanishing point label.
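One common way to give a discriminator several receptive fields at full resolution, without any downsampling, is to branch over convolutions with different dilation rates. The sketch below illustrates that general idea only; the branch count, dilation rates, channel widths, and the patch-level output are assumptions and not the specific discriminator described in the thesis.

```python
# Illustrative multi-receptive-field discriminator: parallel dilated convolutions
# examine the lane-enhanced image at several scales while keeping full resolution.
# Dilation rates, widths, and the per-patch output are assumptions for this sketch.
import torch
import torch.nn as nn

class MultiRFDiscriminatorSketch(nn.Module):
    def __init__(self, ch: int = 16, dilations=(1, 2, 4, 8)):
        super().__init__()
        # One branch per dilation rate; padding keeps the spatial size unchanged.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(3, ch, 3, padding=d, dilation=d),
                nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(ch, ch, 3, padding=d, dilation=d),
                nn.LeakyReLU(0.2, inplace=True),
            )
            for d in dilations
        )
        # Fuse all branches into a per-location real/fake score map.
        self.fuse = nn.Conv2d(ch * len(dilations), 1, 1)

    def forward(self, x):
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(feats)   # logits at full resolution, one score per location

if __name__ == "__main__":
    disc = MultiRFDiscriminatorSketch()
    scores = disc(torch.rand(1, 3, 128, 256))
    print(scores.shape)           # (1, 1, 128, 256)
```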
Lastly, adaptive postprocessing is necessary to account for the false positive and false negative predictions that are common in rainy conditions. To achieve this, a novel lane construction module is designed. The module first generates sparse lane points using the statistics of local neighborhoods in the segmentation maps; missed segments and false positives are compensated using these local spatial cues. Lanes are then constructed by fitting a polynomial curve over the predicted points, so that the number of lanes in the rainy scene, their orientations, and their trajectories can be determined. As only segmentation maps are used, this module is effectively weather agnostic.
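As a rough, single-lane illustration of this postprocessing idea, the sketch below derives sparse points from row-wise statistics of a segmentation probability map and fits a polynomial through them. The row-wise weighted mean, the confidence threshold, and the quadratic fit are assumptions for the sketch, not the thesis's actual module.

```python
# Simplified, single-lane illustration: derive sparse lane points from local
# statistics of a segmentation map, then fit a polynomial lane curve over them.
import numpy as np

def lane_points_from_mask(prob_map: np.ndarray, min_mass: float = 1.0):
    """For each row, use the probability-weighted mean column as a lane point."""
    h, w = prob_map.shape
    cols = np.arange(w, dtype=np.float64)
    ys, xs = [], []
    for y in range(h):
        row = prob_map[y]
        mass = row.sum()
        if mass >= min_mass:            # skip rows with too little evidence
            ys.append(y)
            xs.append(float((row * cols).sum() / mass))
    return np.asarray(ys, dtype=np.float64), np.asarray(xs, dtype=np.float64)

def fit_lane(ys: np.ndarray, xs: np.ndarray, degree: int = 2):
    """Fit x = f(y) so missed rows and spurious blobs are smoothed by the curve."""
    coeffs = np.polyfit(ys, xs, degree)
    return np.poly1d(coeffs)

if __name__ == "__main__":
    # Synthetic segmentation map with a gently curving lane.
    h, w = 64, 128
    prob = np.zeros((h, w))
    for y in range(h):
        x = int(20 + 0.8 * y + 0.005 * y * y)
        prob[y, max(x - 2, 0):min(x + 3, w)] = 1.0
    ys, xs = lane_points_from_mask(prob)
    lane = fit_lane(ys, xs)
    print(lane(0.0), lane(float(h - 1)))  # reconstructed lane x at top and bottom rows
```

Fitting x as a function of the image row means short gaps from missed segments are bridged by the curve rather than left as breaks, which is why such a step is largely independent of the weather in the input image.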
To evaluate the lane detection framework in rainy conditions, tests were conducted on the proposed rain-translated dataset, which is based on the TuSimple dataset. Further evaluations were conducted on our own unannotated rainy RainSG dataset, the clear TuSimple dataset, and the original clear and several rain-translated portions of the CULane database. Numerical results are comparable with the state-of-the-art in clear settings, while the combined JESNet and lane construction module achieved up to 5% and 14% improvements in lane detection accuracy and false positive rate, respectively, over a leading detection method trained on our rain-translated TuSimple dataset. The proposed method achieved good performance in several common rainy scenes, and similarly good performance was observed in challenging conditions from the clear CULane dataset.