Occlusion-free road segmentation leveraging semantics for autonomous vehicles

Deep convolutional neural networks have led the trend in vision-based road detection; however, obtaining the full road area despite occlusion from monocular vision remains challenging due to the dynamic scenes in autonomous driving. Inferring the occluded road area requires a comprehensive understanding of the geometry and semantics of the visible scene. To this end, we create a small but effective dataset based on the KITTI dataset, named the KITTI-OFRS (KITTI occlusion-free road segmentation) dataset, and propose a lightweight, efficient, fully convolutional neural network called OFRSNet (occlusion-free road segmentation network) that learns to predict occluded portions of the road in the semantic domain by looking around foreground objects and at the visible road layout. In particular, a global context module is used to build up the down-sampling and joint context up-sampling blocks in our network, which improves its performance. Moreover, a spatially-weighted cross-entropy loss is designed, which significantly increases the accuracy of this task. Extensive experiments on different datasets verify the effectiveness of the proposed approach, and comparisons with current state-of-the-art methods show that the proposed method outperforms the baseline models by achieving a better trade-off between accuracy and runtime, which makes our approach applicable to autonomous vehicles in real time.


Bibliographic Details
Main Authors: Wang, Kewei, Yan, Fuwu, Zou, Bin, Tang, Luqi, Yuan, Quan, Lv, Chen
Other Authors: School of Mechanical and Aerospace Engineering
Format: Article
Language: English
Published: 2020
Subjects:
Online Access:https://hdl.handle.net/10356/142140
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-142140
record_format dspace
spelling sg-ntu-dr.10356-1421402023-03-04T17:23:05Z Occlusion-free road segmentation leveraging semantics for autonomous vehicles Wang, Kewei Yan, Fuwu Zou, Bin Tang, Luqi Yuan, Quan Lv, Chen School of Mechanical and Aerospace Engineering Engineering::Mechanical engineering Autonomous Vehicles Scene Understanding Deep convolutional neural networks have led the trend in vision-based road detection; however, obtaining the full road area despite occlusion from monocular vision remains challenging due to the dynamic scenes in autonomous driving. Inferring the occluded road area requires a comprehensive understanding of the geometry and semantics of the visible scene. To this end, we create a small but effective dataset based on the KITTI dataset, named the KITTI-OFRS (KITTI occlusion-free road segmentation) dataset, and propose a lightweight, efficient, fully convolutional neural network called OFRSNet (occlusion-free road segmentation network) that learns to predict occluded portions of the road in the semantic domain by looking around foreground objects and at the visible road layout. In particular, a global context module is used to build up the down-sampling and joint context up-sampling blocks in our network, which improves its performance. Moreover, a spatially-weighted cross-entropy loss is designed, which significantly increases the accuracy of this task. Extensive experiments on different datasets verify the effectiveness of the proposed approach, and comparisons with current state-of-the-art methods show that the proposed method outperforms the baseline models by achieving a better trade-off between accuracy and runtime, which makes our approach applicable to autonomous vehicles in real time. Published version 2020-06-16T06:09:56Z 2020-06-16T06:09:56Z 2019 Journal Article Wang, K., Yan, F., Zou, B., Tang, L., Yuan, Q., & Lv, C. (2019). Occlusion-free road segmentation leveraging semantics for autonomous vehicles. Sensors, 19(21), 4711-. doi:10.3390/s19214711 1424-8220 https://hdl.handle.net/10356/142140 10.3390/s19214711 31671547 2-s2.0-85074324216 21 19 en Sensors © 2019 The Authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). application/pdf
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Mechanical engineering
Autonomous Vehicles
Scene Understanding
spellingShingle Engineering::Mechanical engineering
Autonomous Vehicles
Scene Understanding
Wang, Kewei
Yan, Fuwu
Zou, Bin
Tang, Luqi
Yuan, Quan
Lv, Chen
Occlusion-free road segmentation leveraging semantics for autonomous vehicles
description Deep convolutional neural networks have led the trend in vision-based road detection; however, obtaining the full road area despite occlusion from monocular vision remains challenging due to the dynamic scenes in autonomous driving. Inferring the occluded road area requires a comprehensive understanding of the geometry and semantics of the visible scene. To this end, we create a small but effective dataset based on the KITTI dataset, named the KITTI-OFRS (KITTI occlusion-free road segmentation) dataset, and propose a lightweight, efficient, fully convolutional neural network called OFRSNet (occlusion-free road segmentation network) that learns to predict occluded portions of the road in the semantic domain by looking around foreground objects and at the visible road layout. In particular, a global context module is used to build up the down-sampling and joint context up-sampling blocks in our network, which improves its performance. Moreover, a spatially-weighted cross-entropy loss is designed, which significantly increases the accuracy of this task. Extensive experiments on different datasets verify the effectiveness of the proposed approach, and comparisons with current state-of-the-art methods show that the proposed method outperforms the baseline models by achieving a better trade-off between accuracy and runtime, which makes our approach applicable to autonomous vehicles in real time.
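The abstract mentions a spatially-weighted cross-entropy loss but this record does not reproduce its formula. As a minimal sketch of the general idea, assuming the weight map simply up-weights hard pixels (for example, occluded road near foreground objects), a per-pixel weighted cross-entropy in PyTorch could look like the code below; the function name, the weight-map construction, and the normalization are illustrative assumptions, not the paper's definition.

import torch
import torch.nn.functional as F

def spatially_weighted_cross_entropy(logits, target, weight_map):
    # logits:     (N, C, H, W) raw class scores
    # target:     (N, H, W) integer class labels
    # weight_map: (N, H, W) per-pixel weights; how it is built is an
    #             assumption here, not taken from the paper
    per_pixel = F.cross_entropy(logits, target, reduction="none")  # (N, H, W)
    # Scale each pixel's loss, then normalize by the total weight so the
    # result stays comparable to unweighted cross-entropy.
    return (per_pixel * weight_map).sum() / weight_map.sum().clamp(min=1e-8)

# Hypothetical usage: up-weight pixels labeled as occluded road (class 2).
logits = torch.randn(2, 3, 64, 64)            # background / visible road / occluded road
target = torch.randint(0, 3, (2, 64, 64))
weight_map = 1.0 + 3.0 * (target == 2).float()
loss = spatially_weighted_cross_entropy(logits, target, weight_map)

Normalizing by the total weight keeps the loss magnitude comparable to plain cross-entropy regardless of how the weight map is scaled.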
author2 School of Mechanical and Aerospace Engineering
author_facet School of Mechanical and Aerospace Engineering
Wang, Kewei
Yan, Fuwu
Zou, Bin
Tang, Luqi
Yuan, Quan
Lv, Chen
format Article
author Wang, Kewei
Yan, Fuwu
Zou, Bin
Tang, Luqi
Yuan, Quan
Lv, Chen
author_sort Wang, Kewei
title Occlusion-free road segmentation leveraging semantics for autonomous vehicles
title_short Occlusion-free road segmentation leveraging semantics for autonomous vehicles
title_full Occlusion-free road segmentation leveraging semantics for autonomous vehicles
title_fullStr Occlusion-free road segmentation leveraging semantics for autonomous vehicles
title_full_unstemmed Occlusion-free road segmentation leveraging semantics for autonomous vehicles
title_sort occlusion-free road segmentation leveraging semantics for autonomous vehicles
publishDate 2020
url https://hdl.handle.net/10356/142140
_version_ 1759857284669767680