Fusion of Velodyne and camera data for scene parsing
The fusion of information gathered from multiple sources is essential to building a comprehensive situation picture for autonomous ground vehicles. This paper describes an approach that performs scene classification and data fusion for a 3D LIDAR scanner (Velodyne HDL-64E) and a video camera. A geometry segmentation algorithm is proposed to detect obstacles and the ground area in the data collected by the Velodyne. Meanwhile, the corresponding image captured by the video camera is classified patch by patch into more detailed categories. The final situation picture is obtained by fusing the classification results of the Velodyne data and of the images within a fuzzy logic inference framework. The proposed approach was evaluated on datasets collected by our autonomous ground vehicle testbed in a rural area. The fused results are more reliable and more complete than those provided by the individual sensors.
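As a purely illustrative sketch of the kind of per-patch fuzzy-logic fusion the abstract describes (not the paper's actual algorithm): the class names, the grouping of camera classes into ground/obstacle, and the min/max combination rule below are assumptions made for the example.

```python
# Hypothetical sketch: combine per-patch camera class memberships with LIDAR
# ground/obstacle evidence using simple fuzzy (min/max) rules. Class names,
# rule choices, and defuzzification are illustrative assumptions only.

def fuse_patch(camera_memberships, lidar_ground, lidar_obstacle):
    """Fuse one image patch.

    camera_memberships: dict mapping a class name (e.g. 'road', 'tree') to a
        membership degree in [0, 1] from the patch-wise image classifier.
    lidar_ground, lidar_obstacle: degrees in [0, 1] that the LIDAR geometry
        segmentation assigns to 'ground' and 'obstacle' for the points
        projected into this patch.
    Returns the fused class label and its degree.
    """
    # Assumed grouping of the detailed camera classes into the two LIDAR categories.
    ground_classes = {'road', 'grass', 'soil'}
    obstacle_classes = {'tree', 'vehicle', 'building', 'person'}

    fused = {}
    for cls, mu_cam in camera_memberships.items():
        if cls in ground_classes:
            mu_lidar = lidar_ground
        elif cls in obstacle_classes:
            mu_lidar = lidar_obstacle
        else:
            mu_lidar = 0.5  # no LIDAR evidence either way
        # Fuzzy AND (min) of the two sources; a real rule base could be richer.
        fused[cls] = min(mu_cam, mu_lidar)

    # Defuzzify by taking the class with the largest fused membership.
    label = max(fused, key=fused.get)
    return label, fused[label]


if __name__ == "__main__":
    cam = {'road': 0.7, 'grass': 0.2, 'tree': 0.6, 'vehicle': 0.1}
    print(fuse_patch(cam, lidar_ground=0.9, lidar_obstacle=0.1))
    # -> ('road', 0.7): camera and LIDAR agree the patch is drivable ground
```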
Main Authors: | Zhao, Gangqiang; Xiao, Xuhong; Yuan, Junsong |
Other Authors: | School of Electrical and Electronic Engineering |
Format: | Conference or Workshop Item |
Language: | English |
Published: | 2013 |
Subjects: | DRNTU::Engineering::Electrical and electronic engineering |
Online Access: | https://hdl.handle.net/10356/100802 ; http://hdl.handle.net/10220/17984 ; http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6289941 |
Institution: | Nanyang Technological University |
id | sg-ntu-dr.10356-100802
record_format | dspace
conference | International Conference on Information Fusion (15th : 2012 : Singapore)
type | Conference Paper (2012)
citation | Zhao, G., Xiao, X., & Yuan, J. (2012). Fusion of Velodyne and camera data for scene parsing. 15th International Conference on Information Fusion (FUSION), pp. 1172-1179.
description | Accepted version; 8 p.; application/pdf
dates | Deposited 2013-12-02; last modified 2019-12-06
rights | © 2012 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6289941.
funding | This work was supported in part by the DSO-NTU project M4060969.040, as well as by a Nanyang Assistant Professorship to Dr. Junsong Yuan.
institution | Nanyang Technological University
building | NTU Library
country | Singapore
collection | DR-NTU
language | English
topic | DRNTU::Engineering::Electrical and electronic engineering
author2 | School of Electrical and Electronic Engineering
format | Conference or Workshop Item
author | Zhao, Gangqiang; Xiao, Xuhong; Yuan, Junsong
title | Fusion of Velodyne and camera data for scene parsing
publishDate | 2013
url | https://hdl.handle.net/10356/100802 ; http://hdl.handle.net/10220/17984 ; http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6289941