Fusion of Velodyne and camera data for scene parsing

Bibliographic Details
Main Authors: Zhao, Gangqiang, Xiao, Xuhong, Yuan, Junsong
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2013
Subjects:
Online Access:https://hdl.handle.net/10356/100802
http://hdl.handle.net/10220/17984
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6289941
Institution: Nanyang Technological University
Description
Summary: The fusion of information gathered from multiple sources is essential for building a comprehensive situation picture for autonomous ground vehicles. In this paper, an approach that performs scene classification and data fusion for a 3D-LIDAR scanner (Velodyne HDL-64E) and a video camera is described. A geometry segmentation algorithm is proposed to detect obstacles and the ground area in the data collected by the Velodyne. Meanwhile, the corresponding image collected by the video camera is classified patch by patch into more detailed categories. The final situation picture is obtained by fusing the classification results of the Velodyne data and those of the images within a fuzzy logic inference framework. The proposed approach was evaluated on datasets collected by our autonomous ground vehicle testbed in a rural area. The fused results are more reliable and more complete than those provided by the individual sensors.
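The summary describes the fuzzy-logic fusion step only at a high level. The sketch below (Python) is a minimal illustration of how coarse LIDAR segmentation memberships and per-patch camera classification scores might be combined with fuzzy min/max rules; the class names, compatibility values, and operators are assumptions made for illustration and are not the authors' implementation.

# Illustrative sketch only: fuse fuzzy memberships from a LIDAR
# ground/obstacle segmentation with per-patch camera class scores.
# All class names and compatibility weights below are assumptions.

import numpy as np

# Hypothetical detailed categories from the camera classifier.
CAMERA_CLASSES = ["road", "grass", "tree", "vehicle", "building"]

# Hypothetical coarse categories from the LIDAR geometry segmentation,
# with an illustrative compatibility (fuzzy rule) weight in [0, 1] saying
# how strongly each LIDAR category supports each camera category.
COMPATIBILITY = {
    "ground":   np.array([1.0, 0.9, 0.1, 0.1, 0.0]),
    "obstacle": np.array([0.0, 0.1, 0.9, 1.0, 1.0]),
}

def fuse_cell(lidar_membership, camera_scores):
    """Fuse one grid cell / image patch.

    lidar_membership: fuzzy membership degrees for the LIDAR categories,
                      e.g. {"ground": 0.8, "obstacle": 0.2}.
    camera_scores:    per-class confidences from the camera classifier,
                      aligned with CAMERA_CLASSES.
    Returns the fused (defuzzified) label.
    """
    fused = np.zeros(len(CAMERA_CLASSES))
    for lidar_class, mu in lidar_membership.items():
        # Fuzzy AND (min) of LIDAR membership, rule compatibility,
        # and camera confidence; fuzzy OR (max) across the rules.
        rule_output = np.minimum(np.minimum(mu, COMPATIBILITY[lidar_class]),
                                 camera_scores)
        fused = np.maximum(fused, rule_output)
    # Defuzzify by taking the class with the largest aggregated degree.
    return CAMERA_CLASSES[int(np.argmax(fused))]

if __name__ == "__main__":
    # A patch the camera scores as probably a vehicle, lying in a region
    # the LIDAR segmentation marks mostly as obstacle.
    lidar = {"ground": 0.2, "obstacle": 0.8}
    camera = np.array([0.05, 0.05, 0.10, 0.70, 0.10])
    print(fuse_cell(lidar, camera))  # -> "vehicle"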