Multi-view fusion-based 3D object detection for robot indoor scene perception

To autonomously move and manipulate objects in cluttered indoor environments, a service robot requires the ability to perceive 3D scenes. Although 3D object detection can provide an object-level description of the environment, a robot that detects objects continuously in a cluttered room inevitably encounters incomplete object observations, recurring detections of the same object, detection errors, and intersections between objects. To address these problems, we propose a two-stage 3D object detection algorithm that fuses multiple views of 3D object point clouds in the first stage and eliminates unreasonable and intersecting detections in the second stage. For each view, the robot performs 2D object semantic segmentation and obtains the corresponding 3D object point clouds. An unsupervised segmentation method, Locally Convex Connected Patches (LCCP), is then used to separate each object accurately from the background. Subsequently, Manhattan Frame estimation is applied to compute the main orientation of the object, from which the 3D object bounding box is obtained. To handle objects detected in multiple views, we construct an object database and propose an object fusion criterion to maintain it automatically, so that the same object observed in multiple views is fused and a more accurate bounding box can be calculated. Finally, we propose an object filtering approach based on prior knowledge to remove incorrect and intersecting objects from the database. Experiments on both the SceneNN dataset and a real indoor environment verify the stability and accuracy of 3D semantic segmentation and object bounding box detection with multi-view fusion.
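The LCCP segmentation step named above is available in the Point Cloud Library (PCL). The following is a minimal C++ sketch of that step for a single view already converted to a point cloud; the input file name and every parameter value are illustrative assumptions, not the settings reported in the paper.

// Sketch: separate objects from background with Locally Convex Connected
// Patches (LCCP) on one view, using PCL's supervoxel + LCCP pipeline.
// File names and parameter values are placeholders, not the paper's settings.
#include <cstdint>
#include <map>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/segmentation/supervoxel_clustering.h>
#include <pcl/segmentation/lccp_segmentation.h>

using PointT = pcl::PointXYZRGBA;

int main()
{
  pcl::PointCloud<PointT>::Ptr cloud(new pcl::PointCloud<PointT>);
  pcl::io::loadPCDFile<PointT>("object_view.pcd", *cloud);   // hypothetical input file

  // 1) Over-segment the view into supervoxels; LCCP later merges these.
  pcl::SupervoxelClustering<PointT> super(0.0075f, 0.03f);   // voxel / seed resolution
  super.setInputCloud(cloud);
  super.setColorImportance(0.0f);
  super.setSpatialImportance(1.0f);
  super.setNormalImportance(4.0f);
  std::map<std::uint32_t, pcl::Supervoxel<PointT>::Ptr> supervoxels;
  super.extract(supervoxels);
  std::multimap<std::uint32_t, std::uint32_t> adjacency;
  super.getSupervoxelAdjacency(adjacency);

  // 2) Merge supervoxels across locally convex connections; concave
  //    boundaries (typically object/background contacts) remain cuts.
  pcl::LCCPSegmentation<PointT> lccp;
  lccp.setConcavityToleranceThreshold(10.0f);                // degrees
  lccp.setSanityCheck(true);
  lccp.setKFactor(1);
  lccp.setInputSupervoxels(supervoxels, adjacency);
  lccp.setMinSegmentSize(3);
  lccp.segment();

  // 3) Re-label the supervoxel cloud with the final segment labels and save it.
  pcl::PointCloud<pcl::PointXYZL>::Ptr labeled = super.getLabeledCloud();
  lccp.relabelCloud(*labeled);
  pcl::io::savePCDFile("object_segments.pcd", *labeled);
  return 0;
}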

Bibliographic Details
Main Authors: Wang, Li, Li, Ruifeng, Sun, Jingwen, Liu, Xingxing, Zhao, Lijun, Seah, Hock Soon, Quah, Chee Kwang, Tandianus, Budianto
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2020
Subjects:
Online Access: https://hdl.handle.net/10356/142133
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-142133
record_format dspace
spelling sg-ntu-dr.10356-1421332020-06-16T05:21:41Z Multi-view fusion-based 3D object detection for robot indoor scene perception Wang, Li Li, Ruifeng Sun, Jingwen Liu, Xingxing Zhao, Lijun Seah, Hock Soon Quah, Chee Kwang Tandianus, Budianto School of Computer Science and Engineering Engineering::Computer science and engineering 3D Object Detection Multi-view Fusion To autonomously move and operate objects in cluttered indoor environments, a service robot requires the ability of 3D scene perception. Though 3D object detection can provide an object-level environmental description to fill this gap, a robot always encounters incomplete object observation, recurring detections of the same object, error in detection, or intersection between objects when conducting detection continuously in a cluttered room. To solve these problems, we propose a two-stage 3D object detection algorithm which is to fuse multiple views of 3D object point clouds in the first stage and to eliminate unreasonable and intersection detections in the second stage. For each view, the robot performs a 2D object semantic segmentation and obtains 3D object point clouds. Then, an unsupervised segmentation method called Locally Convex Connected Patches (LCCP) is utilized to segment the object accurately from the background. Subsequently, the Manhattan Frame estimation is implemented to calculate the main orientation of the object and subsequently, the 3D object bounding box can be obtained. To deal with the detected objects in multiple views, we construct an object database and propose an object fusion criterion to maintain it automatically. Thus, the same object observed in multi-view is fused together and a more accurate bounding box can be calculated. Finally, we propose an object filtering approach based on prior knowledge to remove incorrect and intersecting objects in the object dataset. Experiments are carried out on both SceneNN dataset and a real indoor environment to verify the stability and accuracy of 3D semantic segmentation and bounding box detection of the object with multi-view fusion. NRF (Natl Research Foundation, S’pore) Published version 2020-06-16T05:21:41Z 2020-06-16T05:21:41Z 2019 Journal Article Wang, L., Li, R., Sun, J., Liu, X., Zhao, L., Seah, H. S., . . . Tandianus, B. (2019). Multi-view fusion-based 3D object detection for robot indoor scene perception. Sensors, 19(19), 4092-. doi:10.3390/s19194092 1424-8220 https://hdl.handle.net/10356/142133 10.3390/s19194092 31546674 2-s2.0-85072586553 19 19 en Sensors © 2019 The Authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). application/pdf
institution Nanyang Technological University
building NTU Library
country Singapore
collection DR-NTU
language English
topic Engineering::Computer science and engineering
3D Object Detection
Multi-view Fusion
spellingShingle Engineering::Computer science and engineering
3D Object Detection
Multi-view Fusion
Wang, Li
Li, Ruifeng
Sun, Jingwen
Liu, Xingxing
Zhao, Lijun
Seah, Hock Soon
Quah, Chee Kwang
Tandianus, Budianto
Multi-view fusion-based 3D object detection for robot indoor scene perception
description To autonomously move and manipulate objects in cluttered indoor environments, a service robot requires the ability to perceive 3D scenes. Although 3D object detection can provide an object-level description of the environment, a robot that detects objects continuously in a cluttered room inevitably encounters incomplete object observations, recurring detections of the same object, detection errors, and intersections between objects. To address these problems, we propose a two-stage 3D object detection algorithm that fuses multiple views of 3D object point clouds in the first stage and eliminates unreasonable and intersecting detections in the second stage. For each view, the robot performs 2D object semantic segmentation and obtains the corresponding 3D object point clouds. An unsupervised segmentation method, Locally Convex Connected Patches (LCCP), is then used to separate each object accurately from the background. Subsequently, Manhattan Frame estimation is applied to compute the main orientation of the object, from which the 3D object bounding box is obtained. To handle objects detected in multiple views, we construct an object database and propose an object fusion criterion to maintain it automatically, so that the same object observed in multiple views is fused and a more accurate bounding box can be calculated. Finally, we propose an object filtering approach based on prior knowledge to remove incorrect and intersecting objects from the database. Experiments on both the SceneNN dataset and a real indoor environment verify the stability and accuracy of 3D semantic segmentation and object bounding box detection with multi-view fusion.
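The record does not spell out the fusion criterion that maintains the object database. As a purely illustrative assumption, the C++ sketch below merges two detections when they share a semantic label and their 3D bounding boxes overlap strongly (3D IoU); it uses axis-aligned boxes and a made-up threshold for simplicity, whereas the paper derives oriented boxes from Manhattan Frame estimation, so the actual rule may differ.

// Hypothetical multi-view fusion rule for the object database: detections of
// the same class whose 3D boxes overlap strongly are treated as one object.
// This is an assumed criterion for illustration, not the paper's exact rule.
#include <algorithm>
#include <string>
#include <vector>

struct Box3D { float min[3], max[3]; };   // axis-aligned box in the world frame

struct ObjectEntry {
  std::string label;   // semantic class from the per-view 2D segmentation
  Box3D box;           // bounding box fused over all views seen so far
  int views = 1;       // number of views in which the object was observed
};

static float iou3d(const Box3D& a, const Box3D& b) {
  float inter = 1.0f, va = 1.0f, vb = 1.0f;
  for (int i = 0; i < 3; ++i) {
    const float lo = std::max(a.min[i], b.min[i]);
    const float hi = std::min(a.max[i], b.max[i]);
    inter *= std::max(0.0f, hi - lo);
    va *= a.max[i] - a.min[i];
    vb *= b.max[i] - b.min[i];
  }
  return inter / (va + vb - inter);
}

// Insert one single-view detection: fuse it with an existing entry when the
// labels match and the boxes overlap enough, otherwise add a new object.
void updateDatabase(std::vector<ObjectEntry>& db, const ObjectEntry& det,
                    float iou_threshold = 0.3f) {    // threshold is an assumption
  for (ObjectEntry& obj : db) {
    if (obj.label == det.label && iou3d(obj.box, det.box) > iou_threshold) {
      for (int i = 0; i < 3; ++i) {                  // grow the fused box over both views
        obj.box.min[i] = std::min(obj.box.min[i], det.box.min[i]);
        obj.box.max[i] = std::max(obj.box.max[i], det.box.max[i]);
      }
      ++obj.views;
      return;
    }
  }
  db.push_back(det);
}

In the same spirit, the second-stage filtering described above could drop entries whose fused boxes still intersect other objects or have implausible dimensions for their class; the specific prior knowledge the paper relies on is not given in this record.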
author2 School of Computer Science and Engineering
author_facet School of Computer Science and Engineering
Wang, Li
Li, Ruifeng
Sun, Jingwen
Liu, Xingxing
Zhao, Lijun
Seah, Hock Soon
Quah, Chee Kwang
Tandianus, Budianto
format Article
author Wang, Li
Li, Ruifeng
Sun, Jingwen
Liu, Xingxing
Zhao, Lijun
Seah, Hock Soon
Quah, Chee Kwang
Tandianus, Budianto
author_sort Wang, Li
title Multi-view fusion-based 3D object detection for robot indoor scene perception
title_short Multi-view fusion-based 3D object detection for robot indoor scene perception
title_full Multi-view fusion-based 3D object detection for robot indoor scene perception
title_fullStr Multi-view fusion-based 3D object detection for robot indoor scene perception
title_full_unstemmed Multi-view fusion-based 3D object detection for robot indoor scene perception
title_sort multi-view fusion-based 3d object detection for robot indoor scene perception
publishDate 2020
url https://hdl.handle.net/10356/142133
_version_ 1681056979959676928