Global context aware convolutions for 3D point cloud understanding

Recent advances in deep learning for 3D point clouds have shown great promise in scene understanding tasks, thanks to the introduction of convolution operators that consume 3D point clouds directly in a neural network. Point cloud data, however, can have arbitrary rotations, especially those acquire...

Bibliographic Details
Main Authors: ZHANG, Zhiyuan, HUA, Binh-Son, CHEN, Wei, TIAN, Yibin, YEUNG, Sai-Kit
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2020
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/7941
https://ink.library.smu.edu.sg/context/sis_research/article/8944/viewcontent/812800a210.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-8944
record_format dspace
spelling sg-smu-ink.sis_research-8944 2023-07-20T07:49:40Z Global context aware convolutions for 3D point cloud understanding ZHANG, Zhiyuan HUA, Binh-Son CHEN, Wei TIAN, Yibin YEUNG, Sai-Kit Recent advances in deep learning for 3D point clouds have shown great promise in scene understanding tasks, thanks to the introduction of convolution operators that consume 3D point clouds directly in a neural network. Point cloud data, however, can have arbitrary rotations, especially those acquired from 3D scanning. Recent works show that it is possible to design point cloud convolutions with the rotation-invariance property, but such methods generally do not perform as well as convolutions that are only translation-invariant. We found that a key reason is that, compared to point coordinates, the rotation-invariant features consumed by a point cloud convolution are not as distinctive. To address this problem, we propose a novel convolution operator that enhances feature distinction by integrating global context information from the input point cloud into the convolution. To this end, a globally weighted local reference frame is constructed in each point neighborhood, in which the local point set is decomposed into bins. Anchor points are generated in each bin to represent global shape features. A convolution then transforms the points and anchor features into the final rotation-invariant features. We conduct experiments on point cloud classification, part segmentation, shape retrieval, and normal estimation to evaluate our convolution, which achieves state-of-the-art accuracy under challenging rotations. 2020-11-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/7941 info:doi/10.1109/3dv50981.2020.00031 https://ink.library.smu.edu.sg/context/sis_research/article/8944/viewcontent/812800a210.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Artificial Intelligence and Robotics Graphics and Human Computer Interfaces
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Artificial Intelligence and Robotics
Graphics and Human Computer Interfaces
spellingShingle Artificial Intelligence and Robotics
Graphics and Human Computer Interfaces
ZHANG, Zhiyuan
HUA, Binh-Son
CHEN, Wei
TIAN, Yibin
YEUNG, Sai-Kit
Global context aware convolutions for 3D point cloud understanding
description Recent advances in deep learning for 3D point clouds have shown great promise in scene understanding tasks, thanks to the introduction of convolution operators that consume 3D point clouds directly in a neural network. Point cloud data, however, can have arbitrary rotations, especially those acquired from 3D scanning. Recent works show that it is possible to design point cloud convolutions with the rotation-invariance property, but such methods generally do not perform as well as convolutions that are only translation-invariant. We found that a key reason is that, compared to point coordinates, the rotation-invariant features consumed by a point cloud convolution are not as distinctive. To address this problem, we propose a novel convolution operator that enhances feature distinction by integrating global context information from the input point cloud into the convolution. To this end, a globally weighted local reference frame is constructed in each point neighborhood, in which the local point set is decomposed into bins. Anchor points are generated in each bin to represent global shape features. A convolution then transforms the points and anchor features into the final rotation-invariant features. We conduct experiments on point cloud classification, part segmentation, shape retrieval, and normal estimation to evaluate our convolution, which achieves state-of-the-art accuracy under challenging rotations.
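The description above outlines two geometric steps before the learned convolution: build a globally weighted local reference frame (LRF) in each point neighborhood, split the neighborhood into bins, and take per-bin anchor points. The NumPy snippet below is a minimal sketch of those steps only, not the authors' implementation; the distance-to-cloud-centroid weighting, octant binning, and 32-point neighborhood are illustrative assumptions, and the convolution that consumes the point and anchor features is omitted.

import numpy as np

def local_reference_frame(center, neighbors, cloud):
    # Weighted-covariance LRF. Assumption: each neighbor is weighted by its
    # distance to the centroid of the whole cloud, as a stand-in for the
    # paper's global weighting scheme.
    diffs = neighbors - center                               # (k, 3)
    w = np.linalg.norm(neighbors - cloud.mean(axis=0), axis=1)
    cov = (diffs * w[:, None]).T @ diffs / w.sum()           # weighted covariance
    _, vecs = np.linalg.eigh(cov)                            # ascending eigenvalues
    frame = vecs[:, ::-1].copy()                             # column 0 = largest variance
    for i in range(3):                                       # disambiguate axis signs
        if (diffs @ frame[:, i]).sum() < 0:
            frame[:, i] *= -1.0
    return frame                                             # (3, 3), columns = LRF axes

def bin_anchors(center, neighbors, frame, n_bins=8):
    # Express neighbors in the LRF; these coordinates are unchanged when the
    # whole cloud is rotated, because the frame rotates with it.
    local = (neighbors - center) @ frame                     # (k, 3)
    octant = (local > 0).astype(int) @ np.array([1, 2, 4])   # bin id in 0..7
    anchors = np.zeros((n_bins, 3))
    for b in range(n_bins):
        pts = local[octant == b]
        if len(pts) > 0:
            anchors[b] = pts.mean(axis=0)                    # anchor = bin centroid
    return local, anchors

# Toy usage on a random cloud with a 32-point neighborhood.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(1024, 3))
q = cloud[0]
knn = cloud[np.argsort(np.linalg.norm(cloud - q, axis=1))[1:33]]
frame = local_reference_frame(q, knn, cloud)
local_pts, anchors = bin_anchors(q, knn, frame)
print(local_pts.shape, anchors.shape)                        # (32, 3) (8, 3)

Because both the neighbor coordinates and the anchors are expressed in the LRF, they stay the same under a global rotation of the input; the convolution described in the abstract then maps these rotation-invariant inputs to the final features.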
format text
author ZHANG, Zhiyuan
HUA, Binh-Son
CHEN, Wei
TIAN, Yibin
YEUNG, Sai-Kit
author_facet ZHANG, Zhiyuan
HUA, Binh-Son
CHEN, Wei
TIAN, Yibin
YEUNG, Sai-Kit
author_sort ZHANG, Zhiyuan
title Global context aware convolutions for 3D point cloud understanding
title_short Global context aware convolutions for 3D point cloud understanding
title_full Global context aware convolutions for 3D point cloud understanding
title_fullStr Global context aware convolutions for 3D point cloud understanding
title_full_unstemmed Global context aware convolutions for 3D point cloud understanding
title_sort global context aware convolutions for 3d point cloud understanding
publisher Institutional Knowledge at Singapore Management University
publishDate 2020
url https://ink.library.smu.edu.sg/sis_research/7941
https://ink.library.smu.edu.sg/context/sis_research/article/8944/viewcontent/812800a210.pdf
_version_ 1772829246344921088