RISurConv: Rotation invariant surface attention-augmented convolutions for 3D point cloud classification and segmentation

Bibliographic Details
Main Authors: ZHANG, Zhiyuan; YANG, Licheng; XIANG, Zhiyu
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Online Access: https://ink.library.smu.edu.sg/sis_research/9747
https://ink.library.smu.edu.sg/context/sis_research/article/10747/viewcontent/2408.06110v1.pdf
Institution: Singapore Management University
Description
Summary: Despite the progress in 3D point cloud deep learning, most prior works focus on learning features that are invariant to translation and point permutation, and very limited effort has been devoted to rotation invariance. Several recent studies achieve rotation invariance at the cost of lower accuracy. In this work, we close this gap by proposing a novel yet effective rotation-invariant architecture for 3D point cloud classification and segmentation. Instead of traditional pointwise operations, we construct local triangle surfaces to capture more detailed surface structure, from which we extract highly expressive rotation-invariant surface properties. These properties are then integrated into an attention-augmented convolution operator, named RISurConv, which generates refined attention features via self-attention layers. Based on RISurConv, we build an effective neural network for 3D point cloud analysis that is invariant to arbitrary rotations while maintaining high accuracy. We verify the performance on various benchmarks, surpassing the previous state of the art by a large margin: we achieve an overall accuracy of 96.0% (+4.7%) on ModelNet40, 93.1% (+12.8%) on ScanObjectNN, and class accuracies of 91.5% (+3.6%), 82.7% (+5.1%), and 78.5% (+9.2%) on the three categories of the FG3D dataset for the fine-grained classification task. Additionally, we achieve 81.5% (+1.0%) mIoU on ShapeNet for the segmentation task.
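The abstract's key idea is that properties of local triangle surfaces (rather than raw point coordinates) are unchanged by rotation, so features built from them are rotation invariant by construction. The NumPy sketch below illustrates why this holds for one triangle; the function name and the particular feature set (edge lengths, an interior angle, the area) are illustrative assumptions for exposition, not the paper's actual RISurConv features.

import numpy as np

def triangle_invariant_features(p0, p1, p2):
    # Edge vectors of the local triangle
    e01, e12, e20 = p1 - p0, p2 - p1, p0 - p2
    # Edge lengths: unchanged by any rotation or translation
    l01 = np.linalg.norm(e01)
    l12 = np.linalg.norm(e12)
    l20 = np.linalg.norm(e20)
    # Cosine of the interior angle at p0 (dot products are rotation invariant)
    cos_a0 = np.dot(e01, -e20) / (l01 * l20 + 1e-12)
    # Triangle area via the cross product (its norm is rotation invariant)
    area = 0.5 * np.linalg.norm(np.cross(e01, -e20))
    return np.array([l01, l12, l20, cos_a0, area])

# Sanity check: apply a random 3D rotation and verify the features agree.
rng = np.random.default_rng(0)
tri = rng.standard_normal((3, 3))                  # one triangle, rows are vertices
q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal matrix
rotated = tri @ q.T
assert np.allclose(triangle_invariant_features(*tri),
                   triangle_invariant_features(*rotated))

In the architecture the abstract describes, such per-triangle invariants would serve as inputs to the attention-augmented convolution, so the network's output cannot depend on the orientation of the input cloud.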