Multi point-voxel convolution (MPVConv) for deep learning on point clouds


Bibliographic Details
Main Authors: Zhou, Wei, Zhang, Xiaodan, Hao, Xingxing, Wang, Dekui, He, Ying
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Online Access:https://hdl.handle.net/10356/172090
Institution: Nanyang Technological University
Description
Summary: Existing 3D deep learning methods adopt either individual point-based features or local neighboring voxel-based features, and demonstrate great potential for processing 3D data. However, point-based models are inefficient due to the unordered nature of point clouds, and voxel-based models suffer from large information loss. Motivated by the success of recent point-voxel representations such as PVCNN and DRINet, we propose a new convolutional neural network, called Multi Point-Voxel Convolution (MPVConv), for deep learning on point clouds. Integrating the advantages of both voxel- and point-based methods, MPVConv effectively strengthens neighborhood aggregation among point-based features and also promotes independence among voxel-based features. Extensive experiments on benchmark datasets such as ShapeNet Part, S3DIS and KITTI across various tasks show that MPVConv improves the accuracy of the backbone (PointNet) by up to 36%, and achieves higher accuracy than the voxel-based model with up to 34× speedups. In addition, MPVConv outperforms state-of-the-art point-based models with up to 8× speedups. Moreover, MPVConv needs only 65% of the GPU memory required by the latest point-voxel-based model (DRINet). The source code of our method is available at https://github.com/NWUzhouwei/MPVConv.
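To make the point-voxel idea the abstract builds on concrete, the following is a minimal NumPy sketch of a generic voxelize → aggregate → devoxelize → fuse pipeline, in the spirit of PVCNN-style blocks. It is not the authors' MPVConv implementation: the learned 3D convolution on the grid and the per-point MLP branch are omitted or stubbed, and all function names here are illustrative.

```python
import numpy as np

def voxelize(points, feats, resolution):
    """Scatter-average point features into a dense voxel grid.
    points: (N, 3) coordinates normalized to [0, 1); feats: (N, C)."""
    idx = np.clip((points * resolution).astype(int), 0, resolution - 1)
    # flatten 3D voxel indices to a single linear index per point
    flat = (idx[:, 0] * resolution + idx[:, 1]) * resolution + idx[:, 2]
    num_voxels, C = resolution ** 3, feats.shape[1]
    grid = np.zeros((num_voxels, C))
    counts = np.zeros(num_voxels)
    np.add.at(grid, flat, feats)   # sum features falling into each voxel
    np.add.at(counts, flat, 1)
    grid /= np.maximum(counts, 1)[:, None]  # average (avoid div by zero)
    return grid.reshape(resolution, resolution, resolution, C), flat

def devoxelize(grid, flat, resolution):
    """Gather voxel features back to points (nearest-voxel lookup;
    real point-voxel models typically use trilinear interpolation)."""
    return grid.reshape(resolution ** 3, -1)[flat]

# Toy example: fuse the voxel branch with a (stubbed) point branch.
rng = np.random.default_rng(0)
pts = rng.random((128, 3))          # 128 points in the unit cube
feats = rng.random((128, 8))        # 8-dim per-point features
grid, flat = voxelize(pts, feats, resolution=4)
# A 3D convolution over `grid` would go here in a real block.
voxel_feats = devoxelize(grid, flat, resolution=4)
point_feats = feats                 # stand-in for a per-point MLP branch
fused = voxel_feats + point_feats   # additive fusion of the two branches
```

The voxel branch captures coarse neighborhood context cheaply on a regular grid, while the point branch preserves fine per-point detail lost to quantization; fusing them is what point-voxel models exploit.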