T3DNet: compressing point cloud models for lightweight 3-D recognition

Bibliographic Details
Main Authors: Yang, Zhiyuan, Zhou, Yunjiao, Xie, Lihua, Yang, Jianfei
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language:English
Published: 2025
Online Access:https://hdl.handle.net/10356/182680
Institution: Nanyang Technological University
Description
Summary: The 3-D point cloud has been widely used in many mobile application scenarios, including autonomous driving and 3-D sensing on mobile devices. However, existing 3-D point cloud models tend to be large and cumbersome, making them hard to deploy on edge devices due to their high memory requirements and non-real-time latency. There has been little research on how to compress 3-D point cloud models into lightweight models. In this article, we propose a method called T3DNet (tiny 3-D network with augmentation and distillation) to address this issue. We find that the tiny model after network augmentation is much easier for a teacher to distill. Instead of gradually reducing the parameters through techniques such as pruning or quantization, we predefine a tiny model and improve its performance through auxiliary supervision from augmented networks and the original model. We evaluate our method on several public datasets, including ModelNet40, ShapeNet, and ScanObjectNN. Our method achieves high compression rates without significant accuracy loss, delivering state-of-the-art performance on the three datasets against existing methods. Remarkably, our T3DNet is 58× smaller and 54× faster than the original model with only a 1.4% accuracy drop on the ModelNet40 dataset. Our code is available at https://github.com/Zhiyuan002/T3DNet.
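
To illustrate the kind of training objective the abstract describes (a predefined tiny model supervised jointly by the original teacher model and by an augmented network), the sketch below shows one plausible combined loss in PyTorch. It is a minimal illustration only: the function name t3dnet_style_loss, the temperature T, and the weights alpha and beta are assumptions for exposition, not the authors' exact formulation; see the linked repository for the actual implementation.

    import torch.nn.functional as F

    def t3dnet_style_loss(student_logits, teacher_logits, aug_logits, labels,
                          T=4.0, alpha=0.5, beta=0.5):
        """Hypothetical combined objective for a tiny point cloud model:
        task loss + distillation from the original (teacher) model +
        auxiliary supervision from an augmented network."""
        # Standard cross-entropy on ground-truth labels.
        ce = F.cross_entropy(student_logits, labels)

        # Soft-target distillation from the frozen original model.
        kd = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)

        # Auxiliary supervision: align the tiny model's predictions with
        # those of the augmented (larger) network built on top of it.
        aux = F.kl_div(
            F.log_softmax(student_logits, dim=1),
            F.softmax(aug_logits.detach(), dim=1),
            reduction="batchmean",
        )

        return ce + alpha * kd + beta * aux

In practice such a loss would be evaluated each training step on logits from the tiny model, the frozen teacher, and the augmented network, with the loss weights chosen by validation.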