T3DNet: compressing point cloud models for lightweight 3-D recognition
The 3-D point cloud has been widely used in many mobile application scenarios, including autonomous driving and 3-D sensing on mobile devices. However, existing 3-D point cloud models tend to be large and cumbersome, making them hard to deploy on edge devices due to their high memory requirements and non-real-time latency. There has been little research on how to compress 3-D point cloud models into lightweight models. In this article, we propose a method called T3DNet (tiny 3-D network with augmentation and distillation) to address this issue. We find that, after network augmentation, the tiny model is much easier for a teacher to distill. Instead of gradually reducing the parameters through techniques such as pruning or quantization, we predefine a tiny model and improve its performance through auxiliary supervision from augmented networks and the original model. We evaluate our method on several public datasets, including ModelNet40, ShapeNet, and ScanObjectNN. Our method achieves high compression rates without significant accuracy loss, reaching state-of-the-art performance on all three datasets compared with existing methods. Notably, our T3DNet is 58× smaller and 54× faster than the original model, with only a 1.4% accuracy drop on the ModelNet40 dataset. Our code is available at https://github.com/Zhiyuan002/T3DNet.

Main Authors: | Yang, Zhiyuan; Zhou, Yunjiao; Xie, Lihua; Yang, Jianfei
---|---
Other Authors: | School of Electrical and Electronic Engineering; School of Mechanical and Aerospace Engineering
Format: | Article
Language: | English
Published: | 2025
Subjects: | Engineering; 3-D model compression; Knowledge distillation
Online Access: | https://hdl.handle.net/10356/182680
Institution: | Nanyang Technological University
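The abstract describes auxiliary supervision from the original (teacher) model onto a predefined tiny student. As a rough illustration of that idea only (not the authors' actual T3DNet training recipe), a generic knowledge-distillation objective in PyTorch might look like the sketch below; the function name, temperature `T`, and mixing weight `alpha` are assumptions introduced for illustration.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Generic Hinton-style distillation objective (illustrative only).

    Combines the usual cross-entropy on ground-truth labels with a KL term
    between temperature-softened teacher and student distributions. T and
    alpha are assumed values, not settings taken from the T3DNet paper.
    """
    # Standard supervised term on the hard labels.
    ce = F.cross_entropy(student_logits, labels)
    # Softened distributions; the KL term is scaled by T^2 so its gradient
    # magnitude stays comparable as the temperature changes.
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * kd + (1.0 - alpha) * ce
```

In the paper's setup, the tiny student additionally receives auxiliary supervision from augmented versions of the network; that part is omitted from this sketch.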
id
sg-ntu-dr.10356-182680
record_format
dspace
spelling
sg-ntu-dr.10356-1826802025-02-17T04:29:07Z T3DNet: compressing point cloud models for lightweight 3-D recognition Yang, Zhiyuan Zhou, Yunjiao Xie, Lihua Yang, Jianfei School of Electrical and Electronic Engineering School of Mechanical and Aerospace Engineering Engineering 3-D model compression Knowledge distillation The 3-D point cloud has been widely used in many mobile application scenarios, including autonomous driving and 3-D sensing on mobile devices. However, existing 3-D point cloud models tend to be large and cumbersome, making them hard to deploy on edge devices due to their high memory requirements and non-real-time latency. There has been little research on how to compress 3-D point cloud models into lightweight models. In this article, we propose a method called T3DNet (tiny 3-D network with augmentation and distillation) to address this issue. We find that, after network augmentation, the tiny model is much easier for a teacher to distill. Instead of gradually reducing the parameters through techniques such as pruning or quantization, we predefine a tiny model and improve its performance through auxiliary supervision from augmented networks and the original model. We evaluate our method on several public datasets, including ModelNet40, ShapeNet, and ScanObjectNN. Our method achieves high compression rates without significant accuracy loss, reaching state-of-the-art performance on all three datasets compared with existing methods. Notably, our T3DNet is 58× smaller and 54× faster than the original model, with only a 1.4% accuracy drop on the ModelNet40 dataset. Our code is available at https://github.com/Zhiyuan002/T3DNet. Nanyang Technological University This work was supported by a Start-Up Grant at Nanyang Technological University. 2025-02-17T04:29:07Z 2025-02-17T04:29:07Z 2025 Journal Article Yang, Z., Zhou, Y., Xie, L. & Yang, J. (2025). T3DNet: compressing point cloud models for lightweight 3-D recognition. IEEE Transactions On Cybernetics, 55(2), 526-536. https://dx.doi.org/10.1109/TCYB.2024.3487220 2168-2267 https://hdl.handle.net/10356/182680 10.1109/TCYB.2024.3487220 2-s2.0-85210276953 2 55 526 536 en NTU SUG IEEE Transactions on Cybernetics © 2024 IEEE. All rights reserved.
institution
Nanyang Technological University
building
NTU Library
continent
Asia
country
Singapore
content_provider
NTU Library
collection
DR-NTU
language
English
topic
Engineering; 3-D model compression; Knowledge distillation
spellingShingle
Engineering; 3-D model compression; Knowledge distillation; Yang, Zhiyuan; Zhou, Yunjiao; Xie, Lihua; Yang, Jianfei; T3DNet: compressing point cloud models for lightweight 3-D recognition
description
The 3-D point cloud has been widely used in many mobile application scenarios, including autonomous driving and 3-D sensing on mobile devices. However, existing 3-D point cloud models tend to be large and cumbersome, making them hard to deploy on edge devices due to their high memory requirements and non-real-time latency. There has been little research on how to compress 3-D point cloud models into lightweight models. In this article, we propose a method called T3DNet (tiny 3-D network with augmentation and distillation) to address this issue. We find that, after network augmentation, the tiny model is much easier for a teacher to distill. Instead of gradually reducing the parameters through techniques such as pruning or quantization, we predefine a tiny model and improve its performance through auxiliary supervision from augmented networks and the original model. We evaluate our method on several public datasets, including ModelNet40, ShapeNet, and ScanObjectNN. Our method achieves high compression rates without significant accuracy loss, reaching state-of-the-art performance on all three datasets compared with existing methods. Notably, our T3DNet is 58× smaller and 54× faster than the original model, with only a 1.4% accuracy drop on the ModelNet40 dataset. Our code is available at https://github.com/Zhiyuan002/T3DNet.
author2
School of Electrical and Electronic Engineering
author_facet
School of Electrical and Electronic Engineering; Yang, Zhiyuan; Zhou, Yunjiao; Xie, Lihua; Yang, Jianfei
format
Article
author
Yang, Zhiyuan; Zhou, Yunjiao; Xie, Lihua; Yang, Jianfei
author_sort
Yang, Zhiyuan
title
T3DNet: compressing point cloud models for lightweight 3-D recognition
title_short
T3DNet: compressing point cloud models for lightweight 3-D recognition
title_full
T3DNet: compressing point cloud models for lightweight 3-D recognition
title_fullStr
T3DNet: compressing point cloud models for lightweight 3-D recognition
title_full_unstemmed
T3DNet: compressing point cloud models for lightweight 3-D recognition
title_sort
t3dnet: compressing point cloud models for lightweight 3-d recognition
publishDate
2025
url
https://hdl.handle.net/10356/182680
_version_
1825619704857755648