Invariant training 2D-3D joint hard samples for few-shot point cloud recognition
We tackle the data scarcity challenge in few-shot point cloud recognition of 3D objects by using a joint prediction from a conventional 3D model and a well-pretrained 2D model. Surprisingly, such an ensemble, though seemingly trivial, has hardly been shown effective in recent 2D-3D models. We find that the crux is the less effective training of the "joint hard samples", which receive high-confidence predictions on different wrong labels from the two models, implying that the 2D and 3D models do not collaborate well. To this end, our proposed invariant training strategy, called INVJOINT, not only puts more emphasis on training the hard samples, but also seeks invariance between the conflicting 2D and 3D ambiguous predictions. INVJOINT can thus learn more collaborative 2D and 3D representations for a better ensemble. Extensive experiments on 3D shape classification with the widely adopted ModelNet10/40, ScanObjectNN and Toys4K, and on shape retrieval with ShapeNet-Core, validate the superiority of our INVJOINT.
Main Authors: | YI, Xuanyu; DENG, Jiajun; SUN, Qianru; HUA, Xian-Sheng; LIM, Joo-Hwee; ZHANG, Hanwang |
---|---|
Format: | text (application/pdf) |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2023 |
Subjects: | Databases and Information Systems |
Online Access: | https://ink.library.smu.edu.sg/sis_research/8389 ; https://ink.library.smu.edu.sg/context/sis_research/article/9392/viewcontent/Yi_Invariant_Training_2D_3D_Joint_Hard_Samples_for_Few_Shot_Point_Cloud_ICCV_2023_paper__1_.pdf |
Institution: | Singapore Management University |
Collection: | Research Collection School Of Computing and Information Systems, InK@SMU (SMU Libraries) |
License: | http://creativecommons.org/licenses/by-nc-nd/4.0/ |
Record ID: | sg-smu-ink.sis_research-9392 |
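For readers who want a concrete picture of the ensemble and the "joint hard samples" described in the abstract, below is a minimal PyTorch sketch. It assumes per-branch classification logits (`logits_2d`, `logits_3d`) coming from a pretrained 2D image model and a 3D point cloud model; the confidence threshold, the hard-sample up-weighting, and the symmetric KL "invariance" term are illustrative assumptions, not the paper's exact INVJOINT formulation or released code.

```python
# Sketch of a late-fusion 2D-3D ensemble, "joint hard sample" selection, and an
# invariance-style loss. Thresholds and weights below are illustrative assumptions.
import torch
import torch.nn.functional as F


def joint_prediction(logits_2d: torch.Tensor, logits_3d: torch.Tensor) -> torch.Tensor:
    """Late-fusion ensemble: average the class probabilities of the two branches."""
    probs_2d = F.softmax(logits_2d, dim=-1)
    probs_3d = F.softmax(logits_3d, dim=-1)
    return 0.5 * (probs_2d + probs_3d)


def joint_hard_mask(logits_2d, logits_3d, labels, conf_thresh=0.7):
    """Flag samples where BOTH branches are confident, both are wrong, and they
    disagree with each other -- one reading of the "joint hard samples"."""
    probs_2d = F.softmax(logits_2d, dim=-1)
    probs_3d = F.softmax(logits_3d, dim=-1)
    conf_2d, pred_2d = probs_2d.max(dim=-1)
    conf_3d, pred_3d = probs_3d.max(dim=-1)
    both_confident = (conf_2d > conf_thresh) & (conf_3d > conf_thresh)
    both_wrong = (pred_2d != labels) & (pred_3d != labels)
    disagree = pred_2d != pred_3d
    return both_confident & both_wrong & disagree


def invariant_loss(logits_2d, logits_3d, labels, hard_weight=2.0):
    """Cross-entropy on both branches, up-weighted on joint hard samples, plus a
    symmetric KL term pulling the two branches' predictions toward agreement."""
    mask = joint_hard_mask(logits_2d, logits_3d, labels)
    weights = 1.0 + (hard_weight - 1.0) * mask.float()
    ce = (F.cross_entropy(logits_2d, labels, reduction="none")
          + F.cross_entropy(logits_3d, labels, reduction="none"))
    log_p2d = F.log_softmax(logits_2d, dim=-1)
    log_p3d = F.log_softmax(logits_3d, dim=-1)
    kl = 0.5 * (F.kl_div(log_p2d, log_p3d, reduction="none", log_target=True).sum(-1)
                + F.kl_div(log_p3d, log_p2d, reduction="none", log_target=True).sum(-1))
    return (weights * (ce + kl)).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    logits_2d = torch.randn(8, 40)   # e.g. a ModelNet40-style 40-way problem
    logits_3d = torch.randn(8, 40)
    labels = torch.randint(0, 40, (8,))
    print(joint_prediction(logits_2d, logits_3d).shape)  # torch.Size([8, 40])
    print(invariant_loss(logits_2d, logits_3d, labels))
```

The two pieces mirror the two ideas in the abstract: the weighting focuses training on samples where the branches are confidently wrong in different ways, and the agreement term encourages predictions that are invariant across the 2D and 3D views of the same object.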