TSG-Seg: temporal-selective guidance for semi-supervised semantic segmentation of 3D LiDAR point clouds
LiDAR-based semantic scene understanding plays a pivotal role in various applications, including remote sensing and autonomous driving. However, most LiDAR segmentation models rely on extensive, densely annotated training datasets, which are extremely laborious to annotate and hinder the widespread adoption of LiDAR systems. Semi-supervised learning (SSL) offers a promising solution by leveraging only a small amount of labeled data alongside a larger set of unlabeled data, aiming to train robust models whose accuracy is comparable to fully supervised learning. A typical SSL pipeline first uses the labeled data to train segmentation models, then uses the predictions generated on unlabeled data as pseudo-ground truths for model retraining. However, the scarcity of labeled data limits the capture of comprehensive representations, constraining the reliability of these pseudo-ground truths. We observed that objects captured by LiDAR sensors from varying perspectives exhibit diverse data characteristics due to occlusions and distance variation, and that LiDAR segmentation models trained with limited labels are susceptible to these viewpoint disparities, resulting in inaccurately predicted pseudo-ground truths across viewpoints and accumulating retraining errors. To address this problem, we introduce the Temporal-Selective Guided Learning (TSG-Seg) framework. TSG-Seg exploits temporal cues inherent in LiDAR frames to bridge cross-viewpoint representations, fostering consistent and robust segmentation predictions across differing viewpoints.
Main Authors: | Xuan, Weihao; Qi, Heli; Xiao, Aoran |
---|---|
Other Authors: | College of Computing and Data Science |
Format: | Article |
Language: | English |
Published: | 2024 |
Subjects: | Computer and Information Science; 3D point cloud; LiDAR |
Online Access: | https://hdl.handle.net/10356/180795 |
Institution: | Nanyang Technological University |
id |
sg-ntu-dr.10356-180795 |
---|---|
record_format |
dspace |
spelling |
sg-ntu-dr.10356-180795 2024-10-28T01:22:09Z
TSG-Seg: temporal-selective guidance for semi-supervised semantic segmentation of 3D LiDAR point clouds
Xuan, Weihao; Qi, Heli; Xiao, Aoran. College of Computing and Data Science. Computer and Information Science; 3D point cloud; LiDAR.
LiDAR-based semantic scene understanding plays a pivotal role in various applications, including remote sensing and autonomous driving. However, most LiDAR segmentation models rely on extensive, densely annotated training datasets, which are extremely laborious to annotate and hinder the widespread adoption of LiDAR systems. Semi-supervised learning (SSL) offers a promising solution by leveraging only a small amount of labeled data alongside a larger set of unlabeled data, aiming to train robust models whose accuracy is comparable to fully supervised learning. A typical SSL pipeline first uses the labeled data to train segmentation models, then uses the predictions generated on unlabeled data as pseudo-ground truths for model retraining. However, the scarcity of labeled data limits the capture of comprehensive representations, constraining the reliability of these pseudo-ground truths. We observed that objects captured by LiDAR sensors from varying perspectives exhibit diverse data characteristics due to occlusions and distance variation, and that LiDAR segmentation models trained with limited labels are susceptible to these viewpoint disparities, resulting in inaccurately predicted pseudo-ground truths across viewpoints and accumulating retraining errors. To address this problem, we introduce the Temporal-Selective Guided Learning (TSG-Seg) framework. TSG-Seg exploits temporal cues inherent in LiDAR frames to bridge cross-viewpoint representations, fostering consistent and robust segmentation predictions across differing viewpoints. Specifically, we first establish point-wise correspondences across LiDAR frames with different timestamps through point registration. Subsequently, reliable point predictions are selected and propagated from adjacent views to the current view, serving as strong, refined supervision signals for subsequent model retraining to achieve better segmentation. We conducted extensive experiments on various SSL labeling setups across multiple public datasets, including SemanticKITTI and SemanticPOSS, to evaluate the effectiveness of TSG-Seg. Our results demonstrate its competitive performance and robustness in diverse scenarios, from data-limited to data-abundant settings. Notably, TSG-Seg achieves a mIoU of 48.6% using only 5% and 62.3% using 40% of the labeled data in the sequential split on SemanticKITTI, consistently outperforming state-of-the-art segmentation methods including GPC and LaserMix. These findings underscore TSG-Seg's superior capability and potential for real-world applications. The project can be found at https://tsgseg.github.io.
2024-10-28T01:22:09Z 2024-10-28T01:22:09Z 2024. Journal Article. Citation: Xuan, W., Qi, H. & Xiao, A. (2024). TSG-Seg: temporal-selective guidance for semi-supervised semantic segmentation of 3D LiDAR point clouds. ISPRS Journal of Photogrammetry and Remote Sensing, 216, 217-228. https://dx.doi.org/10.1016/j.isprsjprs.2024.07.020. ISSN: 0924-2716. Handle: https://hdl.handle.net/10356/180795. DOI: 10.1016/j.isprsjprs.2024.07.020. Scopus: 2-s2.0-85200633791. Volume 216, pages 217-228. en. ISPRS Journal of Photogrammetry and Remote Sensing. © 2024 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights are reserved, including those for text and data mining, AI training, and similar technologies. |
institution |
Nanyang Technological University |
building |
NTU Library |
continent |
Asia |
country |
Singapore |
content_provider |
NTU Library |
collection |
DR-NTU |
language |
English |
topic |
Computer and Information Science; 3D point cloud; LiDAR |
spellingShingle |
Computer and Information Science; 3D point cloud; LiDAR; Xuan, Weihao; Qi, Heli; Xiao, Aoran. TSG-Seg: temporal-selective guidance for semi-supervised semantic segmentation of 3D LiDAR point clouds |
description |
LiDAR-based semantic scene understanding plays a pivotal role in various applications, including remote sensing and autonomous driving. However, most LiDAR segmentation models rely on extensive, densely annotated training datasets, which are extremely laborious to annotate and hinder the widespread adoption of LiDAR systems. Semi-supervised learning (SSL) offers a promising solution by leveraging only a small amount of labeled data alongside a larger set of unlabeled data, aiming to train robust models whose accuracy is comparable to fully supervised learning. A typical SSL pipeline first uses the labeled data to train segmentation models, then uses the predictions generated on unlabeled data as pseudo-ground truths for model retraining. However, the scarcity of labeled data limits the capture of comprehensive representations, constraining the reliability of these pseudo-ground truths. We observed that objects captured by LiDAR sensors from varying perspectives exhibit diverse data characteristics due to occlusions and distance variation, and that LiDAR segmentation models trained with limited labels are susceptible to these viewpoint disparities, resulting in inaccurately predicted pseudo-ground truths across viewpoints and accumulating retraining errors. To address this problem, we introduce the Temporal-Selective Guided Learning (TSG-Seg) framework. TSG-Seg exploits temporal cues inherent in LiDAR frames to bridge cross-viewpoint representations, fostering consistent and robust segmentation predictions across differing viewpoints. Specifically, we first establish point-wise correspondences across LiDAR frames with different timestamps through point registration. Subsequently, reliable point predictions are selected and propagated from adjacent views to the current view, serving as strong, refined supervision signals for subsequent model retraining to achieve better segmentation. We conducted extensive experiments on various SSL labeling setups across multiple public datasets, including SemanticKITTI and SemanticPOSS, to evaluate the effectiveness of TSG-Seg. Our results demonstrate its competitive performance and robustness in diverse scenarios, from data-limited to data-abundant settings. Notably, TSG-Seg achieves a mIoU of 48.6% using only 5% and 62.3% using 40% of the labeled data in the sequential split on SemanticKITTI, consistently outperforming state-of-the-art segmentation methods including GPC and LaserMix. These findings underscore TSG-Seg's superior capability and potential for real-world applications. The project can be found at https://tsgseg.github.io. |
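The selection-and-propagation step described in the abstract can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the function name, thresholds, and brute-force nearest-neighbour matching are assumptions, and it presumes the adjacent frame's points have already been transformed into the current frame by registration.

```python
import numpy as np

def propagate_pseudo_labels(src_pts, src_labels, src_conf,
                            tgt_pts, dist_thresh=0.1, conf_thresh=0.9):
    """Selective pseudo-label propagation (illustrative sketch).

    src_pts:    (N, 3) points from an adjacent frame, already registered
                into the current frame's coordinates.
    src_labels: (N,) predicted class per source point.
    src_conf:   (N,) prediction confidence per source point.
    tgt_pts:    (M, 3) points of the current frame.
    Returns an (M,) label array; -1 marks points with no reliable match.
    """
    tgt_labels = np.full(len(tgt_pts), -1, dtype=np.int64)
    # Brute-force nearest neighbour for clarity; a KD-tree would be
    # used at LiDAR scale.
    d = np.linalg.norm(tgt_pts[:, None, :] - src_pts[None, :, :], axis=2)
    nn = d.argmin(axis=1)
    nn_dist = d[np.arange(len(tgt_pts)), nn]
    # Propagate only geometrically close matches whose source
    # prediction is confident ("reliable point predictions").
    ok = (nn_dist < dist_thresh) & (src_conf[nn] > conf_thresh)
    tgt_labels[ok] = src_labels[nn[ok]]
    return tgt_labels
```

Points left at -1 would simply receive no pseudo-supervision during retraining; the confidence gate is what makes the guidance "selective".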
author2 |
College of Computing and Data Science |
author_facet |
College of Computing and Data Science; Xuan, Weihao; Qi, Heli; Xiao, Aoran |
format |
Article |
author |
Xuan, Weihao; Qi, Heli; Xiao, Aoran |
author_sort |
Xuan, Weihao |
title |
TSG-Seg: temporal-selective guidance for semi-supervised semantic segmentation of 3D LiDAR point clouds |
title_short |
TSG-Seg: temporal-selective guidance for semi-supervised semantic segmentation of 3D LiDAR point clouds |
title_full |
TSG-Seg: temporal-selective guidance for semi-supervised semantic segmentation of 3D LiDAR point clouds |
title_fullStr |
TSG-Seg: temporal-selective guidance for semi-supervised semantic segmentation of 3D LiDAR point clouds |
title_full_unstemmed |
TSG-Seg: temporal-selective guidance for semi-supervised semantic segmentation of 3D LiDAR point clouds |
title_sort |
tsg-seg: temporal-selective guidance for semi-supervised semantic segmentation of 3d lidar point clouds |
publishDate |
2024 |
url |
https://hdl.handle.net/10356/180795 |
_version_ |
1814777796095901696 |