Semantic scene completion with cleaner self

Semantic Scene Completion (SSC) transforms a single-view image of depth and/or RGB 2D pixels into 3D voxels, each of which is assigned a predicted semantic label. SSC is a well-known ill-posed problem, as the prediction model has to “imagine” what lies behind the visible surface, which is usually represented by a Truncated Signed Distance Function (TSDF). Due to the sensory imperfection of the depth camera, most existing methods based on the noisy TSDF estimated from depth values suffer from 1) incomplete volumetric predictions and 2) confused semantic labels. To address these issues, we use the ground-truth 3D voxels to generate a perfect visible surface, called TSDF-CAD, and then train a “cleaner” SSC model. As this model is noise-free, it is expected to focus more on the “imagination” of unseen voxels. We then propose to distill the intermediate “cleaner” knowledge into another model that takes the noisy TSDF as input. In particular, we use the 3D occupancy feature and the semantic relations of the “cleaner self” to supervise the counterparts of the “noisy self”, addressing the two kinds of incorrect predictions respectively. Experimental results validate that the proposed method improves its noisy counterpart by 3.1% IoU for scene completion and 2.2% mIoU for SSC, and achieves new state-of-the-art performance on the popular NYU dataset.
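
The distillation scheme summarized above can be sketched in code. The following is a minimal PyTorch-style sketch, assuming MSE matching for the 3D occupancy features and cosine-similarity affinity matrices for the semantic relations; the tensor shapes, loss weights, and function names are illustrative assumptions, not the authors' exact formulation.

    import torch
    import torch.nn.functional as F

    def occupancy_distillation(student_occ, teacher_occ):
        # Match the student's 3D occupancy feature to the frozen teacher's.
        # Assumed shape: (B, C, D, H, W); MSE is an assumption, not the paper's exact loss.
        return F.mse_loss(student_occ, teacher_occ.detach())

    def semantic_relation_distillation(student_feat, teacher_feat):
        # Match pairwise voxel-to-voxel semantic affinities.
        # Assumed shape: (B, C, N), where N is the number of (downsampled) voxels.
        def affinity(feat):
            feat = F.normalize(feat, dim=1)               # cosine-normalize channels
            return torch.bmm(feat.transpose(1, 2), feat)  # (B, N, N) relation matrix
        return F.mse_loss(affinity(student_feat), affinity(teacher_feat.detach()))

    def total_loss(ssc_loss, student_occ, teacher_occ, student_feat, teacher_feat,
                   w_occ=1.0, w_rel=1.0):
        # Supervised SSC loss on the noisy student plus the two distillation terms;
        # the weights w_occ and w_rel are placeholders.
        return (ssc_loss
                + w_occ * occupancy_distillation(student_occ, teacher_occ)
                + w_rel * semantic_relation_distillation(student_feat, teacher_feat))

In this reading, the “cleaner self” (trained on TSDF-CAD) acts as a frozen teacher and the “noisy self” (fed the estimated TSDF) is optimized with its usual SSC objective plus the two terms above, which respectively target the incomplete volumes and the confused labels described in the abstract.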

Bibliographic Details
Main Authors: WANG, Fengyun; ZHANG, Dong; ZHANG, Hanwang; TANG, Jinhui; SUN, Qianru
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Subjects: Databases and Information Systems; Graphics and Human Computer Interfaces
DOI: 10.48550/arXiv.2303.09977
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Online Access: https://ink.library.smu.edu.sg/sis_research/8100
https://ink.library.smu.edu.sg/context/sis_research/article/9103/viewcontent/Wang_Semantic_Scene_Completion_With_Cleaner_Self_CVPR_2023_paper.pdf
Institution: Singapore Management University