Self-regulation for semantic segmentation

In this paper, we seek reasons for the two major failure cases in Semantic Segmentation (SS): 1) missing small objects or minor object parts, and 2) mislabeling minor parts of large objects as wrong classes. We find that Failure-1 is due to the underuse of detailed features and Failure-2 is due to the underuse of visual contexts. To help the model learn a better trade-off, we introduce several Self-Regulation (SR) losses for training SS neural networks. By “self”, we mean that the losses are from the model per se without using any additional data or supervision. By applying the SR losses, the deep layer features are regulated by the shallow ones to preserve more details; meanwhile, shallow layer classification logits are regulated by the deep ones to capture more semantics. We conduct extensive experiments on both weakly and fully supervised SS tasks, and the results show that our approach consistently surpasses the baselines. We also validate that SR losses are easy to implement in various state-of-the-art SS models, e.g., SPGNet [7] and OCRNet [62], incurring little computational overhead during training and none for testing.
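As a rough illustration of the two regulation directions described in the abstract, the sketch below pairs a feature-level term (deep features pulled toward shallow ones to preserve detail) with a logit-level term (shallow predictions pulled toward deep ones to absorb semantics). The layer choices, the 1x1 projection, and the use of MSE and KL-divergence losses are assumptions made for illustration, not the paper's exact formulation.

```python
# Minimal PyTorch-style sketch of the two Self-Regulation (SR) loss directions.
# Hypothetical: the paper's actual loss definitions, layer pairings, and weights may differ.
import torch.nn.functional as F

def sr_feature_loss(deep_feat, shallow_feat, proj):
    """Regulate deep-layer features with shallow ones to retain spatial detail.

    The shallow feature map serves as a detached reference; `proj` is an assumed
    1x1 convolution that maps deep channels to the shallow channel count.
    """
    target = shallow_feat.detach()
    deep_up = F.interpolate(proj(deep_feat), size=target.shape[-2:],
                            mode="bilinear", align_corners=False)
    return F.mse_loss(deep_up, target)

def sr_logit_loss(shallow_logits, deep_logits, temperature=1.0):
    """Regulate shallow-layer logits with deep ones to inject semantics.

    Knowledge-distillation-style KL term: the deep prediction is detached and
    used as a soft target for the shallow classifier's per-pixel logits.
    """
    t = temperature
    soft_target = F.softmax(deep_logits.detach() / t, dim=1)
    log_pred = F.log_softmax(shallow_logits / t, dim=1)
    return F.kl_div(log_pred, soft_target, reduction="batchmean") * (t * t)
```

In such a setup, both terms would be weighted and added to the standard cross-entropy segmentation loss during training only, which is consistent with the abstract's claim of little training overhead and none at test time.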

Bibliographic Details
Main Authors: ZHANG, Dong, ZHANG, Hanwang, TANG, Jinhui, HUA, Xian-Sheng, SUN, Qianru
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2021
Subjects: Graphics and Human Computer Interfaces
Online Access:https://ink.library.smu.edu.sg/sis_research/6230
https://ink.library.smu.edu.sg/context/sis_research/article/7233/viewcontent/Zhang_Self_Regulation_for_Semantic_Segmentation_ICCV_2021_paper.pdf
Institution: Singapore Management University
Collection: Research Collection School Of Computing and Information Systems
License: http://creativecommons.org/licenses/by-nc-nd/4.0/