Unifying global-local representations in salient object detection with transformers

Fully convolutional networks (FCNs) have dominated salient object detection for a long period. However, the locality of convolutions requires a CNN to be very deep before it obtains a global receptive field, and such depth inevitably sacrifices local detail. In this paper, we introduce an attention-based encoder, the vision transformer, into salient object detection so that representations stay global from shallow to deep layers. Because even very shallow layers already enjoy a global view, the transformer encoder preserves more local information for recovering spatial details in the final saliency map. Moreover, since each layer attends over the full output of its predecessor, adjacent layers implicitly maximize their representation differences and minimize redundant features, so every transformer layer's output contributes uniquely to the final prediction. To decode the transformer features, we propose a simple yet effective deeply-transformed decoder, which densely decodes and upsamples them to produce the final saliency map with less noise injection. Experimental results demonstrate that our method significantly outperforms other FCN-based and transformer-based methods on five benchmarks, with an average improvement of 12.17% in Mean Absolute Error (MAE).
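
The sketch below is a hypothetical illustration of the pipeline the abstract describes: a vision-transformer encoder whose output at every layer is kept (each already carrying a global, self-attention receptive field), and a decoder that densely fuses and upsamples those per-layer features into a full-resolution saliency map. It assumes a PyTorch-style API; the class names TinyViTEncoder and DenseSaliencyDecoder, the dimensions, and the fusion layers are illustrative stand-ins, not the authors' implementation of the deeply-transformed decoder.

# Hypothetical sketch (assumed PyTorch API); not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyViTEncoder(nn.Module):
    """Patch-embeds the image and keeps every layer's output, so even
    shallow features already carry a global (self-attention) view."""
    def __init__(self, img_size=224, patch=16, dim=256, depth=4, heads=4):
        super().__init__()
        self.grid = img_size // patch
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, self.grid * self.grid, dim))
        self.layers = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                       dim_feedforward=dim * 4,
                                       batch_first=True)
            for _ in range(depth)])

    def forward(self, x):
        tokens = self.embed(x).flatten(2).transpose(1, 2) + self.pos  # (B, N, dim)
        feats = []
        for layer in self.layers:
            tokens = layer(tokens)
            feats.append(tokens)          # keep every layer for dense decoding
        return feats

class DenseSaliencyDecoder(nn.Module):
    """Densely fuses all per-layer token features and upsamples them
    into a single-channel, full-resolution saliency map."""
    def __init__(self, dim=256, depth=4, grid=14):
        super().__init__()
        self.grid = grid
        self.fuse = nn.Conv2d(dim * depth, dim, kernel_size=1)
        self.head = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, 1, 1))

    def forward(self, feats, out_size):
        maps = [f.transpose(1, 2).reshape(f.size(0), -1, self.grid, self.grid)
                for f in feats]                       # tokens -> 2D feature maps
        fused = self.fuse(torch.cat(maps, dim=1))     # dense multi-layer fusion
        return F.interpolate(self.head(fused), size=out_size,
                             mode="bilinear", align_corners=False)

encoder, decoder = TinyViTEncoder(), DenseSaliencyDecoder()
image = torch.randn(1, 3, 224, 224)
saliency = torch.sigmoid(decoder(encoder(image), out_size=image.shape[-2:]))
print(saliency.shape)  # torch.Size([1, 1, 224, 224])

In the actual model the encoder would be a pretrained vision transformer and the decoder the paper's deeply-transformed variant; the sketch only shows the data flow from per-layer tokens to a dense saliency prediction.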

Bibliographic Details
Main Authors: REN, Sucheng, ZHAO, Nanxuan, WEN, Qiang, HAN, Guoqiang, HE, Shengfeng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects: Transformer; salient object detection; Artificial Intelligence and Robotics; Graphics and Human Computer Interfaces
DOI: 10.1109/TETCI.2024.3380442
Online Access: https://ink.library.smu.edu.sg/sis_research/9769
https://ink.library.smu.edu.sg/context/sis_research/article/10769/viewcontent/2108.02759v2__1_.pdf
Rights: http://creativecommons.org/licenses/by-nc-nd/4.0/
Institution: Singapore Management University