Boundary-aware RGBD salient object detection with cross-modal feature sampling

Mobile devices are often equipped with a depth sensor to help resolve ill-posed problems such as salient object detection against a cluttered background. The main barrier to exploiting RGBD data is handling information from two different modalities. To address this problem, in this paper we propose a boundary-aware cross-modal fusion network for RGBD salient object detection. In particular, to enhance the fusion of color and depth features, we present a cross-modal feature sampling module that balances the contributions of the RGB and depth features based on the statistics of their channel values. In addition, within our multi-scale dense fusion network architecture, we not only incorporate edge-sensitive losses to preserve the boundary of the detected salient region, but also refine its structure by merging the saliency maps estimated at different scales. The multi-scale saliency maps are merged with one of two alternative methods, which produce refined saliency maps via a per-pixel weighted combination or an encoder-decoder network. Extensive experimental evaluations demonstrate that the proposed framework achieves state-of-the-art performance on several public RGBD datasets.
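The abstract describes the cross-modal feature sampling module only at a high level. Below is a minimal PyTorch-style sketch of one way to balance RGB and depth features using their channel statistics; the class name, the global-average-pooled statistics, and the small gating MLP are assumptions made for illustration, not the authors' published implementation.

```python
# Minimal sketch of channel-statistics-based RGB/depth feature balancing.
# Illustrative interpretation of the abstract only: the class name, the
# pooled-statistics input, and the gating MLP are assumptions, not the
# authors' published design.
import torch
import torch.nn as nn


class CrossModalFeatureSampling(nn.Module):
    """Re-weights RGB and depth feature channels from their global statistics."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Maps the pooled channel statistics of both modalities to fusion weights.
        self.gate = nn.Sequential(
            nn.Linear(2 * channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 2 * channels),
        )

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = rgb_feat.shape
        # Channel statistics via global average pooling: one scalar per channel.
        stats = torch.cat(
            [rgb_feat.mean(dim=(2, 3)), depth_feat.mean(dim=(2, 3))], dim=1
        )  # (B, 2C)
        # Per-channel weights for each modality, normalized across the two modalities.
        weights = self.gate(stats).view(b, 2, c, 1, 1).softmax(dim=1)
        # Weighted combination balances the contribution of RGB and depth.
        return weights[:, 0] * rgb_feat + weights[:, 1] * depth_feat


if __name__ == "__main__":
    fuse = CrossModalFeatureSampling(channels=64)
    rgb, depth = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
    print(fuse(rgb, depth).shape)  # torch.Size([2, 64, 32, 32])
```

Normalizing the per-channel weights across the two modalities (the softmax over the modality dimension) is one simple way to keep their contributions balanced; the actual module may use a different weighting scheme.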

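The per-pixel weighted combination used for merging the multi-scale saliency maps can be sketched in a similar spirit. The class name, the single-convolution weight head, and the bilinear upsampling are hypothetical choices; the paper's alternative encoder-decoder merging network is not reproduced here.

```python
# Minimal sketch of per-pixel weighted merging of multi-scale saliency maps.
# The weight head and bilinear upsampling are hypothetical; the paper also
# describes an alternative encoder-decoder merging network not shown here.
from typing import List

import torch
import torch.nn as nn
import torch.nn.functional as F


class PerPixelSaliencyMerge(nn.Module):
    """Fuses single-channel saliency maps from several scales with learned per-pixel weights."""

    def __init__(self, num_scales: int):
        super().__init__()
        # Predicts one weight map per scale from the stacked saliency maps.
        self.weight_head = nn.Conv2d(num_scales, num_scales, kernel_size=3, padding=1)

    def forward(self, saliency_maps: List[torch.Tensor]) -> torch.Tensor:
        # Upsample every map to the resolution of the first (finest) scale.
        target_size = saliency_maps[0].shape[-2:]
        stacked = torch.cat(
            [
                F.interpolate(s, size=target_size, mode="bilinear", align_corners=False)
                for s in saliency_maps
            ],
            dim=1,
        )  # (B, num_scales, H, W)
        # Per-pixel weights over the scales, normalized with a softmax.
        weights = self.weight_head(stacked).softmax(dim=1)
        # Weighted per-pixel combination yields the refined saliency map.
        return (weights * stacked).sum(dim=1, keepdim=True)
```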

Bibliographic Details
Main Authors: NIU, Yuzhen; LONG, Guanchao; LIU, Wenxi; GUO, Wenzhong; HE, Shengfeng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2020
Subjects: Salient object detection; cross-modal; boundary-aware estimation; Information Security
Online Access: https://ink.library.smu.edu.sg/sis_research/7847
DOI: 10.1109/TIP.2020.3028170
Institution: Singapore Management University
Collection: Research Collection School Of Computing and Information Systems, InK@SMU