RGBD salient object detection via deep fusion

Numerous efforts have been made to design various low-level saliency cues for RGBD saliency detection, such as color and depth contrast features as well as background and color compactness priors. However, how these low-level saliency cues interact with each other and how they can be effectively incorporated to generate a master saliency map remain challenging problems. In this paper, we design a new convolutional neural network (CNN) to automatically learn the interaction mechanism for RGBD salient object detection. In contrast to existing works, in which raw image pixels are fed directly to the CNN, the proposed method takes advantage of the knowledge obtained in traditional saliency detection by adopting various flexible and interpretable saliency feature vectors as inputs. This guides the CNN to learn a combination of existing features to predict saliency more effectively, which presents a less complex problem than operating on the pixels directly. We then integrate a superpixel-based Laplacian propagation framework with the trained CNN to extract a spatially consistent saliency map by exploiting the intrinsic structure of the input image. Extensive quantitative and qualitative experimental evaluations on three data sets demonstrate that the proposed method consistently outperforms the state-of-the-art methods.
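The two-stage pipeline described in the abstract (a CNN fed with hand-crafted saliency feature vectors rather than raw pixels, followed by superpixel-based Laplacian propagation) can be pictured with the minimal sketch below. It is an illustration only, not the authors' implementation: the layer sizes, the four assumed cue maps, the CueFusionCNN and laplacian_propagate names, and the affinity and propagation parameters are all assumptions made for the example.

```python
# Minimal sketch of the two-stage idea: (1) fuse low-level saliency cue maps
# with a small CNN, (2) smooth the result with Laplacian propagation over a
# superpixel affinity graph.  Sizes and parameters are illustrative only.
import numpy as np
import torch
import torch.nn as nn


class CueFusionCNN(nn.Module):
    """Fuse precomputed cue maps (e.g. color/depth contrast, background and
    color-compactness priors) instead of raw RGB-D pixels."""

    def __init__(self, num_cues: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_cues, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1), nn.Sigmoid(),  # saliency in [0, 1]
        )

    def forward(self, cue_stack: torch.Tensor) -> torch.Tensor:
        # cue_stack: (batch, num_cues, H, W) stack of cue maps
        return self.net(cue_stack)


def laplacian_propagate(scores: np.ndarray, weights: np.ndarray, alpha: float = 0.99) -> np.ndarray:
    """Closed-form propagation over superpixels:
    s* = (I - alpha * D^{-1/2} W D^{-1/2})^{-1} y,
    where W holds pairwise superpixel affinities and y is the coarse score."""
    d = weights.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = d_inv_sqrt @ weights @ d_inv_sqrt
    n = weights.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, scores)


if __name__ == "__main__":
    cnn = CueFusionCNN(num_cues=4)
    cue_stack = torch.rand(1, 4, 64, 64)       # fake cue maps for illustration
    coarse = cnn(cue_stack)                    # coarse saliency map
    # Toy 3-superpixel affinity graph and per-superpixel coarse scores
    W = np.array([[0.0, 0.8, 0.1],
                  [0.8, 0.0, 0.2],
                  [0.1, 0.2, 0.0]])
    y = np.array([0.9, 0.4, 0.1])
    print(coarse.shape, laplacian_propagate(y, W))
```

The closed-form solve in laplacian_propagate follows the standard manifold-ranking form s* = (I - alpha*S)^(-1) y; the exact propagation objective used in the paper may differ.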

Bibliographic Details
Main Authors: QU, Liangqiong; HE, Shengfeng; ZHANG, Jiawei; TIAN, Jiandong; TANG, Yandong; YANG, Qingxiong
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2017
Subjects: RGBD saliency detection; Convolutional neural network; Laplacian propagation; Information Security
Online Access: https://ink.library.smu.edu.sg/sis_research/7879
DOI: 10.1109/TIP.2017.2682981
Collection: Research Collection School Of Computing and Information Systems (InK@SMU)
Institution: Singapore Management University