Exemplar-driven top-down saliency detection via deep association
Top-down saliency detection is a knowledge-driven search task. While some previous methods aim to learn this "knowledge" from category-specific data, others transfer existing annotations from a large dataset through appearance matching. In contrast, we propose in this paper a locate-by-exemplar strategy. This approach is challenging, as we use only a few exemplars (up to four) and the appearances of the query object and the exemplars can be very different. To address this, we design a two-stage deep model that learns the intra-class association between the exemplars and query objects. The first stage learns object-to-object association, and the second stage learns background discrimination. Extensive experimental evaluations show that the proposed method outperforms various baselines as well as category-specific models. In addition, we explore the influence of exemplar properties, in terms of exemplar number and quality. Furthermore, we show that the learned model is universal and generalizes well to unseen objects.
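The two-stage idea in the abstract (first associate query regions with the exemplars, then suppress background) can be illustrated with a minimal schematic sketch. This is not the paper's actual network: the embeddings, the cosine-similarity scoring, the `bg_score` threshold, and all function names below are illustrative stand-ins for the learned deep stages.

```python
import numpy as np

def associate(exemplars, query_patches):
    """Stage 1 (schematic): score each query patch by its best cosine
    similarity to any of the (up to 4) exemplar embeddings."""
    def unit(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    sims = unit(query_patches) @ unit(exemplars).T  # shape (patches, exemplars)
    return sims.max(axis=1)                         # best-matching exemplar per patch

def suppress_background(scores, bg_score):
    """Stage 2 (schematic): zero out patches whose association score
    does not beat a background score (a threshold here, learned in the paper)."""
    return np.where(scores > bg_score, scores, 0.0)

rng = np.random.default_rng(0)
exemplars = rng.normal(size=(4, 8))  # up to 4 exemplar embeddings, 8-dim toy features
query_patches = np.vstack([
    exemplars[0] + 0.05 * rng.normal(size=8),  # an object-like patch near exemplar 0
    rng.normal(size=(3, 8)),                   # three unrelated background patches
])
saliency = suppress_background(associate(exemplars, query_patches), bg_score=0.8)
print(saliency)  # the object-like patch keeps a high score; most background is zeroed
```

The sketch only conveys the control flow: matching against a small exemplar set, then discriminating against background, rather than classifying into fixed categories.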
Saved in: Institutional Knowledge at Singapore Management University

| Field | Value |
|---|---|
| Main Authors | HE, Shengfeng; LAU, Rynson W. H.; YANG, Qingxiong |
| Format | text |
| Language | English |
| Published | Institutional Knowledge at Singapore Management University, 2016 |
| Subjects | Computer vision; Visualization; Feature extraction; Network architecture; Artificial Intelligence and Robotics; Graphics and Human Computer Interfaces; Systems Architecture |
| Online Access | https://ink.library.smu.edu.sg/sis_research/8427 https://ink.library.smu.edu.sg/context/sis_research/article/9430/viewcontent/He_Exemplar_Driven_Top_Down_Saliency_CVPR_2016_paper.pdf |
| Institution | Singapore Management University |
Record details:

| Field | Value |
|---|---|
| id | sg-smu-ink.sis_research-9430 |
| record_format | dspace |
| last_modified | 2024-01-09T03:29:00Z |
| publishDate | 2016-06-01 |
| doi | 10.1109/CVPR.2016.617 |
| license | http://creativecommons.org/licenses/by-nc-nd/4.0/ |
| collection | Research Collection School Of Computing and Information Systems (InK@SMU, SMU Libraries) |