Click-through-based subspace learning for image search

One of the fundamental problems in image search is to rank image documents according to a given textual query. In this paper we address two limitations of existing image search engines. First, there is no straightforward way to compare textual keywords with visual image content; image search engines therefore depend heavily on surrounding texts, which are often noisy or too sparse to describe the image content accurately. Second, ranking functions are trained on query-image pairs labeled by human annotators, which makes the annotation intellectually expensive and hard to scale up. We demonstrate that these two fundamental challenges can be mitigated by jointly exploring subspace learning and the use of click-through data. The former creates a latent subspace in which information from the originally incomparable views (i.e., the textual and visual views) can be compared, while the latter exploits the abundant and freely accessible click-through data (i.e., “crowdsourced” human intelligence) for query understanding. Specifically, we investigate a series of click-through-based subspace learning techniques (CSL) for image search. In experiments on the MSR-Bing Grand Challenge, the final evaluation performance reaches DCG@25 = 0.47225. Moreover, the feature dimension is reduced by several orders of magnitude (e.g., from thousands to tens).
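The abstract describes learning a shared latent subspace in which textual queries and visual image features become directly comparable, supervised by clicked query-image pairs. As a minimal illustrative sketch of that two-view idea only, the snippet below uses classical canonical correlation analysis (CCA) as a stand-in for the paper's CSL objective; the dimensions, synthetic "click-through" data, and the choice of CCA are assumptions for illustration, not the authors' formulation.

```python
# Illustrative two-view subspace learning from clicked (query, image)
# feature pairs, via classical CCA. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

def fit_cca(Q, V, k, reg=1e-3):
    """Learn projections Wq, Wv that map query features (d_q dims) and
    image features (d_v dims) into a shared k-dim subspace in which
    clicked pairs are maximally correlated."""
    Q = Q - Q.mean(axis=0)
    V = V - V.mean(axis=0)
    n = Q.shape[0]
    Cqq = Q.T @ Q / n + reg * np.eye(Q.shape[1])  # regularized covariances
    Cvv = V.T @ V / n + reg * np.eye(V.shape[1])
    Cqv = Q.T @ V / n                              # cross-covariance
    iLq = np.linalg.inv(np.linalg.cholesky(Cqq))   # whitening transforms
    iLv = np.linalg.inv(np.linalg.cholesky(Cvv))
    # SVD of the whitened cross-covariance gives the canonical directions.
    U, s, Vt = np.linalg.svd(iLq @ Cqv @ iLv.T, full_matrices=False)
    return iLq.T @ U[:, :k], iLv.T @ Vt.T[:, :k]

# Toy click-through data: 500 clicked pairs sharing a 10-dim latent factor.
n, d_q, d_v, k = 500, 1000, 2000, 10   # thousands of dims -> tens
Z = rng.normal(size=(n, k))
Q = Z @ rng.normal(size=(k, d_q)) + 0.1 * rng.normal(size=(n, d_q))
V = Z @ rng.normal(size=(k, d_v)) + 0.1 * rng.normal(size=(n, d_v))

Wq, Wv = fit_cca(Q, V, k)
print(Wq.shape, Wv.shape)  # (1000, 10) (2000, 10)
```

At retrieval time, a query's textual features and each candidate image's visual features would be projected through `Wq` and `Wv` respectively and ranked by similarity in the shared k-dimensional space.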


Bibliographic Details
Main Authors: PAN, Yingwei, YAO, Ting, TIAN, Xinmei, LI, Houqiang, NGO, Chong-wah
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2014
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/6529
https://ink.library.smu.edu.sg/context/sis_research/article/7532/viewcontent/2647868.2656404.pdf
Institution: Singapore Management University
id sg-smu-ink.sis_research-7532
record_format dspace
2014-11-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/6529 info:doi/10.1145/2647868.2656404 https://ink.library.smu.edu.sg/context/sis_research/article/7532/viewcontent/2647868.2656404.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Click-through data DNN image representation Image search Subspace learning Databases and Information Systems Data Storage Systems Graphics and Human Computer Interfaces
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Click-through data
DNN image representation
Image search
Subspace learning
Databases and Information Systems
Data Storage Systems
Graphics and Human Computer Interfaces
description One of the fundamental problems in image search is to rank image documents according to a given textual query. In this paper we address two limitations of existing image search engines. First, there is no straightforward way to compare textual keywords with visual image content; image search engines therefore depend heavily on surrounding texts, which are often noisy or too sparse to describe the image content accurately. Second, ranking functions are trained on query-image pairs labeled by human annotators, which makes the annotation intellectually expensive and hard to scale up. We demonstrate that these two fundamental challenges can be mitigated by jointly exploring subspace learning and the use of click-through data. The former creates a latent subspace in which information from the originally incomparable views (i.e., the textual and visual views) can be compared, while the latter exploits the abundant and freely accessible click-through data (i.e., “crowdsourced” human intelligence) for query understanding. Specifically, we investigate a series of click-through-based subspace learning techniques (CSL) for image search. In experiments on the MSR-Bing Grand Challenge, the final evaluation performance reaches DCG@25 = 0.47225. Moreover, the feature dimension is reduced by several orders of magnitude (e.g., from thousands to tens).
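The reported figure DCG@25 = 0.47225 is a discounted cumulative gain truncated at rank 25. A minimal sketch of the metric follows, assuming the scoring convention commonly attributed to the MSR-Bing challenge (normalizing constant 0.01757 and relevance grades Excellent = 3, Good = 2, Bad = 0); the official scorer may differ in details.

```python
# Sketch of DCG@25, assuming the MSR-Bing convention:
#   DCG@25 = 0.01757 * sum_{i=1..25} (2^rel_i - 1) / log2(i + 1)
# The constant 0.01757 normalizes 25 Excellent (rel=3) results to ~1.0.
import math

def dcg_at_25(rels):
    """rels: relevance grades of the ranked images, best-ranked first."""
    return 0.01757 * sum(
        (2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(rels[:25])
    )

print(round(dcg_at_25([3] * 25), 3))  # -> 1.0
```

Under this convention, a score of 0.47225 means the ranked list earns roughly 47% of the gain of an ideal all-Excellent top-25.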
format text
author PAN, Yingwei
YAO, Ting
TIAN, Xinmei
LI, Houqiang
NGO, Chong-wah
title Click-through-based subspace learning for image search
publisher Institutional Knowledge at Singapore Management University
publishDate 2014
url https://ink.library.smu.edu.sg/sis_research/6529
https://ink.library.smu.edu.sg/context/sis_research/article/7532/viewcontent/2647868.2656404.pdf