Visual-textual joint relevance learning for tag-based social image search
With the popularity of social media websites, extensive research efforts have been dedicated to tag-based social image search. Both visual information and tags have been investigated in this research field. However, most existing methods use tags and visual characteristics either separately or sequentially to estimate the relevance of images. In this paper, we propose an approach that simultaneously utilizes both visual and textual information to estimate the relevance of user-tagged images. The relevance estimation is determined with a hypergraph learning approach. In this method, a social image hypergraph is constructed, where vertices represent images and hyperedges represent visual or textual terms. Learning is achieved with the use of a set of pseudo-positive images, where the weights of hyperedges are updated throughout the learning process. In this way, the impact of different tags and visual words can be automatically modulated. Finally, comparative results of the experiments conducted on a dataset including 370+ images are presented, which demonstrate the effectiveness of the proposed approach.
Main Authors: GAO, Yue; WANG, Meng; ZHA, Zheng-Jun; SHEN, Jialie; LI, Xuelong; WU, Xindong
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2013
Subjects: Hypergraph Learning; Social image search; Tag; Visual-textual; Databases and Information Systems
Online Access: https://ink.library.smu.edu.sg/sis_research/1511
https://ink.library.smu.edu.sg/context/sis_research/article/2510/viewcontent/VisualTextualJointRelevanceLearningTagBasedSocialImage_2013.pdf
DOI: 10.1109/TIP.2012.2202676
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems
Institution: Singapore Management University
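
A note on the method: the abstract describes relevance estimation as score propagation on a hypergraph whose vertices are images and whose hyperedges group the images sharing a tag or a visual word, with hyperedge weights learned from a set of pseudo-positive images. The Python sketch below illustrates that scheme under two stated assumptions: the propagation step uses the standard normalized hypergraph transduction update (Theta = Dv^-1/2 H W De^-1 H^T Dv^-1/2), and the weight update is a simplified heuristic stand-in for the regularized weight optimization in the paper. The names hypergraph_relevance, alpha, eta, and iters are illustrative, not taken from the paper.

import numpy as np

def hypergraph_relevance(H, y, alpha=0.9, eta=0.1, iters=20):
    # H: (n_images, n_edges) binary incidence matrix; each hyperedge groups
    #    the images that share one tag or one visual word.
    # y: (n_images,) initial scores; 1 for pseudo-positive images, else 0.
    n, m = H.shape
    w = np.ones(m)                    # start from uniform hyperedge weights
    f = y.astype(float).copy()
    for _ in range(iters):
        de = np.maximum(H.sum(axis=0), 1e-12)    # hyperedge degrees
        dv = np.maximum(H @ w, 1e-12)            # weighted vertex degrees
        Dv = np.diag(1.0 / np.sqrt(dv))
        # normalized propagation matrix Theta = Dv^-1/2 H W De^-1 H^T Dv^-1/2
        Theta = Dv @ H @ np.diag(w / de) @ H.T @ Dv
        f = alpha * (Theta @ f) + (1.0 - alpha) * y   # propagate relevance
        # heuristic weight update: raise the weight of hyperedges whose member
        # images currently score high, then renormalize to keep sum(w) = m
        w = (1.0 - eta) * w + eta * (H.T @ f) / de
        w *= m / w.sum()
    return f, w

A toy run, with one image taken as the pseudo-positive seed:

# 5 images, 3 hyperedges (say, two tags and one visual word)
H = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [1, 0, 0],
              [0, 0, 1]], dtype=float)
y = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # image 0 as pseudo-positive
scores, weights = hypergraph_relevance(H, y)
ranking = np.argsort(-scores)              # images ranked by estimated relevance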