Unsupervised multi-graph cross-modal hashing for large-scale multimedia retrieval
With the advance of Internet and multimedia technologies, large-scale multi-modal representation techniques such as cross-modal hashing are increasingly demanded for multimedia retrieval. In cross-modal hashing, three essential problems should be seriously considered. The first is that an effective cross-modal relationship should be learned from training data with scarce label information. The second is that appropriate weights should be assigned to different modalities to reflect their importance. The last is the scalability of the training process, which is usually ignored by previous methods. In this paper, we propose Multi-graph Cross-modal Hashing (MGCMH), which comprehensively considers these three points. MGCMH is an unsupervised method that integrates multi-graph learning and hash function learning into a joint framework to learn a unified hash space for all modalities. In MGCMH, different modalities are assigned proper weights for the generation of the multi-graph and hash codes respectively. As a result, a more precise cross-modal relationship can be preserved in the hash space. The Nyström approximation approach is then leveraged to construct the graphs efficiently. Finally, an alternating learning algorithm is proposed to jointly optimize the modality weights, hash codes and hash functions. Experiments conducted on two real-world multi-modal datasets demonstrate the effectiveness of our method in comparison with several representative cross-modal hashing methods.
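To illustrate the general idea behind cross-modal hashing (this is a generic sketch, not the authors' MGCMH code): every modality is mapped into a shared binary code space, so a text query can retrieve images by Hamming-distance ranking. The random projection matrices below are hypothetical stand-ins for the learned modality-specific hash functions.

```python
import numpy as np

rng = np.random.default_rng(0)
bits, d_img, d_txt = 32, 512, 300

# Stand-ins for learned modality-specific hash functions (random here;
# in a real method these would come from the training objective).
W_img = rng.normal(size=(d_img, bits))
W_txt = rng.normal(size=(d_txt, bits))

def hash_codes(X, W):
    # Project into the unified space, then binarize by sign -> {0, 1} bits.
    return (X @ W > 0).astype(np.uint8)

def hamming(q, db):
    # Hamming distance between one query code and every database code.
    return (q ^ db).sum(axis=1)

# Cross-modal retrieval: a text query searches an image database,
# because both modalities live in the same 32-bit code space.
imgs = rng.normal(size=(1000, d_img))
query = rng.normal(size=(1, d_txt))
db_codes = hash_codes(imgs, W_img)
q_code = hash_codes(query, W_txt)[0]
ranking = np.argsort(hamming(q_code, db_codes))  # best matches first
```

With learned (rather than random) projections, semantically related items from different modalities receive nearby codes, which is the property the paper's joint multi-graph objective is designed to enforce.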
Saved in:
Main Authors: XIE, Liang; ZHU, Lei; CHEN, Guoqi
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2016
Subjects: Cross-modal hashing; Multi-graph learning; Cross-media retrieval; Computer Sciences; Databases and Information Systems; Numerical Analysis and Scientific Computing
Online Access: https://ink.library.smu.edu.sg/sis_research/4437 https://ink.library.smu.edu.sg/context/sis_research/article/5440/viewcontent/Unsupervised_multi_graph_cross_modal_2016_av.pdf
Institution: Singapore Management University
id: sg-smu-ink.sis_research-5440
record_format: dspace
spelling: sg-smu-ink.sis_research-5440 2019-10-07T08:45:37Z Unsupervised multi-graph cross-modal hashing for large-scale multimedia retrieval XIE, Liang; ZHU, Lei; CHEN, Guoqi. With the advance of Internet and multimedia technologies, large-scale multi-modal representation techniques such as cross-modal hashing are increasingly demanded for multimedia retrieval. In cross-modal hashing, three essential problems should be seriously considered. The first is that an effective cross-modal relationship should be learned from training data with scarce label information. The second is that appropriate weights should be assigned to different modalities to reflect their importance. The last is the scalability of the training process, which is usually ignored by previous methods. In this paper, we propose Multi-graph Cross-modal Hashing (MGCMH), which comprehensively considers these three points. MGCMH is an unsupervised method that integrates multi-graph learning and hash function learning into a joint framework to learn a unified hash space for all modalities. In MGCMH, different modalities are assigned proper weights for the generation of the multi-graph and hash codes respectively. As a result, a more precise cross-modal relationship can be preserved in the hash space. The Nyström approximation approach is then leveraged to construct the graphs efficiently. Finally, an alternating learning algorithm is proposed to jointly optimize the modality weights, hash codes and hash functions. Experiments conducted on two real-world multi-modal datasets demonstrate the effectiveness of our method in comparison with several representative cross-modal hashing methods. 2016-08-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/4437 info:doi/10.1007/s11042-016-3432-0 https://ink.library.smu.edu.sg/context/sis_research/article/5440/viewcontent/Unsupervised_multi_graph_cross_modal_2016_av.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Cross-modal hashing Multi-graph learning Cross-media retrieval Computer Sciences Databases and Information Systems Numerical Analysis and Scientific Computing
institution: Singapore Management University
building: SMU Libraries
continent: Asia
country: Singapore
content_provider: SMU Libraries
collection: InK@SMU
language: English
topic: Cross-modal hashing; Multi-graph learning; Cross-media retrieval; Computer Sciences; Databases and Information Systems; Numerical Analysis and Scientific Computing
spellingShingle: Cross-modal hashing; Multi-graph learning; Cross-media retrieval; Computer Sciences; Databases and Information Systems; Numerical Analysis and Scientific Computing; XIE, Liang; ZHU, Lei; CHEN, Guoqi; Unsupervised multi-graph cross-modal hashing for large-scale multimedia retrieval
description: With the advance of Internet and multimedia technologies, large-scale multi-modal representation techniques such as cross-modal hashing are increasingly demanded for multimedia retrieval. In cross-modal hashing, three essential problems should be seriously considered. The first is that an effective cross-modal relationship should be learned from training data with scarce label information. The second is that appropriate weights should be assigned to different modalities to reflect their importance. The last is the scalability of the training process, which is usually ignored by previous methods. In this paper, we propose Multi-graph Cross-modal Hashing (MGCMH), which comprehensively considers these three points. MGCMH is an unsupervised method that integrates multi-graph learning and hash function learning into a joint framework to learn a unified hash space for all modalities. In MGCMH, different modalities are assigned proper weights for the generation of the multi-graph and hash codes respectively. As a result, a more precise cross-modal relationship can be preserved in the hash space. The Nyström approximation approach is then leveraged to construct the graphs efficiently. Finally, an alternating learning algorithm is proposed to jointly optimize the modality weights, hash codes and hash functions. Experiments conducted on two real-world multi-modal datasets demonstrate the effectiveness of our method in comparison with several representative cross-modal hashing methods.
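The description mentions the Nyström approximation for efficient graph construction. The sketch below illustrates the general Nyström technique only (assuming an RBF affinity and uniform landmark sampling; it is not the authors' implementation): the full n × n affinity matrix is approximated from affinities to m ≪ n landmark points, so only O(n·m) kernel evaluations are needed instead of O(n²).

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    # Pairwise RBF affinities between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_affinity(X, m, gamma=0.1, seed=0):
    """Nyström low-rank factors of the n x n affinity matrix W:
    W ≈ C @ pinv(W_mm) @ C.T, using m randomly sampled landmarks.
    Only n*m + m*m kernel evaluations are ever computed."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    L = X[idx]                        # landmark points
    C = rbf_kernel(X, L, gamma)       # n x m cross-affinities
    W_mm = rbf_kernel(L, L, gamma)    # m x m landmark block
    return C, np.linalg.pinv(W_mm)

# Usage: approximate a 1000 x 1000 affinity matrix from 50 landmarks.
X = np.random.default_rng(1).normal(size=(1000, 8))
C, W_inv = nystrom_affinity(X, m=50)
W_approx = C @ W_inv @ C.T  # materialized here only for illustration
```

In practice one keeps the factors `C` and `W_inv` and multiplies vectors through them, so the full n × n graph is never stored; this is what makes graph-based learning scale to large training sets.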
format: text
author: XIE, Liang; ZHU, Lei; CHEN, Guoqi
author_facet: XIE, Liang; ZHU, Lei; CHEN, Guoqi
author_sort: XIE, Liang
title: Unsupervised multi-graph cross-modal hashing for large-scale multimedia retrieval
title_short: Unsupervised multi-graph cross-modal hashing for large-scale multimedia retrieval
title_full: Unsupervised multi-graph cross-modal hashing for large-scale multimedia retrieval
title_fullStr: Unsupervised multi-graph cross-modal hashing for large-scale multimedia retrieval
title_full_unstemmed: Unsupervised multi-graph cross-modal hashing for large-scale multimedia retrieval
title_sort: unsupervised multi-graph cross-modal hashing for large-scale multimedia retrieval
publisher: Institutional Knowledge at Singapore Management University
publishDate: 2016
url: https://ink.library.smu.edu.sg/sis_research/4437 https://ink.library.smu.edu.sg/context/sis_research/article/5440/viewcontent/Unsupervised_multi_graph_cross_modal_2016_av.pdf
_version_: 1770574795582210048