Cross-Modal Self-Taught Hashing for large-scale image retrieval
Cross-modal hashing combines the advantages of traditional cross-modal retrieval and hashing, so it can address large-scale cross-modal retrieval both effectively and efficiently. However, existing cross-modal hashing methods either rely on labeled training data or lack semantic analysis. In this paper, we...
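As a minimal illustration of why binary hash codes make large-scale retrieval efficient, the sketch below ranks database image codes against a text query code by Hamming distance. It is not the method proposed in this paper: the random projections, code length, and feature dimensions are stand-in assumptions for a learned cross-modal hash function.

```python
# Minimal sketch of hashing-based cross-modal retrieval (illustrative only).
# Random projections stand in for learned hash functions; sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_db, feat_dim, n_bits = 10000, 128, 64

# A learned cross-modal hashing method would replace these random projections.
W_image = rng.standard_normal((feat_dim, n_bits))
W_text = rng.standard_normal((feat_dim, n_bits))

def hash_codes(features, W):
    """Project features and binarize into {0, 1} hash codes."""
    return (features @ W > 0).astype(np.uint8)

# Database of image codes and one text query code (cross-modal setting).
db_image_codes = hash_codes(rng.standard_normal((n_db, feat_dim)), W_image)
query_text_code = hash_codes(rng.standard_normal((1, feat_dim)), W_text)

# Hamming distance reduces ranking to cheap bitwise comparisons.
hamming = np.count_nonzero(db_image_codes != query_text_code, axis=1)
top10 = np.argsort(hamming)[:10]
print(top10, hamming[top10])
```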
Main Authors: XIE, Liang; ZHU, Lei; PAN, Peng; LU, Yansheng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2016
Online Access: https://ink.library.smu.edu.sg/sis_research/3587
https://ink.library.smu.edu.sg/context/sis_research/article/4588/viewcontent/cross_modal__1_.pdf
Institution: Singapore Management University
Similar Items
- Learning a cross-modal hashing network for multimedia search
  by: Tan, Yap Peng, et al.
  Published: (2018)
- Alleviating the inconsistency of multimodal data in cross-modal retrieval
  by: Li, Tieying, et al.
  Published: (2024)
- Unsupervised multi-graph cross-modal hashing for large-scale multimedia retrieval
  by: XIE, Liang, et al.
  Published: (2016)
- Online cross-modal hashing for web image retrieval
  by: XIE, Liang, et al.
  Published: (2016)
- Cross-modal recipe retrieval with stacked attention model
  by: CHEN, Jing-Jing, et al.
  Published: (2018)