Deep robust multilevel semantic hashing for multi-label cross-modal retrieval
Hashing-based cross-modal retrieval has recently made significant progress, but straightforwardly embedding data from different modalities with rich semantics into a joint Hamming space inevitably produces false codes because of the intrinsic modality discrepancy and noise. We present a novel deep Robust Multilevel Semantic Hashing (RMSH) method for more accurate multi-label cross-modal retrieval. It preserves fine-grained similarity among data with rich semantics, i.e., multiple labels, while explicitly requiring the distances between dissimilar points to exceed a specific value for strong robustness. We give an effective bound on this value based on an information coding-theoretic analysis, and both goals are embodied in a margin-adaptive triplet loss. Furthermore, we introduce pseudo-codes, obtained by fusing multiple hash codes, to explore seldom-seen semantics and alleviate the sparsity of similarity information. Experiments on three benchmarks validate the derived bounds, and our method achieves state-of-the-art performance.
Main Authors: Song, Ge; Tan, Xiaoyang; Zhao, Jun; Yang, Ming
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Subjects: Engineering::Computer science and engineering; Hashing; Multi-Label
Online Access: https://hdl.handle.net/10356/164098
Institution: Nanyang Technological University
Language: English
id
sg-ntu-dr.10356-164098
record_format
dspace
spelling
sg-ntu-dr.10356-1640982023-01-04T08:31:38Z Deep robust multilevel semantic hashing for multi-label cross-modal retrieval Song, Ge Tan, Xiaoyang Zhao, Jun Yang, Ming School of Computer Science and Engineering Engineering::Computer science and engineering Hashing Multi-Label Hashing-based cross-modal retrieval has recently made significant progress. But straightforwardly embedding data from different modalities involving rich semantics into a joint Hamming space will inevitably produce false codes due to the intrinsic modality discrepancy and noise. We present a novel deep Robust Multilevel Semantic Hashing (RMSH) for more accurate multi-label cross-modal retrieval. It seeks to preserve fine-grained similarity among data with rich semantics, i.e., multi-label, while explicitly requiring distances between dissimilar points to be larger than a specific value for strong robustness. For this, we give an effective bound of this value based on an information coding-theoretic analysis, and the above goals are embodied in a margin-adaptive triplet loss. Furthermore, we introduce pseudo-codes via fusing multiple hash codes to explore seldom-seen semantics, alleviating the sparsity problem of similarity information. Experiments on three benchmarks show the validity of the derived bounds, and our method achieves state-of-the-art performance. This work is partially supported by National Science Foundation of China (61976115, 61732006, 61876087), Natural Science Foundation of Jiangsu Province (SBK2021043459), AI+ Project of NUAA (NZ2020012, 56XZA18009), research project (315025305), and China Scholarship Council (201906830057). 2023-01-04T08:31:38Z 2023-01-04T08:31:38Z 2021 Journal Article Song, G., Tan, X., Zhao, J. & Yang, M. (2021). Deep robust multilevel semantic hashing for multi-label cross-modal retrieval. Pattern Recognition, 120, 108084-.
https://dx.doi.org/10.1016/j.patcog.2021.108084 0031-3203 https://hdl.handle.net/10356/164098 10.1016/j.patcog.2021.108084 120 108084 en Pattern Recognition © 2021 Elsevier Ltd. All rights reserved.
institution
Nanyang Technological University
building
NTU Library
continent
Asia
country
Singapore
content_provider
NTU Library
collection
DR-NTU
language
English
topic
Engineering::Computer science and engineering Hashing Multi-Label
spellingShingle
Engineering::Computer science and engineering Hashing Multi-Label Song, Ge Tan, Xiaoyang Zhao, Jun Yang, Ming Deep robust multilevel semantic hashing for multi-label cross-modal retrieval
description
Hashing-based cross-modal retrieval has recently made significant progress, but straightforwardly embedding data from different modalities with rich semantics into a joint Hamming space inevitably produces false codes because of the intrinsic modality discrepancy and noise. We present a novel deep Robust Multilevel Semantic Hashing (RMSH) method for more accurate multi-label cross-modal retrieval. It preserves fine-grained similarity among data with rich semantics, i.e., multiple labels, while explicitly requiring the distances between dissimilar points to exceed a specific value for strong robustness. We give an effective bound on this value based on an information coding-theoretic analysis, and both goals are embodied in a margin-adaptive triplet loss. Furthermore, we introduce pseudo-codes, obtained by fusing multiple hash codes, to explore seldom-seen semantics and alleviate the sparsity of similarity information. Experiments on three benchmarks validate the derived bounds, and our method achieves state-of-the-art performance.
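The abstract describes two mechanisms: a triplet loss over Hamming distances whose margin adapts to label overlap, and pseudo-codes formed by fusing several hash codes. A minimal sketch of what such components could look like follows; it is not the paper's implementation, and the function names, the Jaccard-based margin rule, and the bitwise majority-vote fusion are all illustrative assumptions.

```python
def hamming(a, b):
    """Hamming distance between two equal-length +/-1 hash codes."""
    return sum(x != y for x, y in zip(a, b))

def triplet_loss(anchor, pos, neg, margin):
    """Hinge-style triplet loss: push d(anchor, neg) beyond d(anchor, pos) + margin."""
    return max(0, hamming(anchor, pos) - hamming(anchor, neg) + margin)

def adaptive_margin(labels_a, labels_n, bits):
    """Toy adaptive margin (assumption): grow with label disagreement
    (Jaccard distance) and cap at half the code length, a common
    coding-theoretic separation target."""
    union = len(labels_a | labels_n)
    jaccard_dist = 1 - len(labels_a & labels_n) / union if union else 0.0
    return round(jaccard_dist * bits / 2)

def fuse_codes(codes):
    """Toy pseudo-code (assumption): bitwise majority vote over several codes."""
    return [1 if sum(bits) >= 0 else -1 for bits in zip(*codes)]
```

For example, with 16-bit codes, two items sharing no labels would receive a margin of 8 bits (half the code length), while items with identical label sets receive a margin of 0.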
author2
School of Computer Science and Engineering
author_facet
School of Computer Science and Engineering Song, Ge Tan, Xiaoyang Zhao, Jun Yang, Ming
format
Article
author
Song, Ge Tan, Xiaoyang Zhao, Jun Yang, Ming
author_sort
Song, Ge
title
Deep robust multilevel semantic hashing for multi-label cross-modal retrieval
title_short
Deep robust multilevel semantic hashing for multi-label cross-modal retrieval
title_full
Deep robust multilevel semantic hashing for multi-label cross-modal retrieval
title_fullStr
Deep robust multilevel semantic hashing for multi-label cross-modal retrieval
title_full_unstemmed
Deep robust multilevel semantic hashing for multi-label cross-modal retrieval
title_sort
deep robust multilevel semantic hashing for multi-label cross-modal retrieval
publishDate
2023
url
https://hdl.handle.net/10356/164098
_version_
1754611298331525120