Moving towards centers: re-ranking with attention and memory for re-identification
Re-ranking utilizes contextual information to optimize the initial ranking list of person or vehicle re-identification (re-ID), which boosts retrieval performance as a post-processing step. This paper proposes a re-ranking network to predict the correlations between the probe and top-ranked neighbor samples. Specifically, all the feature embeddings of query and gallery images are expanded and enhanced by a linear combination of their neighbors, with the correlation predictions serving as discriminative combination weights. The combination process is equivalent to moving independent embeddings toward the identity centers, improving cluster compactness. For correlation prediction, we first aggregate the contextual information of the probe's k-nearest neighbors via a Transformer encoder. Then, we distill and refine the probe-related features into the Contextual Memory cell via an attention mechanism. Like humans, who retrieve images by not only considering the probe image but also memorizing the retrieved ones, the Contextual Memory produces multi-view descriptions for each instance. Finally, the neighbors are reconstructed with features fetched from the Contextual Memory, and a binary classifier predicts their correlations with the probe. Experiments on six widely used person and vehicle re-ID benchmarks demonstrate the effectiveness of the proposed method. In particular, our method surpasses state-of-the-art re-ranking approaches on large-scale datasets by a significant margin, i.e., with average improvements of 3.08% CMC@1 and 7.46% mAP on the VERI-Wild, MSMT17, and VehicleID datasets.
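To make the pipeline in the abstract concrete, the sketch below illustrates the two ideas it describes: a correlation predictor that reads a probe's k-nearest neighbors through a Transformer encoder, distills them into a set of Contextual-Memory-style slots with attention, reconstructs the neighbors from that memory, and scores each one with a binary classifier; and the "moving towards centers" step, in which every query/gallery embedding is replaced by a correlation-weighted combination of its top-k neighbors. This is a hedged illustration based only on the abstract, not the authors' implementation; the class and function names, layer counts, and memory size are assumptions made for the example.

```python
# Speculative sketch of the method described in the abstract; NOT the
# authors' released code. Names, layer counts, and memory size are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CorrelationPredictor(nn.Module):
    """Scores how likely each of a probe's k nearest neighbors shares its identity."""

    def __init__(self, dim=256, n_memory=8, n_heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)        # aggregate k-NN context
        self.memory = nn.Parameter(torch.randn(n_memory, dim))           # "Contextual Memory" slots (assumed size)
        self.distill = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.reconstruct = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.classifier = nn.Linear(dim, 1)                              # binary correlation head

    def forward(self, neighbors):                                        # (N, k, D)
        ctx = self.encoder(neighbors)                                    # contextual neighbor features
        mem = self.memory.unsqueeze(0).expand(ctx.size(0), -1, -1)       # (N, M, D)
        mem, _ = self.distill(mem, ctx, ctx)                             # distill probe-related cues into memory
        rec, _ = self.reconstruct(ctx, mem, mem)                         # rebuild neighbors from memory features
        return self.classifier(rec).squeeze(-1)                          # (N, k) correlation logits


def expand_embeddings(feats, k, predictor):
    """Move each embedding toward its identity center via a correlation-weighted
    combination of its top-k neighbors (the "moving towards centers" step)."""
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t()                                              # cosine similarities
    _, knn = sim.topk(k, dim=1)                                          # k nearest neighbors (incl. self)
    neighbors = feats[knn]                                               # (N, k, D)
    weights = torch.sigmoid(predictor(neighbors))                        # predicted correlations as weights
    weights = weights / weights.sum(dim=1, keepdim=True)
    refined = (weights.unsqueeze(-1) * neighbors).sum(dim=1)             # weighted aggregation toward the center
    return F.normalize(refined, dim=1)


# Toy usage: refine 100 random 256-d query+gallery embeddings, then re-rank
# by recomputing pairwise distances on the refined features.
feats = torch.randn(100, 256)
refined = expand_embeddings(feats, k=10, predictor=CorrelationPredictor(dim=256))
ranking = torch.cdist(refined, refined).argsort(dim=1)
```

In an actual re-ID setting the refined features would replace the originals, and the final ranking list is simply the distance-sorted gallery for each query, as in the last line of the sketch.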
Main Authors: | Zhou, Yunhao; Wang, Yi; Chau, Lap-Pui |
---|---|
Other Authors: | School of Electrical and Electronic Engineering |
Format: | Article |
Language: | English |
Published: | 2022 |
Subjects: | Engineering::Electrical and electronic engineering; Re-Identification; Transformer |
Online Access: | https://hdl.handle.net/10356/162961 |
Institution: | Nanyang Technological University |
id
sg-ntu-dr.10356-162961
record_format
dspace
spelling
sg-ntu-dr.10356-162961 2022-11-14T01:57:30Z Moving towards centers: re-ranking with attention and memory for re-identification Zhou, Yunhao; Wang, Yi; Chau, Lap-Pui School of Electrical and Electronic Engineering Engineering::Electrical and electronic engineering; Re-Identification; Transformer Re-ranking utilizes contextual information to optimize the initial ranking list of person or vehicle re-identification (re-ID), which boosts retrieval performance as a post-processing step. This paper proposes a re-ranking network to predict the correlations between the probe and top-ranked neighbor samples. Specifically, all the feature embeddings of query and gallery images are expanded and enhanced by a linear combination of their neighbors, with the correlation predictions serving as discriminative combination weights. The combination process is equivalent to moving independent embeddings toward the identity centers, improving cluster compactness. For correlation prediction, we first aggregate the contextual information of the probe's k-nearest neighbors via a Transformer encoder. Then, we distill and refine the probe-related features into the Contextual Memory cell via an attention mechanism. Like humans, who retrieve images by not only considering the probe image but also memorizing the retrieved ones, the Contextual Memory produces multi-view descriptions for each instance. Finally, the neighbors are reconstructed with features fetched from the Contextual Memory, and a binary classifier predicts their correlations with the probe. Experiments on six widely used person and vehicle re-ID benchmarks demonstrate the effectiveness of the proposed method. In particular, our method surpasses state-of-the-art re-ranking approaches on large-scale datasets by a significant margin, i.e., with average improvements of 3.08% CMC@1 and 7.46% mAP on the VERI-Wild, MSMT17, and VehicleID datasets. 2022-11-14T01:57:29Z 2022-11-14T01:57:29Z 2022 Journal Article Zhou, Y., Wang, Y. & Chau, L. (2022). Moving towards centers: re-ranking with attention and memory for re-identification. IEEE Transactions on Multimedia, 3161189-. https://dx.doi.org/10.1109/TMM.2022.3161189 1520-9210 https://hdl.handle.net/10356/162961 10.1109/TMM.2022.3161189 2-s2.0-85127055338 3161189 en IEEE Transactions on Multimedia © 2021 IEEE. All rights reserved.
institution
Nanyang Technological University
building
NTU Library
continent
Asia
country
Singapore
content_provider
NTU Library
collection
DR-NTU
language
English
topic
Engineering::Electrical and electronic engineering; Re-Identification; Transformer
description
Re-ranking utilizes contextual information to optimize the initial ranking list of person or vehicle re-identification (re-ID), which boosts retrieval performance as a post-processing step. This paper proposes a re-ranking network to predict the correlations between the probe and top-ranked neighbor samples. Specifically, all the feature embeddings of query and gallery images are expanded and enhanced by a linear combination of their neighbors, with the correlation predictions serving as discriminative combination weights. The combination process is equivalent to moving independent embeddings toward the identity centers, improving cluster compactness. For correlation prediction, we first aggregate the contextual information of the probe's k-nearest neighbors via a Transformer encoder. Then, we distill and refine the probe-related features into the Contextual Memory cell via an attention mechanism. Like humans, who retrieve images by not only considering the probe image but also memorizing the retrieved ones, the Contextual Memory produces multi-view descriptions for each instance. Finally, the neighbors are reconstructed with features fetched from the Contextual Memory, and a binary classifier predicts their correlations with the probe. Experiments on six widely used person and vehicle re-ID benchmarks demonstrate the effectiveness of the proposed method. In particular, our method surpasses state-of-the-art re-ranking approaches on large-scale datasets by a significant margin, i.e., with average improvements of 3.08% CMC@1 and 7.46% mAP on the VERI-Wild, MSMT17, and VehicleID datasets.
author2
School of Electrical and Electronic Engineering
format
Article
author
Zhou, Yunhao; Wang, Yi; Chau, Lap-Pui
author_sort
Zhou, Yunhao
title
Moving towards centers: re-ranking with attention and memory for re-identification
publishDate
2022
url
https://hdl.handle.net/10356/162961
_version_
1751548546241265664 |