Paired cross-modal data augmentation for fine-grained image-to-text retrieval
Format: Conference or Workshop Item
Language: English
Published: 2023
Online Access: https://hdl.handle.net/10356/164145
Institution: Nanyang Technological University
Summary: This paper investigates the open research problem of generating text-image pairs to improve the training of the fine-grained image-to-text cross-modal retrieval task, and proposes a novel framework for paired data augmentation that uncovers the hidden semantic information of the StyleGAN2 model. Specifically, we first train a StyleGAN2 model on the given dataset. We then project the real images back into the latent space of StyleGAN2 to obtain their latent codes. To make the generated images manipulable, we further introduce a latent space alignment module that learns the alignment between StyleGAN2 latent codes and the corresponding textual caption features. During online paired data augmentation, we first generate augmented text through random token replacement, then pass the augmented text into the latent space alignment module to output latent codes, which are finally fed to StyleGAN2 to generate the augmented images. We evaluate the efficacy of our data augmentation approach on two public cross-modal retrieval datasets; the promising experimental results demonstrate that the augmented text-image pairs can be trained together with the original data to boost image-to-text cross-modal retrieval performance.
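
The summary's online augmentation loop can be sketched in a few lines of PyTorch-style Python. This is a minimal illustrative sketch, not the authors' implementation: the names (LatentSpaceAlignment, random_token_replace, augment_pair), the dimensions, and the replacement probability are all assumptions, and the text encoder and pretrained StyleGAN2 generator are treated as opaque callables.

```python
import random
import torch
import torch.nn as nn

class LatentSpaceAlignment(nn.Module):
    """Hypothetical alignment module: maps caption features to
    StyleGAN2 W+ latent codes. Dimensions are illustrative only."""
    def __init__(self, text_dim=512, latent_dim=512, num_ws=14):
        super().__init__()
        self.num_ws = num_ws
        self.latent_dim = latent_dim
        self.mlp = nn.Sequential(
            nn.Linear(text_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, latent_dim * num_ws),
        )

    def forward(self, text_feat):
        # (batch, text_dim) -> (batch, num_ws, latent_dim)
        w = self.mlp(text_feat)
        return w.view(-1, self.num_ws, self.latent_dim)

def random_token_replace(tokens, vocab, p=0.15):
    # Replace each caption token with a random vocabulary token
    # with probability p (p is an assumed value).
    return [random.choice(vocab) if random.random() < p else t
            for t in tokens]

def augment_pair(caption_tokens, vocab, text_encoder, align, generator):
    # 1. Augment the caption via random token replacement.
    aug_tokens = random_token_replace(caption_tokens, vocab)
    # 2. Encode the augmented caption and map it to latent codes
    #    with the (trained) alignment module.
    text_feat = text_encoder(aug_tokens)
    w_codes = align(text_feat)
    # 3. Feed the latent codes to the pretrained, frozen StyleGAN2
    #    generator to synthesize the matching augmented image.
    with torch.no_grad():
        aug_image = generator(w_codes)
    return aug_tokens, aug_image
```

The returned (aug_tokens, aug_image) pair would then be mixed into the training batches alongside the original text-image pairs, which is the training setup the summary describes.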