CgT-GAN: CLIP-guided text GAN for image captioning
The large-scale vision-language pre-trained model, Contrastive Language-Image Pre-training (CLIP), has significantly improved image captioning in scenarios without human-annotated image-caption pairs. Recent advanced CLIP-based image-captioning methods without human annotations follow a text-only training...
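The core idea the abstract points at, using CLIP's image-text similarity as external guidance for a caption generator trained without paired annotations, can be sketched as a cosine-similarity reward. The sketch below is illustrative only: it uses tiny hand-made vectors as stand-ins for real CLIP embeddings (an assumption), not the actual CgT-GAN training procedure described in the paper.

```python
import math

def cosine_similarity(a, b):
    """CLIP-style score: cosine between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def clip_reward(image_emb, caption_embs):
    # Score each candidate caption embedding against the image embedding;
    # a CLIP-guided trainer would feed scores like these back to the
    # generator as a reward signal.
    return [cosine_similarity(image_emb, c) for c in caption_embs]

# Toy 4-d vectors standing in for real 512-d CLIP outputs (assumption).
image_emb = [0.5, 0.1, -0.3, 0.8]
caption_embs = [
    [0.5, 0.1, -0.3, 0.8],    # same direction: maximal reward
    [-0.5, -0.1, 0.3, -0.8],  # opposite direction: minimal reward
    [0.0, 1.0, 0.0, 0.0],     # nearly orthogonal: near-zero reward
]
rewards = clip_reward(image_emb, caption_embs)
```

The reward is bounded in [-1, 1], so the caption whose embedding best aligns with the image receives the highest score.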
Main Authors: YU, Jiarui; LI, Haoran; HAO, Yanbin; ZHU, Bin; XU, Tong; HE, Xiangnan
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Online Access: https://ink.library.smu.edu.sg/sis_research/9012
https://ink.library.smu.edu.sg/context/sis_research/article/10015/viewcontent/CgT_GAN.pdf
Institution: Singapore Management University
Similar Items
- Improving GAN training with probability ratio clipping and sample reweighting
  by: WU, Yue, et al.
  Published: (2020)
- PERSONALIZED VISUAL INFORMATION CAPTIONING
  by: WU SHUANG
  Published: (2023)
- Context-aware visual policy network for fine-grained image captioning
  by: Zha, Zheng-Jun, et al.
  Published: (2022)
- Clip-based similarity measure for hierarchical video retrieval
  by: PENG, Yuxin, et al.
  Published: (2004)
- Approach for video retrieval by video clip
  by: PENG, Y., et al.
  Published: (2003)