Position-guided text prompt for vision-language pre-training
Vision-Language Pre-Training (VLP) has shown promising capabilities to align image and text pairs, facilitating a broad variety of cross-modal learning tasks. However, we observe that VLP models often lack the visual grounding/localization capability, which is critical for many downstream tasks such...
Main Authors: WANG, Alex Jinpeng; ZHOU, Pan; SHOU, Mike Zheng; YAN, Shuicheng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Online Access: https://ink.library.smu.edu.sg/sis_research/9021
https://ink.library.smu.edu.sg/context/sis_research/article/10024/viewcontent/2023_CVPR_PTP.pdf
Institution: Singapore Management University
Similar Items
- Enhancing visual grounding in vision-language pre-training with position-guided text prompts
  by: WANG, Alex Jinpeng, et al.
  Published: (2024)
- LPT: Long-tailed prompt tuning for image classification
  by: DONG, Bowen, et al.
  Published: (2023)
- Let’s think outside the box: Exploring leap-of-thought in large language models with multimodal humor generation
  by: ZHONG, Shanshan, et al.
  Published: (2024)
- CgT-GAN: CLIP-guided text GAN for image captioning
  by: YU, Jiarui, et al.
  Published: (2023)
- MultiGPrompt for multi-task pre-training and prompting on graphs
  by: YU, Xingtong, et al.
  Published: (2024)