Enhancing visual grounding in vision-language pre-training with position-guided text prompts
Vision-Language Pre-Training (VLP) has demonstrated remarkable potential in aligning image and text pairs, paving the way for a wide range of cross-modal learning tasks. Nevertheless, we have observed that VLP models often fall short in terms of visual grounding and localization capabilities, which...
Main Authors: WANG, Alex Jinpeng; ZHOU, Pan; SHOU, Mike Zheng; YAN, Shuicheng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Online Access: https://ink.library.smu.edu.sg/sis_research/8742
https://ink.library.smu.edu.sg/context/sis_research/article/9745/viewcontent/VisualGroundingVL_av.pdf
Institution: Singapore Management University
Similar Items
- Position-guided text prompt for vision-language pre-training
  by: WANG, Alex Jinpeng, et al.
  Published: (2023)
- Augmenting low-resource text classification with graph-grounded pre-training and prompting
  by: WEN, Zhihao, et al.
  Published: (2023)
- Prompt tuning on Graph-Augmented Low-Resource text classification
  by: WEN, Zhihao, et al.
  Published: (2024)
- Voucher abuse detection with prompt-based fine-tuning on graph neural networks
  by: WEN, Zhihao, et al.
  Published: (2023)
- ClusterPrompt: Cluster semantic enhanced prompt learning for new intent discovery
  by: LIANG, Jinggui, et al.
  Published: (2023)