Enhancing visual grounding in vision-language pre-training with position-guided text prompts
Vision-Language Pre-Training (VLP) has demonstrated remarkable potential in aligning image and text pairs, paving the way for a wide range of cross-modal learning tasks. Nevertheless, we have observed that VLP models often fall short in terms of visual grounding and localization capabilities, which...
Main Authors: WANG, Alex Jinpeng; ZHOU, Pan; SHOU, Mike Zheng; YAN, Shuicheng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Subjects: Fill-in-the-blank; position-guided text prompt; vision-language pre-training; visual grounding; Artificial Intelligence and Robotics; Numerical Analysis and Scientific Computing; Programming Languages and Compilers
Online Access: https://ink.library.smu.edu.sg/sis_research/8742 https://ink.library.smu.edu.sg/context/sis_research/article/9745/viewcontent/VisualGroundingVL_av.pdf
Institution: Singapore Management University
Language: English
id: sg-smu-ink.sis_research-9745
record_format: dspace
spelling:
sg-smu-ink.sis_research-9745 2024-05-03T07:50:11Z Enhancing visual grounding in vision-language pre-training with position-guided text prompts WANG, Alex Jinpeng ZHOU, Pan SHOU, Mike Zheng YAN, Shuicheng Vision-Language Pre-Training (VLP) has demonstrated remarkable potential in aligning image and text pairs, paving the way for a wide range of cross-modal learning tasks. Nevertheless, we have observed that VLP models often fall short in terms of visual grounding and localization capabilities, which are crucial for many downstream tasks, such as visual reasoning. In response, we introduce a novel Position-guided Text Prompt (PTP) paradigm to bolster the visual grounding abilities of cross-modal models trained with VLP. In the VLP phase, PTP divides an image into N × N blocks and employs a widely used object detector to identify objects within each block. PTP then reframes the visual grounding task as a fill-in-the-blank problem, encouraging the model to predict objects in given blocks or regress the blocks of a given object, exemplified by filling "[P]" or "[O]" in a PTP sentence such as "The block [P] has a [O]." This strategy enhances the visual grounding capabilities of VLP models, enabling them to better tackle various downstream tasks. Additionally, we integrate the second-order relationships between objects to further enhance the visual grounding capabilities of our proposed PTP paradigm. Incorporating PTP into several state-of-the-art VLP frameworks leads to consistently significant improvements across representative cross-modal learning architectures and multiple benchmarks, such as zero-shot Flickr30K retrieval (+5.6 in average recall@1) for the ViLT baseline and COCO captioning (+5.5 in CIDEr) for the state-of-the-art BLIP baseline. Furthermore, PTP attains results comparable to object-detector-based methods with faster inference speed, as it discards its object detector during inference, unlike other approaches.
2024-05-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/8742 info:doi/10.1109/TPAMI.2023.3343736 https://ink.library.smu.edu.sg/context/sis_research/article/9745/viewcontent/VisualGroundingVL_av.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Fill-in-the-blank position-guided text prompt vision-language pre-training visual grounding Artificial Intelligence and Robotics Numerical Analysis and Scientific Computing Programming Languages and Compilers
institution: Singapore Management University
building: SMU Libraries
continent: Asia
country: Singapore
content_provider: SMU Libraries
collection: InK@SMU
language: English
topic: Fill-in-the-blank position-guided text prompt vision-language pre-training visual grounding Artificial Intelligence and Robotics Numerical Analysis and Scientific Computing Programming Languages and Compilers
spellingShingle: Fill-in-the-blank position-guided text prompt vision-language pre-training visual grounding Artificial Intelligence and Robotics Numerical Analysis and Scientific Computing Programming Languages and Compilers WANG, Alex Jinpeng ZHOU, Pan SHOU, Mike Zheng YAN, Shuicheng Enhancing visual grounding in vision-language pre-training with position-guided text prompts
description: Vision-Language Pre-Training (VLP) has demonstrated remarkable potential in aligning image and text pairs, paving the way for a wide range of cross-modal learning tasks. Nevertheless, we have observed that VLP models often fall short in terms of visual grounding and localization capabilities, which are crucial for many downstream tasks, such as visual reasoning. In response, we introduce a novel Position-guided Text Prompt (PTP) paradigm to bolster the visual grounding abilities of cross-modal models trained with VLP. In the VLP phase, PTP divides an image into N × N blocks and employs a widely used object detector to identify objects within each block. PTP then reframes the visual grounding task as a fill-in-the-blank problem, encouraging the model to predict objects in given blocks or regress the blocks of a given object, exemplified by filling "[P]" or "[O]" in a PTP sentence such as "The block [P] has a [O]." This strategy enhances the visual grounding capabilities of VLP models, enabling them to better tackle various downstream tasks. Additionally, we integrate the second-order relationships between objects to further enhance the visual grounding capabilities of our proposed PTP paradigm. Incorporating PTP into several state-of-the-art VLP frameworks leads to consistently significant improvements across representative cross-modal learning architectures and multiple benchmarks, such as zero-shot Flickr30K retrieval (+5.6 in average recall@1) for the ViLT baseline and COCO captioning (+5.5 in CIDEr) for the state-of-the-art BLIP baseline. Furthermore, PTP attains results comparable to object-detector-based methods with faster inference speed, as it discards its object detector during inference, unlike other approaches.
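The prompt construction described in the abstract can be sketched as follows. This is a minimal illustration only, not the authors' code: the helper names, the 3 × 3 grid choice, and the detection input format are all assumptions; the paper specifies only that the image is split into N × N blocks and that detector outputs are turned into sentences like "The block [P] has a [O]."

```python
def block_index(box, image_w, image_h, n=3):
    """Map a bounding box (x1, y1, x2, y2) to one of the n x n blocks,
    using the block that contains the box centre (row-major numbering)."""
    cx = (box[0] + box[2]) / 2
    cy = (box[1] + box[3]) / 2
    col = min(int(cx / image_w * n), n - 1)
    row = min(int(cy / image_h * n), n - 1)
    return row * n + col  # blocks numbered 0 .. n*n - 1

def ptp_sentences(detections, image_w, image_h, n=3):
    """Turn (class_name, box) detections into filled-in PTP sentences
    of the template 'The block [P] has a [O].'"""
    sentences = []
    for class_name, box in detections:
        p = block_index(box, image_w, image_h, n)
        sentences.append(f"The block {p} has a {class_name}.")
    return sentences

# Example: one detected dog in the centre of a 300 x 300 image
print(ptp_sentences([("dog", (120, 120, 180, 180))], 300, 300))
# -> ['The block 4 has a dog.']
```

During pre-training, either the block index or the object word in such a sentence would be masked, turning grounding into the fill-in-the-blank objective the abstract describes.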
format: text
author: WANG, Alex Jinpeng; ZHOU, Pan; SHOU, Mike Zheng; YAN, Shuicheng
author_facet: WANG, Alex Jinpeng; ZHOU, Pan; SHOU, Mike Zheng; YAN, Shuicheng
author_sort: WANG, Alex Jinpeng
title: Enhancing visual grounding in vision-language pre-training with position-guided text prompts
title_short: Enhancing visual grounding in vision-language pre-training with position-guided text prompts
title_full: Enhancing visual grounding in vision-language pre-training with position-guided text prompts
title_fullStr: Enhancing visual grounding in vision-language pre-training with position-guided text prompts
title_full_unstemmed: Enhancing visual grounding in vision-language pre-training with position-guided text prompts
title_sort: enhancing visual grounding in vision-language pre-training with position-guided text prompts
publisher: Institutional Knowledge at Singapore Management University
publishDate: 2024
url: https://ink.library.smu.edu.sg/sis_research/8742 https://ink.library.smu.edu.sg/context/sis_research/article/9745/viewcontent/VisualGroundingVL_av.pdf
_version_: 1814047499231428608