Learning language to symbol and language to vision mapping for visual grounding

Bibliographic Details
Main Authors: He, Su, Yang, Xiaofeng, Lin, Guosheng
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2022
Online Access: https://hdl.handle.net/10356/161552
Description
Summary: Visual Grounding (VG) is the task of locating a specific object in an image that semantically matches a given linguistic expression. Mapping between linguistic and visual content and understanding diverse linguistic expressions are the two main challenges of this task. In recent years, visual grounding performance has been consistently improved by deep visual features. While deep visual features contain rich information, they can also be noisy, biased, and prone to over-fitting. In contrast, symbolic features are discrete, easy to map, and usually less noisy. In this work, we propose a novel modular network that learns to match both an object's symbolic features and its conventional visual features with the linguistic information. Moreover, a Residual Attention Parser is designed to alleviate the difficulty of understanding diverse expressions. Our model achieves competitive performance on three popular VG datasets.
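
The abstract describes a two-branch design: a language-to-symbol matching module over discrete object attributes and a language-to-vision matching module over conventional deep visual features, whose scores are combined to rank candidate objects. Below is a minimal sketch of that idea, not the authors' implementation; it assumes PyTorch, and all module names, dimensions, and the score-fusion rule (a learned linear combination) are illustrative assumptions.

# Minimal sketch (not the authors' code): combine language-to-symbol matching
# with language-to-vision matching to score candidate objects.
# All names, dimensions, and the fusion rule are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SymbolVisionGrounder(nn.Module):
    def __init__(self, lang_dim=256, vis_dim=1024, num_symbols=1000, hid=256):
        super().__init__()
        # Language-to-symbol branch: discrete attributes (e.g. category, colour)
        # are embedded and compared with the expression embedding.
        self.symbol_embed = nn.Embedding(num_symbols, hid)
        self.lang_to_sym = nn.Linear(lang_dim, hid)
        # Language-to-vision branch: deep visual features are projected into
        # the same space as the expression embedding.
        self.vis_proj = nn.Linear(vis_dim, hid)
        self.lang_to_vis = nn.Linear(lang_dim, hid)
        # Learned weighting of the two matching scores (assumed fusion rule).
        self.fuse = nn.Linear(2, 1)

    def forward(self, lang_feat, sym_ids, vis_feat):
        # lang_feat: (B, lang_dim) expression embedding
        # sym_ids:   (B, N)        symbolic attribute id per candidate object
        # vis_feat:  (B, N, vis_dim) visual feature per candidate object
        q_sym = F.normalize(self.lang_to_sym(lang_feat), dim=-1)   # (B, hid)
        k_sym = F.normalize(self.symbol_embed(sym_ids), dim=-1)    # (B, N, hid)
        sym_score = torch.einsum('bh,bnh->bn', q_sym, k_sym)       # (B, N)

        q_vis = F.normalize(self.lang_to_vis(lang_feat), dim=-1)   # (B, hid)
        k_vis = F.normalize(self.vis_proj(vis_feat), dim=-1)       # (B, N, hid)
        vis_score = torch.einsum('bh,bnh->bn', q_vis, k_vis)       # (B, N)

        # Fuse the two matching scores into one grounding score per object.
        scores = self.fuse(torch.stack([sym_score, vis_score], dim=-1)).squeeze(-1)
        return scores  # argmax over N picks the grounded object

Under these assumptions, calling forward() with lang_feat of shape (2, 256), sym_ids of shape (2, 8), and vis_feat of shape (2, 8, 1024) returns a (2, 8) score matrix whose row-wise argmax selects the grounded candidate object.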