Learning language to symbol and language to vision mapping for visual grounding


Bibliographic Details
Main Authors: He, Su; Yang, Xiaofeng; Lin, Guosheng
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2022
Online Access: https://hdl.handle.net/10356/161552
Institution: Nanyang Technological University
Description
Abstract: Visual Grounding (VG) is the task of locating the specific object in an image that semantically matches a given linguistic expression. Mapping linguistic content to visual content and understanding diverse linguistic expressions are the two main challenges of this task. The performance of visual grounding has been consistently improved by deep visual features in recent years. While deep visual features contain rich information, they can also be noisy, biased, and prone to over-fitting. In contrast, symbolic features are discrete, easy to map, and usually less noisy. In this work, we propose a novel modular network that learns to match both an object's symbolic features and its conventional visual features against the linguistic information. Moreover, we design a Residual Attention Parser to alleviate the difficulty of understanding diverse expressions. Our model achieves competitive performance on three popular VG datasets.
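The core idea described in the abstract, scoring each candidate object by matching both its symbolic features and its visual features against the expression, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the embeddings, the `alpha` weighting, and the cosine-similarity matching are all hypothetical simplifications of the proposed modular network.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def grounding_score(sym_emb: np.ndarray, vis_emb: np.ndarray,
                    lang_emb: np.ndarray, alpha: float = 0.5) -> float:
    """Combine a symbolic-feature match (e.g., an embedded class label,
    discrete and less noisy) with a visual-feature match (rich but
    potentially noisy), both scored against the language embedding.
    `alpha` is a hypothetical mixing weight."""
    return alpha * cosine(sym_emb, lang_emb) + (1 - alpha) * cosine(vis_emb, lang_emb)

# Toy example: pick the candidate object that best matches the expression.
rng = np.random.default_rng(0)
lang = rng.standard_normal(8)                     # expression embedding
candidates = [(rng.standard_normal(8),            # symbolic embedding
               rng.standard_normal(8))            # visual embedding
              for _ in range(3)]
scores = [grounding_score(s, v, lang) for s, v in candidates]
best = int(np.argmax(scores))                     # index of grounded object
```

In the paper's framing, the symbolic branch supplies discrete, easily mapped cues while the visual branch supplies fine-grained appearance information; combining the two scores is what lets the model hedge against noisy or over-fitted visual features.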