Learning language to symbol and language to vision mapping for visual grounding

Visual Grounding (VG) is the task of locating the specific object in an image that semantically matches a given linguistic expression. Mapping between linguistic and visual content and understanding diverse linguistic expressions are the two main challenges of this task. In recent years, the performance of visual grounding has been consistently improved by deep visual features. While deep visual features contain rich information, they can also be noisy, biased and prone to over-fitting. In contrast, symbolic features are discrete, easy to map and usually less noisy. In this work, we propose a novel modular network that learns to match both an object's symbolic features and its conventional visual features against the linguistic information. In addition, a Residual Attention Parser is designed to ease the understanding of diverse expressions. Our model achieves competitive performance on three popular VG datasets.
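The abstract describes a design that scores each candidate object by matching both its discrete symbolic features (such as a predicted category label) and its deep visual features against the encoded expression. The sketch below only illustrates that general idea and is not the authors' implementation: the module name DualMatchingScorer, all feature dimensions, the cosine-similarity matching and the simple sum fusion of the two scores are assumptions made for this example.

```python
# Illustrative sketch (not the paper's architecture): score candidate regions
# by combining a language-to-symbol match over discrete labels with a
# language-to-vision match over deep region features.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualMatchingScorer(nn.Module):
    def __init__(self, lang_dim=256, vis_dim=1024, sym_vocab=1000, embed_dim=256):
        super().__init__()
        # Language-to-symbol branch: embed discrete object symbols
        # (e.g. class labels) into the same space as the expression encoding.
        self.sym_embed = nn.Embedding(sym_vocab, embed_dim)
        self.lang_to_sym = nn.Linear(lang_dim, embed_dim)
        # Language-to-vision branch: project deep region features and the
        # expression encoding into a shared space.
        self.vis_proj = nn.Linear(vis_dim, embed_dim)
        self.lang_to_vis = nn.Linear(lang_dim, embed_dim)

    def forward(self, lang_feat, region_feats, region_symbols):
        # lang_feat:      (B, lang_dim)       encoded referring expression
        # region_feats:   (B, N, vis_dim)     deep visual feature per candidate
        # region_symbols: (B, N) long tensor  symbolic label per candidate
        sym_q = self.lang_to_sym(lang_feat).unsqueeze(1)        # (B, 1, D)
        sym_k = self.sym_embed(region_symbols)                  # (B, N, D)
        sym_score = F.cosine_similarity(sym_q, sym_k, dim=-1)   # (B, N)

        vis_q = self.lang_to_vis(lang_feat).unsqueeze(1)        # (B, 1, D)
        vis_k = self.vis_proj(region_feats)                     # (B, N, D)
        vis_score = F.cosine_similarity(vis_q, vis_k, dim=-1)   # (B, N)

        # Fuse the two matching scores; the grounded object would be the
        # argmax over candidates.
        return (sym_score + vis_score).softmax(dim=-1)


if __name__ == "__main__":
    scorer = DualMatchingScorer()
    lang = torch.randn(2, 256)
    regions = torch.randn(2, 5, 1024)
    symbols = torch.randint(0, 1000, (2, 5))
    print(scorer(lang, regions, symbols).shape)  # torch.Size([2, 5])
```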


Bibliographic Details
Main Authors: He, Su, Yang, Xiaofeng, Lin, Guosheng
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2022
Subjects: Engineering::Computer science and engineering; Cross Modality; Visual Grounding
Online Access: https://hdl.handle.net/10356/161552
Institution: Nanyang Technological University
Published in: Image and Vision Computing, 122, 104451 (2022)
DOI: 10.1016/j.imavis.2022.104451
ISSN: 0262-8856
Citation: He, S., Yang, X. & Lin, G. (2022). Learning language to symbol and language to vision mapping for visual grounding. Image and Vision Computing, 122, 104451. https://dx.doi.org/10.1016/j.imavis.2022.104451
Collection: DR-NTU, NTU Library
Version: Submitted/Accepted version
Funding: This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-RP-2018-003), and the MOE AcRF Tier-1 research grants RG28/18 (S), RG22/19 (S) and RG95/20.
Rights: © 2022 Elsevier B.V. All rights reserved.