Grounding referring expressions in images by variational context

We focus on grounding (i.e., localizing or linking) referring expressions in images, e.g., 'largest elephant standing behind baby elephant'. This is a general yet challenging vision-language task since it requires not only the localization of objects but also the multimodal comprehension of context: visual attributes (e.g., 'largest', 'baby') and relationships (e.g., 'behind') that help distinguish the referent from other objects, especially those of the same category. Due to the exponential complexity of modeling the context associated with multiple image regions, existing work oversimplifies this task to pairwise region modeling by multiple instance learning. In this paper, we propose a variational Bayesian method, called Variational Context, to solve the problem of complex context modeling in referring expression grounding. Our model exploits the reciprocal relation between the referent and context, i.e., either of them influences estimation of the posterior distribution of the other, thereby greatly reducing the search space of context. We also extend the model to the unsupervised setting, where no annotation for the referent is available. Extensive experiments on various benchmarks show consistent improvement over state-of-the-art methods in both supervised and unsupervised settings. The code is available at https://github.com/yuleiniu/vc/.
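To make the abstract's core idea concrete, the sketch below illustrates one way a grounding score can be marginalized over a variational context posterior, i.e., score(x | L) approximated as a sum over candidate context regions z of q(z | L) * p(x | z, L). This is a minimal illustration written for this record, not the authors' released model (their implementation is at https://github.com/yuleiniu/vc/); the module names, feature dimensions, and simple MLP scorers are all assumptions.

# Minimal sketch of the variational-context idea described in the abstract.
# Instead of scoring all referent/context region pairs exhaustively, a coarse
# context posterior q(z | L) over regions is estimated from the expression L,
# and the referent score p(x | z, L) is computed against the expected context.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalContextSketch(nn.Module):
    def __init__(self, region_dim=2048, lang_dim=512, hidden=512):
        super().__init__()
        # q(z | L): scores each region as context for the expression (assumed MLP).
        self.context_scorer = nn.Sequential(
            nn.Linear(region_dim + lang_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        # p(x | z, L): scores a referent region given a (soft) context feature (assumed MLP).
        self.referent_scorer = nn.Sequential(
            nn.Linear(2 * region_dim + lang_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, regions, lang):
        # regions: (N, region_dim) features of N candidate regions
        # lang:    (lang_dim,) feature of the referring expression
        N = regions.size(0)
        lang_rep = lang.unsqueeze(0).expand(N, -1)
        # Variational context posterior q(z | L) over the N regions.
        q_z = F.softmax(
            self.context_scorer(torch.cat([regions, lang_rep], dim=-1)).squeeze(-1), dim=0)
        # Soft context feature: expectation of region features under q(z | L).
        ctx = (q_z.unsqueeze(-1) * regions).sum(dim=0, keepdim=True).expand(N, -1)
        # Referent scores conditioned on the expected context and the expression.
        logits = self.referent_scorer(torch.cat([regions, ctx, lang_rep], dim=-1)).squeeze(-1)
        return F.softmax(logits, dim=0)  # distribution over candidate referent regions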

Bibliographic Details
Main Authors: Zhang, Hanwang, Niu, Yulei, Chang, Shih-Fu
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language: English
Published: 2018 (conference paper; deposited in DR-NTU 2020)
Subjects: Engineering::Computer science and engineering; Grounding; Context Modeling
Online Access:https://hdl.handle.net/10356/143054
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-143054
Conference: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Citation: Zhang, H., Niu, Y., & Chang, S.-F. (2018). Grounding referring expressions in images by variational context. Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4158-4166. doi:10.1109/cvpr.2018.00437
DOI: 10.1109/cvpr.2018.00437
ISBN: 978-1-5386-6421-6
Scopus ID: 2-s2.0-85062863751
Pages: 4158-4166
Version: Accepted version
Deposited: 2020-07-23
Collection: DR-NTU, NTU Library, Nanyang Technological University, Singapore
File format: application/pdf
Rights: © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at https://doi.org/10.1109/cvpr.2018.00437