Deconfounded visual grounding
We focus on the confounding bias between language and location in the visual grounding pipeline, where we find that this bias is the major bottleneck of visual reasoning. For example, the grounding process often degenerates into a trivial language-location association without visual reasoning, e.g., grounding any language query containing "sheep" to the near-central regions, because most queries about sheep have ground-truth locations at the image center. First, we frame the visual grounding pipeline as a causal graph, which shows the causalities among the image, the query, the target location, and the underlying confounder. Through the causal graph, we know how to break the grounding bottleneck: deconfounded visual grounding. Second, to tackle the challenge that the confounder is unobserved in general, we propose a confounder-agnostic approach, called Referring Expression Deconfounder (RED), to remove the confounding bias. Third, we implement RED as a simple language attention, which can be applied in any grounding method.
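The "simple language attention" mentioned in the abstract could, in spirit, look like the following minimal sketch: attention weights are computed over the words of a referring expression and used to pool a language representation. All names, shapes, and the pooling choice here are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of a language-attention module in the spirit of RED.
# Shapes, names, and the pooled query vector are assumptions for illustration.
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def language_attention(word_embs, query_vec):
    """Attend over the words of a referring expression.

    word_embs: (num_words, dim) embeddings of the query words
    query_vec: (dim,) pooled sentence vector used as the attention key
    Returns a (dim,) attended language representation.
    """
    scores = word_embs @ query_vec   # (num_words,) similarity scores
    weights = softmax(scores)        # attention distribution over words
    return weights @ word_embs       # weighted sum of word embeddings

rng = np.random.default_rng(0)
words = rng.normal(size=(5, 8))      # a 5-word query with 8-d embeddings
sentence = words.mean(axis=0)        # naive pooled query vector (assumption)
attended = language_attention(words, sentence)
print(attended.shape)                # prints (8,)
```

Because the module only re-weights the language features, it can be dropped in front of any existing grounding head, which matches the abstract's claim that RED "can be applied in any grounding method."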
Main Authors: | HUANG, Jianqiang; QIN, Yu; QI, Jiaxin; SUN, Qianru; ZHANG, Hanwang |
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2022 |
Subjects: | Computer Vision (CV); Artificial Intelligence and Robotics; Graphics and Human Computer Interfaces |
Online Access: | https://ink.library.smu.edu.sg/sis_research/7484 https://ink.library.smu.edu.sg/context/sis_research/article/8487/viewcontent/19983_Article_Text_23996_1_2_20220628.pdf |
Institution: | Singapore Management University |
id | sg-smu-ink.sis_research-8487 |
record_format | dspace |
spelling | 2022-11-03T06:37:55Z; published 2022-03-01; text; application/pdf; https://ink.library.smu.edu.sg/sis_research/7484; info:doi/10.1609/aaai.v36i1.19983; https://ink.library.smu.edu.sg/context/sis_research/article/8487/viewcontent/19983_Article_Text_23996_1_2_20220628.pdf; http://creativecommons.org/licenses/by-nc-nd/4.0/; Research Collection School Of Computing and Information Systems; eng; Institutional Knowledge at Singapore Management University |
institution | Singapore Management University |
building | SMU Libraries |
continent | Asia |
country | Singapore |
content_provider | SMU Libraries |
collection | InK@SMU |
language | English |
topic | Computer Vision (CV); Artificial Intelligence and Robotics; Graphics and Human Computer Interfaces |
description | We focus on the confounding bias between language and location in the visual grounding pipeline, where we find that this bias is the major bottleneck of visual reasoning. For example, the grounding process often degenerates into a trivial language-location association without visual reasoning, e.g., grounding any language query containing "sheep" to the near-central regions, because most queries about sheep have ground-truth locations at the image center. First, we frame the visual grounding pipeline as a causal graph, which shows the causalities among the image, the query, the target location, and the underlying confounder. Through the causal graph, we know how to break the grounding bottleneck: deconfounded visual grounding. Second, to tackle the challenge that the confounder is unobserved in general, we propose a confounder-agnostic approach, called Referring Expression Deconfounder (RED), to remove the confounding bias. Third, we implement RED as a simple language attention, which can be applied in any grounding method. |
format | text |
author | HUANG, Jianqiang; QIN, Yu; QI, Jiaxin; SUN, Qianru; ZHANG, Hanwang |
author_sort | HUANG, Jianqiang |
title | Deconfounded visual grounding |
publisher | Institutional Knowledge at Singapore Management University |
publishDate | 2022 |
url | https://ink.library.smu.edu.sg/sis_research/7484; https://ink.library.smu.edu.sg/context/sis_research/article/8487/viewcontent/19983_Article_Text_23996_1_2_20220628.pdf |
_version_ | 1770576355334815744 |