Causal attention for unbiased visual recognition

The attention module does not always help deep models learn causal features that are robust in any confounding context, e.g., a foreground object feature that is invariant to different backgrounds. This is because the confounders trick the attention into capturing spurious correlations that benefit the prediction when the training and testing data are IID (independent and identically distributed), but harm the prediction when the data are OOD (out-of-distribution). The only fundamental solution for learning causal attention is causal intervention, which requires additional annotations of the confounders, e.g., a “dog” model is learned within “grass+dog” and “road+dog” contexts respectively, so that the “grass” and “road” contexts no longer confound the “dog” recognition. However, such annotation is not only prohibitively expensive but also inherently problematic, as the confounders are elusive in nature. In this paper, we propose a causal attention module (CaaM) that self-annotates the confounders in an unsupervised fashion. In particular, multiple CaaMs can be stacked and integrated into conventional attention CNNs and self-attention Vision Transformers. In OOD settings, deep models with CaaM significantly outperform those without it; even in IID settings, attention localization is also improved by CaaM, showing great potential for applications that require robust visual saliency. Code is available at https://github.com/Wangt-CN/CaaM.
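
The abstract describes CaaM only at a high level. The snippet below is a minimal, hypothetical PyTorch illustration of the underlying idea: a spatial attention map that splits features into an attended “causal” part and a complementary “confounder” part that can then be disentangled during training. It is an assumption-based sketch for intuition only; the class name ComplementaryAttentionBlock, the toy backbone, and all sizes are invented here and do not reflect the authors' implementation, which is available at the GitHub link in the abstract.

# Minimal illustrative sketch (NOT the official implementation; see
# https://github.com/Wangt-CN/CaaM for the authors' code). It shows one
# plausible way a "causal" attention block could be structured: a learned
# spatial attention map A selects foreground ("causal") features, while
# its complement (1 - A) routes the remaining context ("confounder")
# features, so the two streams can be treated differently during training.
import torch
import torch.nn as nn

class ComplementaryAttentionBlock(nn.Module):
    """Hypothetical block: splits features into attended and complementary parts."""
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions producing a single-channel attention map in [0, 1]
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor):
        a = self.attn(x)               # (B, 1, H, W) attention map
        causal = x * a                 # features the classifier should rely on
        confounder = x * (1.0 - a)     # complementary "context" features
        return causal, confounder

# Stacking example on top of a small CNN backbone (all sizes are arbitrary).
backbone = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
)
block = ComplementaryAttentionBlock(64)

x = torch.randn(2, 3, 32, 32)                      # dummy image batch
causal_feats, confounder_feats = block(backbone(x))
print(causal_feats.shape, confounder_feats.shape)  # both (2, 64, 32, 32)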

Bibliographic Details
Main Authors: WANG, Tan; ZHOU, Chang; SUN, Qianru; ZHANG, Hanwang
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2021
Subjects: Graphics and Human Computer Interfaces
Online Access:https://ink.library.smu.edu.sg/sis_research/6228
https://ink.library.smu.edu.sg/context/sis_research/article/7231/viewcontent/Wang_Causal_Attention_for_Unbiased_Visual_Recognition_ICCV_2021_paper.pdf
Institution: Singapore Management University