Counterfactual samples synthesizing and training for robust visual question answering

Bibliographic Details
Main Authors: Chen, Long; Zheng, Yuhang; Niu, Yulei; Zhang, Hanwang; Xiao, Jun
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Subjects: Engineering::Computer science and engineering; Contrastive Learning; Counterfactual Thinking
Online Access:https://hdl.handle.net/10356/171830
Institution: Nanyang Technological University
id sg-ntu-dr.10356-171830
record_format dspace
date_available 2023-11-09T04:11:18Z
type Journal Article
citation Chen, L., Zheng, Y., Niu, Y., Zhang, H. & Xiao, J. (2023). Counterfactual samples synthesizing and training for robust visual question answering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(11), 13218-13234. https://dx.doi.org/10.1109/TPAMI.2023.3290012
journal IEEE Transactions on Pattern Analysis and Machine Intelligence
volume 45
issue 11
pages 13218-13234
issn 0162-8828
doi 10.1109/TPAMI.2023.3290012
pmid 37368813
scopus 2-s2.0-85163504920
funding The work of Long Chen was supported by HKUST Special Support for Young Faculty under Grant F0927. This work was supported in part by the National Key Research and Development Project of China under Grant 2021ZD0110700, in part by the National Natural Science Foundation of China under Grants U19B2043 and 61976185, and in part by the Fundamental Research Funds for the Central Universities under Grant 226-2022-00051.
rights © 2023 IEEE. All rights reserved.
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Computer science and engineering
Contrastive Learning
Counterfactual Thinking
description Today's VQA models still tend to capture superficial linguistic correlations in the training set and fail to generalize to test sets with different QA distributions. To reduce these language biases, recent VQA works introduce an auxiliary question-only model to regularize the training of the targeted VQA model, and achieve dominant performance on diagnostic benchmarks for out-of-distribution testing. However, due to their complex model design, these ensemble-based methods are unable to equip themselves with two indispensable characteristics of an ideal VQA model: 1) visual-explainable: the model should rely on the right visual regions when making decisions; 2) question-sensitive: the model should be sensitive to linguistic variations in questions. To this end, we propose a novel model-agnostic Counterfactual Samples Synthesizing and Training (CSST) strategy. After training with CSST, VQA models are forced to focus on all critical objects and words, which significantly improves both visual-explainable and question-sensitive abilities. Specifically, CSST consists of two parts: Counterfactual Samples Synthesizing (CSS) and Counterfactual Samples Training (CST). CSS generates counterfactual samples by carefully masking critical objects in images or words in questions and assigning pseudo ground-truth answers. CST not only trains the VQA models with both complementary samples to predict their respective ground-truth answers, but also urges the VQA models to further distinguish the original samples from superficially similar counterfactual ones. To facilitate CST training, we propose two variants of supervised contrastive loss for VQA and design an effective positive and negative sample selection mechanism based on CSS. Extensive experiments have shown the effectiveness of CSST. In particular, by building on top of the LMH+SAR model (Clark et al. 2019; Si et al. 2021), we achieve record-breaking performance on all out-of-distribution benchmarks (e.g., VQA-CP v2, VQA-CP v1, and GQA-OOD).
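
The description above summarizes how CSST works at a high level. Purely as an illustration of those two ideas, the following Python (PyTorch) sketch shows one possible way to mask the most critical object features of an image (the visual side of CSS) and one simple form of supervised contrastive loss that pushes an original sample away from its counterfactual variants (the spirit of CST). The importance scores, the top-k zeroing policy, the positive/negative selection, and every function name below are assumptions made for this sketch; it is not the authors' released implementation, and the pseudo ground-truth answer assignment described in the paper is omitted.

# Minimal illustrative sketch under the assumptions stated above; not the authors' code.
import torch
import torch.nn.functional as F


def synthesize_counterfactual_visual(obj_feats, importance, k=3):
    # CSS, visual side (simplified): zero out the k objects judged most critical
    # for the current question-answer pair. `importance` is assumed to be a
    # precomputed per-object contribution score (e.g. a gradient-based attribution).
    critical = importance.topk(k).indices
    cf_feats = obj_feats.clone()
    cf_feats[critical] = 0.0  # remove the critical visual evidence
    return cf_feats


def supervised_contrastive_loss(anchor, positives, negatives, tau=0.1):
    # One simple supervised contrastive loss over sample embeddings: the original
    # sample (anchor) is pulled toward its positives and pushed away from the
    # superficially similar counterfactual samples used as negatives.
    anchor = F.normalize(anchor, dim=-1)          # [dim]
    pos = F.normalize(positives, dim=-1)          # [P, dim]
    neg = F.normalize(negatives, dim=-1)          # [N, dim]
    pos_sim = torch.exp(pos @ anchor / tau)       # [P]
    neg_sim = torch.exp(neg @ anchor / tau)       # [N]
    denom = pos_sim.sum() + neg_sim.sum()
    return -torch.log(pos_sim / denom).mean()


# Toy usage with random tensors, only to show the shapes involved.
obj_feats = torch.randn(36, 2048)                 # 36 detected regions, 2048-d features
importance = torch.rand(36)                       # assumed precomputed criticality scores
cf_feats = synthesize_counterfactual_visual(obj_feats, importance, k=3)

anchor = torch.randn(512)                         # embedding of the original sample
positives = torch.randn(2, 512)                   # e.g. samples sharing the ground-truth answer
negatives = torch.randn(4, 512)                   # e.g. its counterfactual variants
loss = supervised_contrastive_loss(anchor, positives, negatives)
print(float(loss))
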
author2 School of Computer Science and Engineering
format Article
author Chen, Long
Zheng, Yuhang
Niu, Yulei
Zhang, Hanwang
Xiao, Jun
title Counterfactual samples synthesizing and training for robust visual question answering
publishDate 2023
url https://hdl.handle.net/10356/171830