Context modeling with evidence filter for multiple choice question answering

Bibliographic Details
Main Authors: YU, Sicheng, ZHANG, Hao, JING, Wei, JIANG, Jing
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2022
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/7615
https://ink.library.smu.edu.sg/context/sis_research/article/8618/viewcontent/2010.02649.pdf
Institution: Singapore Management University
Description
Summary: Multiple-Choice Question Answering (MCQA) is one of the challenging tasks in machine reading comprehension. The main challenge in MCQA is to extract "evidence" from the given context that supports the correct answer. In the OpenbookQA dataset [1], the requirement of extracting "evidence" is particularly important due to the mutual independence of sentences in the context. Existing work tackles this problem with annotated evidence or with distant supervision via rules, both of which rely heavily on human effort. To address this challenge, we propose a simple yet effective approach, termed evidence filtering, that models the relationships among the encoded contexts with respect to the different options collectively, potentially highlighting the evidence sentences and filtering out unrelated ones. Beyond effectively reducing human effort, extensive experiments on OpenbookQA show that the proposed approach outperforms models that use the same backbone with more training data; our parameter analysis also demonstrates the interpretability of our approach.
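As an illustrative aside only (this is not the authors' actual model, whose details are not given in this record), the general idea behind evidence filtering — scoring each context sentence against every answer option and suppressing sentences that support no option — can be sketched as follows. The `filter_evidence` function, threshold value, and toy scores below are all hypothetical:

```python
# Hedged sketch of a generic evidence-filtering step, NOT the paper's method:
# given a relevance score of each context sentence for each answer option,
# keep only sentences whose score for at least one option clears a threshold.

def filter_evidence(scores, threshold=0.5):
    """scores: dict mapping sentence index -> list of per-option scores.
    Returns sorted indices of sentences relevant to at least one option."""
    return sorted(i for i, s in scores.items() if max(s) >= threshold)

# Toy example: 4 context sentences scored against 2 answer options.
scores = {
    0: [0.9, 0.10],  # strongly supports option A -> kept
    1: [0.2, 0.30],  # weak for both options     -> filtered out
    2: [0.1, 0.80],  # strongly supports option B -> kept
    3: [0.4, 0.45],  # below threshold for both   -> filtered out
}
print(filter_evidence(scores))  # -> [0, 2]
```

In the paper's setting the scores would come from learned interactions between the encoded context and the options, rather than being fixed numbers as in this toy.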