Evidence aggregation for answer re-ranking in open-domain question answering

A popular recent approach to answering open-domain questions is to first search for question-related passages and then apply reading comprehension models to extract answers. Existing methods usually extract answers from single passages independently, but some questions require combining evidence from different sources to be answered correctly. In this paper, we propose two models that make use of multiple passages to generate their answers. Both use an answer re-ranking approach that reorders the answer candidates produced by an existing state-of-the-art QA model. We propose two methods, strength-based re-ranking and coverage-based re-ranking, which use the evidence aggregated across passages to better determine the answer. Our models achieve state-of-the-art results on three public open-domain QA datasets, Quasar-T, SearchQA, and the open-domain version of TriviaQA, with an improvement of about 8 percentage points on the first two.
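
To illustrate the intuition behind strength-based re-ranking, here is a minimal, hypothetical Python sketch (not the authors' implementation, which also uses answer probabilities and a learned coverage-based re-ranker): it assumes simple string matching and re-scores each candidate answer by how many retrieved passages mention it, so answers supported by more passages move up the list. The function name strength_rerank and the toy data are illustrative only.

    from collections import Counter

    def strength_rerank(candidates, passages):
        """Reorder candidate answers by how many retrieved passages mention them.

        This is only a count-based illustration of evidence aggregation across
        passages; it is not the paper's full model.
        """
        support = Counter()
        for cand in candidates:
            for passage in passages:
                if cand.lower() in passage.lower():
                    support[cand] += 1
        # Stable sort: ties keep the base QA model's original ranking.
        return sorted(candidates, key=lambda c: -support[c])

    # Toy usage (hypothetical data):
    cands = ["Paris", "Lyon", "France"]
    docs = [
        "Paris is the capital and largest city of France.",
        "The Eiffel Tower is located in Paris.",
        "Lyon is a city in east-central France.",
    ]
    print(strength_rerank(cands, docs))  # ['Paris', 'France', 'Lyon']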

Bibliographic Details
Main Authors: WANG, Shuohang; YU, Mo; JIANG, Jing; ZHANG, Wei; GUO, Xiaoxiao; CHANG, Shiyu; WANG, Zhiguo; KLINGER, Tim; TESAURO, Gerald; CAMPBELL, Murray
Format: text (application/pdf)
Language: English
Published: Institutional Knowledge at Singapore Management University, 2018
Subjects: Databases and Information Systems
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Online Access:https://ink.library.smu.edu.sg/sis_research/4238
https://ink.library.smu.edu.sg/context/sis_research/article/5241/viewcontent/171105116.pdf
Institution: Singapore Management University
Collection: InK@SMU, Research Collection School Of Computing and Information Systems (SMU Libraries)
Record ID: sg-smu-ink.sis_research-5241