Nonfactoid question answering as query-focused summarization with graph-enhanced multihop inference

Bibliographic Details
Main Authors: DENG, Yang, ZHANG, Wenxuan, XU, Weiwen, SHEN, Ying, LAM, Wai
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects:
Online Access: https://ink.library.smu.edu.sg/sis_research/9089
https://ink.library.smu.edu.sg/context/sis_research/article/10092/viewcontent/bf180a72_6159_43d4_bf29_b6cae95a308a.pdf
Institution: Singapore Management University
Description
Summary: Nonfactoid question answering (QA) is one of the most widely studied yet challenging applications and research areas in natural language processing (NLP). Existing methods fall short in handling the long-distance and complex semantic relations between the question and the document sentences. In this work, we propose a novel query-focused summarization method, the graph-enhanced multihop query-focused summarizer (GMQS), to tackle the nonfactoid QA problem. Specifically, we leverage graph-enhanced reasoning techniques to elaborate the multihop inference process in nonfactoid QA. Three types of graphs with different semantic relations, namely semantic relevance, topical coherence, and coreference linking, are constructed to explicitly capture question-document and sentence-sentence interrelationships. A relational graph attention network (RGAT) is then developed to aggregate the multirelational information accordingly. In addition, the proposed method can be adapted to both extractive and abstractive applications, and the two can be mutually enhanced by joint learning. Experimental results show that the proposed method consistently outperforms existing extractive and abstractive methods on two nonfactoid QA datasets, WikiHow and PubMedQA, and is capable of explainable multihop reasoning.
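
The abstract describes aggregating information over several relation-typed graphs with a relational graph attention network. The sketch below illustrates what such an RGAT-style aggregation step could look like in PyTorch; it is an assumption-based illustration (the class name RelationalGraphAttention, the attention scoring form, and all dimensions are hypothetical), not the authors' released implementation.

# Minimal sketch of an RGAT-style aggregation step over multiple relation-typed graphs
# (e.g., semantic relevance, topical coherence, coreference linking).
# Illustrative only; names, dimensions, and the scoring function are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelationalGraphAttention(nn.Module):
    """Aggregate sentence/question node features over several relation-typed graphs."""

    def __init__(self, hidden_dim: int, num_relations: int = 3):
        super().__init__()
        # One linear projection and one attention vector per relation type.
        self.proj = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim, bias=False) for _ in range(num_relations)]
        )
        self.attn = nn.ParameterList(
            [nn.Parameter(torch.randn(2 * hidden_dim) * 0.1) for _ in range(num_relations)]
        )
        self.out = nn.Linear(num_relations * hidden_dim, hidden_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h:   (num_nodes, hidden_dim) node representations
        # adj: (num_relations, num_nodes, num_nodes) 0/1 adjacency per relation
        outputs = []
        for r, (proj, a) in enumerate(zip(self.proj, self.attn)):
            z = proj(h)                                   # (N, D)
            n = z.size(0)
            # Pairwise scores e_ij = LeakyReLU(a^T [z_i ; z_j]).
            pairs = torch.cat(
                [z.unsqueeze(1).expand(n, n, -1), z.unsqueeze(0).expand(n, n, -1)],
                dim=-1,
            )                                             # (N, N, 2D)
            e = F.leaky_relu(pairs @ a)                   # (N, N)
            # Restrict attention to node pairs connected under this relation.
            e = e.masked_fill(adj[r] == 0, float("-inf"))
            alpha = torch.softmax(e, dim=-1)
            alpha = torch.nan_to_num(alpha)               # rows with no neighbors
            outputs.append(alpha @ z)                     # (N, D)
        # Concatenate per-relation messages and project back to hidden_dim.
        return self.out(torch.cat(outputs, dim=-1))


if __name__ == "__main__":
    num_nodes, hidden_dim = 6, 16
    h = torch.randn(num_nodes, hidden_dim)
    adj = (torch.rand(3, num_nodes, num_nodes) > 0.5).float()
    layer = RelationalGraphAttention(hidden_dim, num_relations=3)
    print(layer(h, adj).shape)  # torch.Size([6, 16])

In this sketch each relation type has its own projection and attention parameters, and the masked softmax limits attention to neighbors under that relation before the per-relation messages are concatenated and projected, which is one common way to realize multirelational aggregation of the kind the abstract describes.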