NOAHQA: Numerical reasoning with interpretable graph question answering dataset

While diverse question answering (QA) datasets have been proposed and have contributed significantly to the development of deep learning models for QA tasks, the existing datasets fall short in two aspects. First, we lack QA datasets covering complex questions that involve answers as well as the reasoning processes used to reach those answers. As a result, state-of-the-art QA research on numerical reasoning still focuses on simple calculations and does not provide the mathematical expressions or evidence justifying the answers. Second, the QA community has devoted much effort to improving the interpretability of QA models. However, these models fail to explicitly show the reasoning process, such as the order in which evidence is used and the interactions between different pieces of evidence. To address the above shortcomings, we introduce NOAHQA, a conversational and bilingual QA dataset with questions requiring numerical reasoning over compound mathematical expressions. With NOAHQA, we develop an interpretable reasoning graph as well as an appropriate evaluation metric to measure answer quality. We evaluate state-of-the-art QA models trained on existing QA datasets against NOAHQA and show that the best among them achieves only a 55.5 exact match score, while human performance is 89.7. We also present a new QA model for generating reasoning graphs, whose score on the reasoning graph metric still trails human performance by a large gap, e.g., 28 points. See https://github.com/Don-Joey/NoahQA
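The abstract references exact match scores and a reasoning graph metric without showing how such scoring works. The following minimal Python sketch illustrates what a NOAHQA-style item and its scoring could look like; the field names, the toy example, and the edge-overlap F1 are illustrative assumptions only, not the paper's actual schema or metric (see the linked repository for the real definitions).

# Illustrative sketch only: field names and metrics below are assumptions,
# not the actual NOAHQA schema (see https://github.com/Don-Joey/NoahQA).

# A hypothetical NOAHQA-style item: a numerical question answered via a
# compound expression over evidence spans, plus a reasoning graph whose
# edges record how evidence flows into the expression and the answer.
example = {
    "question": "How much do 3 pens and 2 notebooks cost in total?",
    "evidence": {"e1": "A pen costs 2 dollars.", "e2": "A notebook costs 5 dollars."},
    "expression": "3 * 2 + 2 * 5",
    "answer": "16",
    "graph": [("e1", "expr"), ("e2", "expr"), ("expr", "answer")],
}

def exact_match(prediction: str, gold: str) -> int:
    # 1 if the normalized prediction equals the gold answer, else 0.
    return int(prediction.strip().lower() == gold.strip().lower())

def graph_edge_f1(pred_edges, gold_edges) -> float:
    # Edge-overlap F1 between predicted and gold reasoning graphs -- a
    # simple stand-in for the paper's reasoning graph metric.
    pred, gold = set(pred_edges), set(gold_edges)
    if not pred or not gold:
        return 0.0
    precision = len(pred & gold) / len(pred)
    recall = len(pred & gold) / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A model that predicts the right answer but recovers only part of the graph:
print(exact_match("16", example["answer"]))                   # -> 1
print(graph_edge_f1(example["graph"][:2], example["graph"]))  # -> 0.8

Scoring answers and graphs separately, as sketched here, mirrors the two axes the abstract reports: exact match on answers and a dedicated metric on reasoning graphs.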

Bibliographic Details
Main Authors: ZHANG, Qiyuan, WANG, Lei, YU, Sicheng, WANG, Shuohang, WANG, Yang, JIANG, Jing, LIM, Ee-peng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
Subjects: Databases and Information Systems; Numerical Analysis and Scientific Computing
DOI: 10.48550/arXiv.2109.10604
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems
Online Access: https://ink.library.smu.edu.sg/sis_research/7153
https://ink.library.smu.edu.sg/context/sis_research/article/8156/viewcontent/2109.10604.pdf
Institution: Singapore Management University