NumGPT: Improving numeracy ability of generative pre-trained models

Existing generative pre-trained language models (e.g., GPT) focus on modeling the language structure and semantics of general texts. However, those models do not consider the numerical properties of numbers and cannot perform robustly on numerical reasoning tasks (e.g., math word problems and measurement estimation). In this paper, we propose NumGPT, a generative pre-trained model that explicitly models the numerical properties of numbers in texts. Specifically, it leverages a prototype-based numeral embedding to encode the mantissa of a number and an individual embedding to encode its exponent. A numeral-aware loss function is designed to integrate numerals into the pre-training objective of NumGPT. We conduct extensive experiments on four different datasets to evaluate the numeracy ability of NumGPT. The experimental results show that NumGPT outperforms baseline models (e.g., GPT and GPT with DICE) on a range of numerical reasoning tasks, such as measurement estimation, number comparison, math word problems, and magnitude classification. Ablation studies are also conducted to evaluate the impact of pre-training and model hyperparameters on performance.
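The mantissa/exponent decomposition described in the abstract can be made concrete with a short sketch. The snippet below is illustrative only: it assumes RBF (Gaussian-kernel) similarities to a small set of learned mantissa prototypes and a lookup table for the exponent; the class name, prototype count, and kernel width are our assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class NumeralEmbedding(nn.Module):
    """Illustrative sketch of a prototype-based numeral embedding:
    split a number into mantissa and exponent (scientific notation),
    encode the mantissa via RBF similarities to learned prototypes,
    and encode the exponent via an embedding lookup."""

    def __init__(self, d_model, n_prototypes=10, n_exponents=21, sigma=0.5):
        super().__init__()
        # Learned prototype mantissa values, initialized across [1, 10)
        self.prototypes = nn.Parameter(torch.linspace(1.0, 9.5, n_prototypes))
        self.mantissa_proj = nn.Linear(n_prototypes, d_model // 2)
        # Exponent lookup table; index 0 corresponds to exponent -(n_exponents // 2)
        self.exp_embed = nn.Embedding(n_exponents, d_model // 2)
        self.sigma = sigma
        self.exp_offset = n_exponents // 2

    def forward(self, x):
        # x: (batch,) float tensor of raw numbers appearing in the text
        exp = torch.floor(torch.log10(x.abs().clamp(min=1e-10)))
        mant = x / 10.0 ** exp  # mantissa in [1, 10) for positive x
        # Gaussian-kernel similarity of each mantissa to each prototype
        sim = torch.exp(-((mant.unsqueeze(-1) - self.prototypes) ** 2)
                        / (2 * self.sigma ** 2))
        idx = (exp + self.exp_offset).clamp(0, self.exp_embed.num_embeddings - 1).long()
        # Concatenate mantissa and exponent halves into one d_model vector
        return torch.cat([self.mantissa_proj(sim), self.exp_embed(idx)], dim=-1)

# Usage: embed a few numbers into 768-dimensional vectors
emb = NumeralEmbedding(d_model=768)
vecs = emb(torch.tensor([3.14, 250.0, 0.002]))  # shape: (3, 768)
```

In the full model, such numeral embeddings would replace the ordinary token embedding wherever a number occurs, and the numeral-aware loss would score mantissa and exponent predictions separately; consult the linked PDF for the authors' exact formulation.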


Bibliographic Details
Main Authors: JIN, Zhihua, JIANG, Xin, WANG, Xiangbo, LIU, Qun, WANG, Yong, REN, Xiaozhe, QU, Huamin
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2023
Subjects: Artificial Intelligence and Robotics
Online Access:https://ink.library.smu.edu.sg/sis_research/8599
https://ink.library.smu.edu.sg/context/sis_research/article/9602/viewcontent/2109.03137.pdf
Institution: Singapore Management University
DOI: 10.48550/arXiv.2109.03137
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems, InK@SMU (SMU Libraries)