NumGPT: Improving numeracy ability of generative pre-trained models
Existing generative pre-trained language models (e.g., GPT) focus on modeling the language structure and semantics of general texts. However, those models do not consider the numerical properties of numbers and cannot perform robustly on numerical reasoning tasks (e.g., math word problems and measurement estimation).
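The abstract above is cut off before it describes the paper's approach, so the following is only a hypothetical sketch of the general idea it motivates: making a number's numerical structure explicit by factoring the value into a mantissa and a base-10 exponent, rather than leaving its digit string to ordinary subword tokenization. The `decompose` helper and the sample values below are assumptions for illustration, not code from the paper.

```python
import math

def decompose(value: float) -> tuple[float, int]:
    """Hypothetical helper: split a number into (mantissa with |m| in [1, 10),
    base-10 exponent).

    Representing numbers this way makes magnitude explicit, which subword
    tokenization of raw digit strings does not.
    """
    if value == 0.0:
        return 0.0, 0
    exponent = math.floor(math.log10(abs(value)))
    mantissa = value / (10.0 ** exponent)
    return mantissa, exponent

# Example values chosen for illustration only.
for x in [3.0, 314.159, 0.0042]:
    m, e = decompose(x)
    print(f"{x} -> mantissa={m:.4f}, exponent={e}")
```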
Main Authors: JIN, Zhihua; JIANG, Xin; WANG, Xiangbo; LIU, Qun; WANG, Yong; REN, Xiaozhe; QU, Huamin
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Online Access: https://ink.library.smu.edu.sg/sis_research/8599
https://ink.library.smu.edu.sg/context/sis_research/article/9602/viewcontent/2109.03137.pdf
Institution: Singapore Management University
Similar Items
- ChatGPT's impact
  by: Lim, Donald Patrick L.
  Published: (2023)
- Re-evaluating natural intelligence in the face of ChatGPT
  by: LIM, Elvin T., et al.
  Published: (2023)
- RecipeGPT: Generative pre-training based cooking recipe generation and evaluation system
  by: LEE, Helena Huey Chong, et al.
  Published: (2020)
- LLM4Vis: Explainable visualization recommendation using ChatGPT
  by: WANG, Lei, et al.
  Published: (2023)
- ROME: Evaluating pre-trained vision-language models on reasoning beyond visual common sense
  by: ZHOU, Kankan, et al.
  Published: (2023)