Evaluating human versus machine learning performance in classifying research abstracts
We study whether humans or machine learning (ML) classification models are better at classifying scientific research abstracts according to a fixed set of discipline groups. We recruit both undergraduate and postgraduate assistants for this task in separate stages, and compare their performance against the support vector machine ML algorithm at classifying European Research Council Starting Grant project abstracts to their actual evaluation panels, which are organised by discipline groups.
Main Authors: | GOH, Yeow Chong; CAI, Xin Qing; THESEIRA, Walter; KO, Giovanni; KHOR, Khiam Aik |
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2020 |
Subjects: | Discipline classification; Text classification; Supervised classification; Artificial Intelligence and Robotics; Economics |
Online Access: | https://ink.library.smu.edu.sg/soe_research/2446 https://ink.library.smu.edu.sg/context/soe_research/article/3445/viewcontent/Goh2020_Article_EvaluatingHumanVersusMachineLe.pdf |
Institution: | Singapore Management University |
Language: | English |
id |
sg-smu-ink.soe_research-3445 |
record_format |
dspace |
spelling |
sg-smu-ink.soe_research-3445 2023-10-18T09:21:49Z Evaluating human versus machine learning performance in classifying research abstracts GOH, Yeow Chong CAI, Xin Qing THESEIRA, Walter KO, Giovanni KHOR, Khiam Aik We study whether humans or machine learning (ML) classification models are better at classifying scientific research abstracts according to a fixed set of discipline groups. We recruit both undergraduate and postgraduate assistants for this task in separate stages, and compare their performance against the support vector machine ML algorithm at classifying European Research Council Starting Grant project abstracts to their actual evaluation panels, which are organised by discipline groups. On average, ML is more accurate than human classifiers, across a variety of training and test datasets, and across evaluation panels. ML classifiers trained on different training sets are also more reliable than human classifiers, meaning that different ML classifiers are more consistent in assigning the same classifications to any given abstract, compared to different human classifiers. While the top five percentile of human classifiers can outperform ML in limited cases, selection and training of such classifiers is likely costly and difficult compared to training ML models. Our results suggest that ML models are a cost-effective and highly accurate method for addressing problems in comparative bibliometric analysis, such as harmonising the discipline classifications of research from different funding agencies or countries.
2020-07-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/soe_research/2446 info:doi/10.1007/s11192-020-03614-2 https://ink.library.smu.edu.sg/context/soe_research/article/3445/viewcontent/Goh2020_Article_EvaluatingHumanVersusMachineLe.pdf http://creativecommons.org/licenses/by/4.0/ Research Collection School Of Economics eng Institutional Knowledge at Singapore Management University Discipline classification Text classification Supervised classification Artificial Intelligence and Robotics Economics |
institution |
Singapore Management University |
building |
SMU Libraries |
continent |
Asia |
country |
Singapore Singapore |
content_provider |
SMU Libraries |
collection |
InK@SMU |
language |
English |
topic |
Discipline classification Text classification Supervised classification Artificial Intelligence and Robotics Economics |
description |
We study whether humans or machine learning (ML) classification models are better at classifying scientific research abstracts according to a fixed set of discipline groups. We recruit both undergraduate and postgraduate assistants for this task in separate stages, and compare their performance against the support vector machine ML algorithm at classifying European Research Council Starting Grant project abstracts to their actual evaluation panels, which are organised by discipline groups. On average, ML is more accurate than human classifiers, across a variety of training and test datasets, and across evaluation panels. ML classifiers trained on different training sets are also more reliable than human classifiers, meaning that different ML classifiers are more consistent in assigning the same classifications to any given abstract, compared to different human classifiers. While the top five percentile of human classifiers can outperform ML in limited cases, selection and training of such classifiers is likely costly and difficult compared to training ML models. Our results suggest that ML models are a cost-effective and highly accurate method for addressing problems in comparative bibliometric analysis, such as harmonising the discipline classifications of research from different funding agencies or countries. |
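The classification task the abstract describes can be sketched with a small supervised text classifier. The toy below substitutes a multinomial naive Bayes over bag-of-words features for the paper's support vector machine (a deliberate simplification, plainly not the authors' implementation), and the panel labels and one-line "abstracts" are invented for illustration only.

```python
# Toy supervised abstract classifier: multinomial naive Bayes over
# bag-of-words features, standing in for the SVM used in the study.
import math
from collections import Counter, defaultdict

def tokenize(text):
    """Lowercase an abstract and split it into punctuation-stripped tokens."""
    return [t for t in (w.strip(".,;:()") for w in text.lower().split()) if t]

def train(labeled_abstracts):
    """Estimate per-panel word counts and priors from (abstract, panel) pairs."""
    word_counts = defaultdict(Counter)   # panel -> word frequencies
    panel_counts = Counter()             # panel -> number of training abstracts
    for text, panel in labeled_abstracts:
        panel_counts[panel] += 1
        word_counts[panel].update(tokenize(text))
    return word_counts, panel_counts

def classify(text, word_counts, panel_counts):
    """Assign an abstract to the panel with the highest posterior log-probability."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(panel_counts.values())
    best_panel, best_score = None, float("-inf")
    for panel in panel_counts:
        # Log-prior plus Laplace-smoothed log-likelihood of each token.
        score = math.log(panel_counts[panel] / total)
        denom = sum(word_counts[panel].values()) + len(vocab)
        for w in tokenize(text):
            score += math.log((word_counts[panel][w] + 1) / denom)
        if score > best_score:
            best_panel, best_score = panel, score
    return best_panel

# Invented training data: two hypothetical discipline panels.
training = [
    ("monetary policy and labour market outcomes", "SH1-Economics"),
    ("auction theory and market design experiments", "SH1-Economics"),
    ("deep neural networks for image recognition", "PE6-Computer Science"),
    ("reinforcement learning agents and robotics", "PE6-Computer Science"),
]
wc, pc = train(training)
print(classify("neural networks for robotics control", wc, pc))
# -> PE6-Computer Science
```

A real pipeline along the paper's lines would train on abstracts labelled with their actual evaluation panels and score held-out abstracts the same way; the naive Bayes scoring here just makes the bag-of-words setup concrete.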
format |
text |
author |
GOH, Yeow Chong CAI, Xin Qing THESEIRA, Walter KO, Giovanni KHOR, Khiam Aik |
author_sort |
GOH, Yeow Chong |
title |
Evaluating human versus machine learning performance in classifying research abstracts |
publisher |
Institutional Knowledge at Singapore Management University |
publishDate |
2020 |
url |
https://ink.library.smu.edu.sg/soe_research/2446 https://ink.library.smu.edu.sg/context/soe_research/article/3445/viewcontent/Goh2020_Article_EvaluatingHumanVersusMachineLe.pdf |