Aspect-based API review classification: How far can pre-trained transformer model go?
APIs (Application Programming Interfaces) are reusable software libraries that serve as building blocks for modern rapid software development. Previous research shows that programmers frequently share and search for reviews of APIs on mainstream software question-and-answer (Q&A) platforms such as Stack Overflow, which motivates researchers to design tasks and approaches for processing API reviews automatically. Among these tasks, classifying API reviews into different aspects (e.g., performance or security), known as aspect-based API review classification, is of great importance. The current state-of-the-art (SOTA) solution to this task is based on a traditional machine learning algorithm. Inspired by the great success of pre-trained models on many software engineering tasks, this study fine-tunes six pre-trained models for the aspect-based API review classification task and compares them with the current SOTA solution on an API review benchmark collected by Uddin et al. The investigated models include four models (BERT, RoBERTa, ALBERT, and XLNet) pre-trained on natural language corpora, BERTOverflow, which is pre-trained on a text corpus extracted from Stack Overflow posts, and CosSensBERT, which is designed for handling imbalanced data. The results show that all six fine-tuned models outperform the traditional machine learning-based tool; more specifically, the improvement in F1-score ranges from 21.0% to 30.2%. We also find that BERTOverflow, despite being pre-trained on the Stack Overflow corpus, does not perform better than BERT. The results also suggest that CosSensBERT does not outperform BERT in terms of F1, but it is still worth considering as it achieves better performance on MCC and AUC.
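The abstract describes fine-tuning pre-trained transformers for multi-label (aspect-based) classification. Below is a minimal sketch of that setup using the Hugging Face `transformers` library; the aspect names, example review, sigmoid threshold, and hyperparameters are illustrative assumptions, not the paper's actual configuration or the full label set of Uddin et al.'s benchmark.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical subset of API-review aspects; the benchmark of Uddin et al. defines more.
ASPECTS = ["performance", "security", "usability", "documentation"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(ASPECTS),
    problem_type="multi_label_classification",  # sigmoid outputs + BCE loss per aspect
)

review = "This API is fast, but its authentication handling is confusing."
inputs = tokenizer(review, truncation=True, return_tensors="pt")

# One fine-tuning step on a single labeled example (performance=1, security=1).
labels = torch.tensor([[1.0, 1.0, 0.0, 0.0]])
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()

# At inference time, each aspect is predicted independently via a sigmoid threshold.
model.eval()
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)
predicted = [aspect for aspect, p in zip(ASPECTS, probs[0]) if p > 0.5]
print(predicted)
```

Swapping `"bert-base-uncased"` for the other checkpoints the paper compares (e.g., `roberta-base`, `albert-base-v2`, `xlnet-base-cased`) changes only the model name in this sketch.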
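The comparison in the abstract is reported in terms of F1, MCC, and AUC. Here is a small sketch, assuming scikit-learn, of how these three metrics can be computed for multi-label predictions; the arrays are made-up toy values (not results from the paper), and averaging MCC per aspect is one common convention, not necessarily the one used in the study.

```python
import numpy as np
from sklearn.metrics import f1_score, matthews_corrcoef, roc_auc_score

# Toy gold labels and predicted probabilities for 4 reviews x 3 aspects.
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])
y_prob = np.array([[0.9, 0.2, 0.7],
                   [0.1, 0.8, 0.3],
                   [0.6, 0.7, 0.2],
                   [0.4, 0.1, 0.9]])
y_pred = (y_prob > 0.5).astype(int)  # thresholded predictions

print("F1 (macro):", f1_score(y_true, y_pred, average="macro"))
# scikit-learn's MCC handles binary/multiclass only, so compute it per aspect and average.
mcc_per_aspect = [matthews_corrcoef(y_true[:, i], y_pred[:, i])
                  for i in range(y_true.shape[1])]
print("MCC (mean over aspects):", np.mean(mcc_per_aspect))
print("AUC (macro):", roc_auc_score(y_true, y_prob, average="macro"))
```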
Main Authors: | YANG, Chengran; XU, Bowen; KHAN, Junaed Younus; UDDIN, Gias; HAN, DongGyun; YANG, Zhou; LO, David |
---|---|
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2022 |
Subjects: | Software mining; Natural language processing; Multi-label classification; Pre-trained models; Databases and Information Systems; Software Engineering |
DOI: | 10.1109/SANER53432.2022.00054 |
License: | http://creativecommons.org/licenses/by-nc-nd/4.0/ |
Collection: | Research Collection School Of Computing and Information Systems |
Record ID: | sg-smu-ink.sis_research-8700 |
Online Access: | https://ink.library.smu.edu.sg/sis_research/7697 https://ink.library.smu.edu.sg/context/sis_research/article/8700/viewcontent/aspect.pdf |
Institution: | Singapore Management University |