Automatic grading of short answers using Large Language Models in software engineering courses
Short-answer based questions have been used widely due to their effectiveness in assessing whether the desired learning outcomes have been attained by students. However, due to their open-ended nature, many different answers could be considered entirely or partially correct for the same question. In...
Main Authors: | TA, Nguyen Binh Duong; CHAI, Yi Meng |
---|---|
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2024 |
Subjects: | automatic grading; embedding; large language models; short answers; software engineering courses; Educational Assessment, Evaluation, and Research; Higher Education; Software Engineering |
Online Access: | https://ink.library.smu.edu.sg/sis_research/9267 https://ink.library.smu.edu.sg/context/sis_research/article/10267/viewcontent/Automatic_Grading_Educon_2024_final__1_.pdf |
Institution: | Singapore Management University |
Language: | English |
id | sg-smu-ink.sis_research-10267 |
---|---|
record_format | dspace |
spelling | sg-smu-ink.sis_research-10267 2024-09-05T06:07:59Z Automatic grading of short answers using Large Language Models in software engineering courses TA, Nguyen Binh Duong; CHAI, Yi Meng. Short-answer based questions have been used widely due to their effectiveness in assessing whether the desired learning outcomes have been attained by students. However, due to their open-ended nature, many different answers could be considered entirely or partially correct for the same question. In the context of computer science and software engineering courses, where enrolment has been increasing recently, manual grading of short-answer questions is a time-consuming and tedious process for instructors. In software engineering courses, assessments concern not just coding but many other aspects of software development, such as system analysis, architecture design, software processes, and operation methodologies such as Agile and DevOps. However, existing work on automatic grading/scoring of text-based answers in computing courses has focused more on coding-oriented questions. In this work, we consider the problem of autograding a broader range of short answers in software engineering courses. We propose an automated grading system incorporating both text embedding and completion approaches based on recently introduced pre-trained large language models (LLMs) such as GPT-3.5/4. We design and implement a web-based system so that students and instructors can easily leverage autograding for learning and teaching. Finally, we conduct an extensive evaluation of our automated grading approaches. We use a popular public dataset in the computing education domain and a new software engineering dataset of our own. The results demonstrate the effectiveness of our approach and provide useful insights for further research in this area of AI-enabled education. 2024-05-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/9267 info:doi/10.1109/EDUCON60312.2024.10578839 https://ink.library.smu.edu.sg/context/sis_research/article/10267/viewcontent/Automatic_Grading_Educon_2024_final__1_.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University automatic grading; embedding; large language models; short answers; software engineering courses; Educational Assessment, Evaluation, and Research; Higher Education; Software Engineering |
institution | Singapore Management University |
building | SMU Libraries |
continent | Asia |
country | Singapore |
content_provider | SMU Libraries |
collection | InK@SMU |
language | English |
topic | automatic grading; embedding; large language models; short answers; software engineering courses; Educational Assessment, Evaluation, and Research; Higher Education; Software Engineering |
description | Short-answer based questions have been used widely due to their effectiveness in assessing whether the desired learning outcomes have been attained by students. However, due to their open-ended nature, many different answers could be considered entirely or partially correct for the same question. In the context of computer science and software engineering courses, where enrolment has been increasing recently, manual grading of short-answer questions is a time-consuming and tedious process for instructors. In software engineering courses, assessments concern not just coding but many other aspects of software development, such as system analysis, architecture design, software processes, and operation methodologies such as Agile and DevOps. However, existing work on automatic grading/scoring of text-based answers in computing courses has focused more on coding-oriented questions. In this work, we consider the problem of autograding a broader range of short answers in software engineering courses. We propose an automated grading system incorporating both text embedding and completion approaches based on recently introduced pre-trained large language models (LLMs) such as GPT-3.5/4. We design and implement a web-based system so that students and instructors can easily leverage autograding for learning and teaching. Finally, we conduct an extensive evaluation of our automated grading approaches. We use a popular public dataset in the computing education domain and a new software engineering dataset of our own. The results demonstrate the effectiveness of our approach and provide useful insights for further research in this area of AI-enabled education. |
format | text |
author | TA, Nguyen Binh Duong; CHAI, Yi Meng |
author_sort | TA, Nguyen Binh Duong |
title | Automatic grading of short answers using Large Language Models in software engineering courses |
publisher | Institutional Knowledge at Singapore Management University |
publishDate | 2024 |
url | https://ink.library.smu.edu.sg/sis_research/9267 https://ink.library.smu.edu.sg/context/sis_research/article/10267/viewcontent/Automatic_Grading_Educon_2024_final__1_.pdf |
_version_ | 1814047849522921472 |
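
This record carries only the paper's abstract, which describes two grading approaches (text embedding and text completion) built on pre-trained LLMs such as GPT-3.5/4. As a rough illustration of the embedding side, and not the authors' implementation, the minimal sketch below scores a student answer by cosine similarity against a reference answer; the `embed` helper here is a deliberately toy stand-in for a real LLM embedding, and the `cosine` and `grade` helpers and the sample answers are assumptions made only for this example.

```python
# Minimal sketch (not the paper's system): embedding-similarity grading of a short answer.
# `embed` is a placeholder for a real pre-trained LLM embedding; here it is a toy
# bag-of-words vector so the sketch runs with no external dependencies.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': lowercase bag-of-words counts (stand-in for an LLM embedding)."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def grade(student_answer: str, reference_answer: str, max_marks: float = 5.0) -> float:
    """Map similarity between the student answer and a reference answer onto the marking scale."""
    return round(cosine(embed(student_answer), embed(reference_answer)) * max_marks, 2)

if __name__ == "__main__":
    ref = "Continuous integration merges code changes frequently and runs automated tests on every merge."
    ans = "CI means developers integrate changes often and automated tests run each time."
    print(grade(ans, ref))  # prints a similarity-scaled mark out of 5
```

The completion-style variant mentioned in the abstract would instead prompt a chat-oriented LLM (e.g., GPT-3.5/4) with the question, a reference answer or rubric, and the student answer, and parse a mark from the model's response; the actual prompts, scoring scales, and evaluation datasets are described in the linked paper.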