Automatic grading of short answers using Large Language Models in software engineering courses

Bibliographic Details
Main Authors: TA, Nguyen Binh Duong, CHAI, Yi Meng
Format: text
Language:English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/9267
https://ink.library.smu.edu.sg/context/sis_research/article/10267/viewcontent/Automatic_Grading_Educon_2024_final__1_.pdf
Institution: Singapore Management University
Description
Summary: Short-answer questions are widely used because they are effective at assessing whether students have attained the desired learning outcomes. However, due to their open-ended nature, many different answers could be considered entirely or partially correct for the same question. In the context of computer science and software engineering courses, where enrolment has been increasing recently, manual grading of short-answer questions is a time-consuming and tedious process for instructors. In software engineering courses, assessments concern not just coding but many other aspects of software development, such as system analysis, architecture design, software processes, and operation methodologies such as Agile and DevOps. However, existing work on the automatic grading/scoring of text-based answers in computing courses has focused mainly on coding-oriented questions. In this work, we consider the problem of autograding a broader range of short answers in software engineering courses. We propose an automated grading system incorporating both text embedding and completion approaches based on recently introduced pre-trained large language models (LLMs) such as GPT-3.5/4. We design and implement a web-based system so that students and instructors can easily leverage autograding for learning and teaching. Finally, we conduct an extensive evaluation of our automated grading approaches, using a popular public dataset in the computing education domain and a new software engineering dataset of our own. The results demonstrate the effectiveness of our approach and provide useful insights for further research in this area of AI-enabled education.
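The embedding-based grading approach mentioned in the abstract can be sketched roughly as follows: embed both the student's answer and a reference answer into vectors (e.g., via an LLM embedding endpoint), then score the answer by the cosine similarity between the two vectors. The sketch below is an illustration only, not the authors' implementation; it assumes the embeddings have already been obtained, and the score thresholds are hypothetical placeholders.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def grade_answer(student_vec, reference_vec,
                 full_threshold=0.85, partial_threshold=0.65):
    """Map similarity to a coarse grade.

    The thresholds are illustrative assumptions; in practice they would
    be tuned against instructor-graded data.
    """
    sim = cosine_similarity(student_vec, reference_vec)
    if sim >= full_threshold:
        return "correct", sim
    if sim >= partial_threshold:
        return "partially correct", sim
    return "incorrect", sim

# Toy vectors standing in for real answer embeddings:
label, score = grade_answer([1.0, 0.0], [1.0, 0.0])
```

A completion-based approach, by contrast, would prompt the LLM directly with the question, rubric, and student answer and ask it to produce a grade; the paper evaluates both styles.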