AUTOMATED SHORT ANSWER SCORING USING SEMANTIC SIMILARITY BASED ON WORD EMBEDDINGS
Short-answer tests are one of the ways used to assess students and are an important part of learning. They are considered capable of measuring complex student abilities, such as the ability to arrange ideas and arguments related to the questions asked. However, the short-answer appraisal p...
Main Author: | Mutaqin |
---|---|
Format: | Theses |
Language: | Indonesia |
Online Access: | https://digilib.itb.ac.id/gdl/view/42747 |
Institution: | Institut Teknologi Bandung |
Language: | Indonesia |
id |
id-itb.:42747 |
---|---|
spelling |
id-itb.:42747 2019-09-23T14:37:16Z AUTOMATED SHORT ANSWER SCORING USING SEMANTIC SIMILARITY BASED ON WORD EMBEDDINGS Mutaqin Indonesia Theses automated scoring, short-answer, semantic similarity, word-embedding INSTITUT TEKNOLOGI BANDUNG https://digilib.itb.ac.id/gdl/view/42747 Short-answer tests are one of the ways used to assess students and are an important part of learning. They are considered capable of measuring complex student abilities, such as the ability to arrange ideas and arguments related to the questions asked. However, the short-answer appraisal process has a disadvantage: it takes a long time, and subjectivity between assessors can lead to different assessment results. Therefore, automated short-answer scoring is needed to make the assessment process faster and more objective. Research related to automated short-answer scoring continues to evolve. The main problem raised in this research is how to improve the accuracy of the assessment so that it approaches the results of human scoring. The process of evaluating essay answers is basically a comparison of student answers against a reference correct answer. A student answer is considered correct if it is semantically similar to the reference answer and contains no opposing meanings. This study proposes improving the accuracy of short-answer scoring by combining word-embedding-based semantic similarity measurement with syntactic analysis. The word-embedding model is used to anticipate the diversity of students' answers, because the model maps words with similar meanings close together. Meanwhile, syntactic analysis is used to detect sentences with opposing meanings by utilizing part-of-speech tags and dependency relations within sentences. The results show a good correlation between automatic scoring and manual human assessment, with a correlation coefficient of 0.7085. The accuracy of the automatic scoring, as measured by the mean absolute error, also shows better results than previous studies. text |
institution |
Institut Teknologi Bandung |
building |
Institut Teknologi Bandung Library |
continent |
Asia |
country |
Indonesia Indonesia |
content_provider |
Institut Teknologi Bandung |
collection |
Digital ITB |
language |
Indonesia |
description |
Short-answer tests are one of the ways used to assess students and are an important part of learning. They are considered capable of measuring complex student abilities, such as the ability to arrange ideas and arguments related to the questions asked. However, the short-answer appraisal process has a disadvantage: it takes a long time, and subjectivity between assessors can lead to different assessment results. Therefore, automated short-answer scoring is needed to make the assessment process faster and more objective.
Research related to automated short-answer scoring continues to evolve. The main problem raised in this research is how to improve the accuracy of the assessment so that it approaches the results of human scoring. The process of evaluating essay answers is basically a comparison of student answers against a reference correct answer. A student answer is considered correct if it is semantically similar to the reference answer and contains no opposing meanings. This study proposes improving the accuracy of short-answer scoring by combining word-embedding-based semantic similarity measurement with syntactic analysis. The word-embedding model is used to anticipate the diversity of students' answers, because the model maps words with similar meanings close together. Meanwhile, syntactic analysis is used to detect sentences with opposing meanings by utilizing part-of-speech tags and dependency relations within sentences.
The results show a good correlation between automatic scoring and manual human assessment, with a correlation coefficient of 0.7085. The accuracy of the automatic scoring, as measured by the mean absolute error, also shows better results than previous studies. |
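As a loose illustration of the embedding-based similarity step described in the abstract, the sketch below averages word vectors into a sentence vector and compares a student answer with a reference answer by cosine similarity. The tiny 3-dimensional "embeddings", the example sentences, and all function names are invented for demonstration; the thesis would use a trained word-embedding model, not these values.

```python
# Illustrative sketch only: scoring answer similarity with averaged
# word embeddings and cosine similarity. The toy vectors below are
# made up; a real system would load a trained embedding model.
import math

TOY_EMBEDDINGS = {
    "plants":  [0.70, 0.30, 0.10],
    "make":    [0.20, 0.80, 0.10],
    "produce": [0.25, 0.75, 0.15],
    "food":    [0.30, 0.20, 0.90],
    "energy":  [0.35, 0.25, 0.85],
}

def sentence_vector(tokens):
    """Average the embeddings of known tokens into one sentence vector."""
    vecs = [TOY_EMBEDDINGS[t] for t in tokens if t in TOY_EMBEDDINGS]
    if not vecs:
        return [0.0, 0.0, 0.0]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(u, v):
    """Cosine similarity of two vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

reference = "plants make food".split()
student = "plants produce energy".split()
score = cosine(sentence_vector(reference), sentence_vector(student))
print(score)  # high similarity despite different word choices
```

Because "produce" and "energy" sit near "make" and "food" in the toy embedding space, the paraphrased student answer still scores close to the reference, which is the behavior the abstract attributes to the word-embedding model.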
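The abstract evaluates the system with two standard measures: the correlation coefficient between automatic and human scores, and the mean absolute error. A minimal sketch of both metrics follows; the score lists are invented for illustration and are not data from the thesis.

```python
# Sketch of the two evaluation measures named in the abstract:
# Pearson correlation and mean absolute error (MAE) between
# automatic scores and manual human scores. Data is hypothetical.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def mean_absolute_error(xs, ys):
    """Average absolute difference between paired scores."""
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

human = [4.0, 2.5, 5.0, 1.0, 3.5]      # hypothetical manual scores
automatic = [3.8, 2.0, 4.6, 1.4, 3.9]  # hypothetical system scores

r = pearson(human, automatic)
mae = mean_absolute_error(human, automatic)
```

A correlation coefficient near 1 means the automatic scores rank answers much like a human grader does, while a low MAE means the individual scores are numerically close; the thesis reports r = 0.7085 on its own data.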
format |
Theses |
author |
Mutaqin |
spellingShingle |
Mutaqin AUTOMATED SHORT ANSWER SCORING USING SEMANTIC SIMILARITY BASED ON WORD EMBEDDINGS |
author_facet |
Mutaqin |
author_sort |
Mutaqin |
title |
AUTOMATED SHORT ANSWER SCORING USING SEMANTIC SIMILARITY BASED ON WORD EMBEDDINGS |
title_short |
AUTOMATED SHORT ANSWER SCORING USING SEMANTIC SIMILARITY BASED ON WORD EMBEDDINGS |
title_full |
AUTOMATED SHORT ANSWER SCORING USING SEMANTIC SIMILARITY BASED ON WORD EMBEDDINGS |
title_fullStr |
AUTOMATED SHORT ANSWER SCORING USING SEMANTIC SIMILARITY BASED ON WORD EMBEDDINGS |
title_full_unstemmed |
AUTOMATED SHORT ANSWER SCORING USING SEMANTIC SIMILARITY BASED ON WORD EMBEDDINGS |
title_sort |
automated short answer scoring using semantic similarity based on word embeddings |
url |
https://digilib.itb.ac.id/gdl/view/42747 |
_version_ |
1821998687625150464 |