Sequence to sequence model performance for education chatbot

Bibliographic Details
Main Authors: Palasundram, Kulothunkan; Nasharuddin, Nurul Amelina; Azman, Azreen; Mohd Sharef, Nurfadhlina; Kasmiran, Khairul Azhar
Format: Article
Language: English
Published: Kassel University Press, 2019
Online Access: http://psasir.upm.edu.my/id/eprint/82096/1/Sequence%20to%20sequence%20model%20performance%20for%20education%20chatbot.pdf
http://psasir.upm.edu.my/id/eprint/82096/
Institution: Universiti Putra Malaysia
Description
Summary: Chatbots for education have great potential to complement human educators and education administrators. For example, a chatbot can act as an around-the-clock tutor that answers and clarifies questions from students who may have missed class. A chatbot can be implemented as either rule-based or artificial intelligence based. Unlike rule-based chatbots, however, artificial intelligence based chatbots can learn and become smarter over time, are more scalable, and have recently become the popular choice among chatbot researchers. The Recurrent Neural Network based sequence-to-sequence (Seq2Seq) model is one of the most commonly researched models for implementing artificial intelligence chatbots and has shown great progress since its introduction in 2014. However, it is still in its infancy and has not been applied widely in educational chatbot development. Introduced originally for neural machine translation, the Seq2Seq model has been adapted for conversation modelling, including question-answering chatbots. Nevertheless, in-depth research and analysis of the optimal settings of the various components of the Seq2Seq model for the natural answer generation problem is very limited. Additionally, no experiments or analysis have been conducted to understand how the Seq2Seq model handles variations in the questions posed to it when generating correct answers. Our experiments add to the empirical evaluations in the Seq2Seq literature and provide insights into these questions. We also provide insights into how a curated dataset can be developed and questions designed to train and test the performance of a Seq2Seq based question-answering model.
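
The summary describes an RNN-based encoder-decoder (Seq2Seq) model that generates an answer from an input question. As a rough illustrative sketch only, not the authors' implementation, the following PyTorch code shows the basic shape of such a model: an encoder GRU reads the question into a hidden state, a decoder GRU emits the answer one token at a time, and greedy decoding stops at an end-of-sequence token. The vocabulary size, hidden size, and the SOS/EOS token ids are assumptions made for the example.

import torch
import torch.nn as nn

SOS, EOS, VOCAB, HIDDEN = 1, 2, 5000, 256  # assumed token ids and sizes

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)

    def forward(self, src):                 # src: (batch, src_len) question token ids
        _, hidden = self.rnn(self.embed(src))
        return hidden                        # final hidden state summarises the question

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def forward(self, tok, hidden):          # tok: (batch, 1) previously generated token
        output, hidden = self.rnn(self.embed(tok), hidden)
        return self.out(output.squeeze(1)), hidden

def answer(encoder, decoder, question_ids, max_len=20):
    """Greedy decoding: encode the question, then emit answer tokens until EOS."""
    hidden = encoder(question_ids)
    tok = torch.full((question_ids.size(0), 1), SOS, dtype=torch.long)
    answer_ids = []
    for _ in range(max_len):
        logits, hidden = decoder(tok, hidden)
        tok = logits.argmax(dim=-1, keepdim=True)
        if (tok == EOS).all():
            break
        answer_ids.append(tok)
    return torch.cat(answer_ids, dim=1) if answer_ids else tok

# Example: an untrained model still produces an answer sequence for a 6-token question.
enc, dec = Encoder(), Decoder()
question = torch.randint(3, VOCAB, (1, 6))
print(answer(enc, dec, question).shape)

In practice the paper's question-answering setting would train such a model on curated question-answer pairs (e.g. with teacher forcing and cross-entropy loss), and variations of the same question can be used at test time to probe how robustly the model generates the correct answer.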