Predicting affective states during e-learning: using deep neural networks
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: 2018
Subjects:
Online Access: http://hdl.handle.net/10356/74086
Institution: Nanyang Technological University
Summary: E-learning has increasingly supplanted the conventional mode of learning, the classroom lecture. However, as a one-way dialogue, e-learning gives the teacher no feedback on how well they are doing, and personalized, adaptive learning systems that operate during an e-lecture have hardly been studied. To monitor learners' facial expressions, e-learning experiments were conducted in which participants' webcam videos and self-reports were gathered during four CE7412 lectures. The data were then pre-processed to make them suitable for training: the TV-L1 algorithm was applied to extract optical flow fields and frames from each video. In this project, temporal segment networks (two-stream ConvNets) were trained on each learner's self-reports to predict the learner's affective states and their needs for Feedback and Slide Improvement. The resulting personalized models generalize to unseen data and gain the capacity to predict Affective States, Feedback and Slide Improvement.
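The summary describes extracting TV-L1 optical flow from the webcam recordings before the flow stacks are fed to the temporal stream of the two-stream ConvNets. Below is a minimal sketch of that preprocessing step, assuming the OpenCV contrib build (which provides `cv2.optflow.DualTVL1OpticalFlow_create`) is available; the clipping range and 8-bit rescaling are common conventions for two-stream networks, not details taken from the project itself.

```python
# Minimal sketch: dense TV-L1 optical flow extraction from a webcam video,
# as typically done before training the temporal stream of a two-stream ConvNet.
# Assumes opencv-contrib-python is installed; parameter values are illustrative.
import cv2
import numpy as np

def extract_tvl1_flow(video_path):
    """Yield (rgb_frame, flow_uint8) pairs for consecutive frames of a video."""
    cap = cv2.VideoCapture(video_path)
    tvl1 = cv2.optflow.DualTVL1OpticalFlow_create()

    ok, prev = cap.read()
    if not ok:
        cap.release()
        return
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense TV-L1 flow between the previous and current frame (H x W x 2, float32).
        flow = tvl1.calc(prev_gray, gray, None)
        # Clip and rescale to 8-bit; a common convention (assumed range of +/-20 pixels)
        # so flow stacks can be stored and loaded like images.
        flow = np.clip(flow, -20, 20)
        flow = ((flow + 20) / 40 * 255).astype(np.uint8)
        yield frame, flow
        prev_gray = gray

    cap.release()
```

In a two-stream setup, the RGB frames feed the spatial stream and stacks of consecutive flow fields feed the temporal stream; the sketch above only covers the flow extraction, not the network training.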