Marker-less gesture and facial expression based affect modeling
Main Authors:
Format: text
Language: English
Published: Animo Repository, 2012
Subjects:
Online Access: https://animorepository.dlsu.edu.ph/etd_bachelors/11130
Institution: De La Salle University
Summary: Many Affective Intelligent Tutoring Systems (ITSs) today use a multi-modal approach. These approaches have had promising results, but they are difficult to deploy and limited because they require special equipment that is expensive and they detect only a limited set of gestures. For ITSs to be deployable, easy to replicate, and more accurate, the approach has to be inexpensive and able to detect more features. This research aims to address this problem by understanding affect through the computer camera and the Microsoft Kinect.

Eight students were recorded for a minimum of 45 minutes each. The students were then asked to annotate the data. Both the raw data produced by the Kinect and the extracted-feature data sets were used to build user-specific models. SVM Poly Kernel, SVM PUK, SVM RBF, LogitBoost, and Multilayer Perceptron were used to build models for the raw Kinect data since it is numerical. C4.5 was used to build the model for the extracted features because that data is binary. Results for the face range from 34.65% to 99.54%. Results for body gestures (raw data) range from 75.38% to 100%. Results for body gestures using extracted features range from 44.11% to 75.37%. The F-measure for the fusion of gesture (raw data) and face ranges from 0.147 to 0.34. The F-measure for the fusion of gesture (extracted features) and face ranges from 0.017 to 0.342.