Marker-less gesture and facial expression base affect modeling

Description: Many Affective Intelligent Tutoring Systems (ITSs) today use a multi-modal approach. These approaches have shown promising results, but they are difficult to deploy and limited in scope because they require expensive, specialized equipment and detect only a small set of gestures. For ITSs to be deployable, easy to replicate, and more accurate, the sensing setup has to be inexpensive and able to detect more features. This research addresses the problem by recognizing affect through an ordinary computer camera and the Microsoft Kinect. Eight students were recorded for a minimum of 45 minutes each and were then asked to annotate their own data. Both the raw data produced by the Kinect and extracted-feature data sets were used to build user-specific models. SVM Poly Kernel, SVM PUK, SVM RBF, LogitBoost, and Multilayer Perceptron were used to build models from the raw Kinect data, since that data is numerical; C4.5 was used for the extracted features, since that data is binary. Results for the face models ranged from 34.65% to 99.54%, for body gestures (raw data) from 75.38% to 100%, and for body gestures using extracted features from 44.11% to 75.37%. The F-measure for the fusion of gesture (raw data) and face ranged from 0.147 to 0.34, and for the fusion of gesture (extracted features) and face from 0.017 to 0.342.
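The abstract reports fusion performance as F-measure ranges (0.147 to 0.34 and 0.017 to 0.342). As a reminder of the metric only (this sketch is not part of the thesis, and the counts below are invented for illustration), F-measure is the harmonic mean of precision and recall:

```python
def f_measure(tp: int, fp: int, fn: int) -> float:
    """F1 score: harmonic mean of precision and recall,
    computed from true positive, false positive, and
    false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion counts, not taken from the thesis:
# 30 true positives, 10 false positives, 10 false negatives.
print(round(f_measure(30, 10, 10), 3))  # 0.75
```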

Bibliographic Details
Main Authors: Cantos, Sherlo Yvan, Miranda, Jeriah Kjell, Tiu, Melisa Renee, Yeung, Mary Czarinelle
Format: text (Bachelor's thesis)
Language: English
Published: Animo Repository 2012
Subjects: Intelligent tutoring systems; Education--Effect of technological innovations on; Modality (Linguistics); Learning strategies; Computer Sciences
Online Access:https://animorepository.dlsu.edu.ph/etd_bachelors/11130
Institution: De La Salle University
Building: De La Salle University Library
Continent: Asia
Country: Philippines
Content Provider: De La Salle University Library
Collection: DLSU Institutional Repository