SAM-D2: Spontaneous affect modeling using dimensionally-labeled data

Human affect is continuous rather than discrete. Affect dimensions represent emotions better than categorical labels because they better capture the non-basic and complex nature of everyday human expressions. Moreover, spontaneous data ensures a greater variety of emotion compared...


Bibliographic Details
Main Authors: Latorre, Avelino Alejandro L., Solomon, Katrina Ysabel C., Tensuan, Juan Paolo S.
Format: text
Language: English
Published: Animo Repository 2013
Online Access: https://animorepository.dlsu.edu.ph/etd_bachelors/12167
Institution: De La Salle University
Language: English
id oai:animorepository.dlsu.edu.ph:etd_bachelors-12812
record_format eprints
spelling oai:animorepository.dlsu.edu.ph:etd_bachelors-128122021-09-23T07:33:04Z SAM-D2: Spontaneous affect modeling using dimensionally-labeled data Latorre, Avelino Alejandro L. Solomon, Katrina Ysabel C. Tensuan, Juan Paolo S. Human affect is continuous rather than discrete. Affect dimensions represent emotions better than categorical labels because they better capture the non-basic and complex nature of everyday human expressions. Moreover, spontaneous data ensures a greater variety of emotion compared to acted data, as subjects are constrained in their expressions and limited to expressing discrete emotions under acted schemes. It is therefore better to use spontaneous expressions. The focus of this research is to build multimodal models for spontaneous human affect analysis. This requires a dimensionally-labeled database, which is the basis for creating the affect models. Previous studies on spontaneous and dimensionally-labeled data have been undertaken with induced data. In this study, the use of naturally spontaneous data is explored using the Filipino Multimodal Emotion Database (FiMED2). FiMED2 is annotated with dimensional labels of valence and arousal values. Inter-coder agreement on continuous data is resolved through statistical methods. Multimodal affect models for the face and voice were built using machine learning algorithms, among which the Support Vector Machine for Regression performed the best. The results for the voice modality were notably better in comparison with previous research on continuous data. Decision-level fusion was used to merge the results of the two modalities. Experiments on feature selection and gender differences were also performed. 2013-01-01T08:00:00Z text https://animorepository.dlsu.edu.ph/etd_bachelors/12167 Bachelor's Theses English Animo Repository
institution De La Salle University
building De La Salle University Library
continent Asia
country Philippines
Philippines
content_provider De La Salle University Library
collection DLSU Institutional Repository
language English
description Human affect is continuous rather than discrete. Affect dimensions represent emotions better than categorical labels because they better capture the non-basic and complex nature of everyday human expressions. Moreover, spontaneous data ensures a greater variety of emotion compared to acted data, as subjects are constrained in their expressions and limited to expressing discrete emotions under acted schemes. It is therefore better to use spontaneous expressions. The focus of this research is to build multimodal models for spontaneous human affect analysis. This requires a dimensionally-labeled database, which is the basis for creating the affect models. Previous studies on spontaneous and dimensionally-labeled data have been undertaken with induced data. In this study, the use of naturally spontaneous data is explored using the Filipino Multimodal Emotion Database (FiMED2). FiMED2 is annotated with dimensional labels of valence and arousal values. Inter-coder agreement on continuous data is resolved through statistical methods. Multimodal affect models for the face and voice were built using machine learning algorithms, among which the Support Vector Machine for Regression performed the best. The results for the voice modality were notably better in comparison with previous research on continuous data. Decision-level fusion was used to merge the results of the two modalities. Experiments on feature selection and gender differences were also performed.
format text
author Latorre, Avelino Alejandro L.
Solomon, Katrina Ysabel C.
Tensuan, Juan Paolo S.
spellingShingle Latorre, Avelino Alejandro L.
Solomon, Katrina Ysabel C.
Tensuan, Juan Paolo S.
SAM-D2: Spontaneous affect modeling using dimensionally-labeled data
author_facet Latorre, Avelino Alejandro L.
Solomon, Katrina Ysabel C.
Tensuan, Juan Paolo S.
author_sort Latorre, Avelino Alejandro L.
title SAM-D2: Spontaneous affect modeling using dimensionally-labeled data
title_short SAM-D2: Spontaneous affect modeling using dimensionally-labeled data
title_full SAM-D2: Spontaneous affect modeling using dimensionally-labeled data
title_fullStr SAM-D2: Spontaneous affect modeling using dimensionally-labeled data
title_full_unstemmed SAM-D2: Spontaneous affect modeling using dimensionally-labeled data
title_sort sam-d2: spontaneous affect modeling using dimensionally-labeled data
publisher Animo Repository
publishDate 2013
url https://animorepository.dlsu.edu.ph/etd_bachelors/12167
_version_ 1712577615568568320