Multimodal emotion recognition system for spontaneous vocal and facial signals: SMERFS
Human-computer interaction is moving toward giving computers the ability to adapt and give feedback according to a user's emotion. Initial research on multimodal emotion recognition shows that combining vocal and facial signals performs better than using physiological signal...
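As a rough illustration of how such a system might combine modalities, the sketch below shows decision-level (late) fusion, one common approach in multimodal emotion recognition: each modality produces its own class-probability vector, and the vectors are merged by a weighted average. The emotion labels, weights, and probability values are illustrative assumptions, not details taken from the thesis itself.

```python
# Illustrative decision-level fusion of a vocal and a facial emotion
# classifier. All labels and numbers here are made-up examples, not the
# thesis's actual classes or results.

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def fuse(vocal_probs, facial_probs, w_vocal=0.5):
    """Weighted average of per-modality probability vectors, renormalized."""
    fused = [w_vocal * v + (1.0 - w_vocal) * f
             for v, f in zip(vocal_probs, facial_probs)]
    total = sum(fused)
    return [p / total for p in fused]

def predict(vocal_probs, facial_probs, w_vocal=0.5):
    """Return the emotion label with the highest fused probability."""
    fused = fuse(vocal_probs, facial_probs, w_vocal)
    return EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]

# Example: the voice mildly suggests "sad", the face strongly suggests "happy".
vocal = [0.30, 0.40, 0.10, 0.20]
facial = [0.70, 0.10, 0.10, 0.10]
print(predict(vocal, facial))  # -> happy
```

Late fusion keeps the two recognizers independent, which makes it easy to handle a missing modality by falling back to the remaining one; feature-level (early) fusion is the main alternative.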
Main Authors: Dy, Marc Lanze Ivan C.; Espinoza, Ivan Vener L.; Go, Paul Patrick V.; Mendez, Charles Martin M.
Format: text
Language: English
Published: Animo Repository, 2010
Online Access: https://animorepository.dlsu.edu.ph/etd_bachelors/14653
Institution: De La Salle University
Similar Items
- Low-shot Object Detection via Classification Refinement
  by: Li, Yiting, et al.
  Published: (2020)
- Audiovisual affect recognition in spontaneous Filipino laughter
  by: Galvan, Christopher R., et al.
  Published: (2011)
- Cross-modal credibility modelling for EEG-based multimodal emotion recognition
  by: Zhang, Yuzhe, et al.
  Published: (2024)
- Investigating biological feature detectors in simple pattern recognition towards complex saliency prediction tasks
  by: Cordel, Macario O., II
  Published: (2018)
- Multi-Source Domain Adaptation for Visual Sentiment Classification
  by: Chuang Lin, et al.
  Published: (2020)