Multimodal audio-visual emotion detection
Audio and visual utterances in video are temporally and semantically dependent on each other, so modeling temporal and contextual characteristics plays a vital role in understanding conflicting or supporting emotional cues in audio-visual emotion recognition (AVER). We introduced a novel tempor...
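As a purely illustrative sketch of the kind of multimodal temporal modeling the abstract refers to (not the thesis's proposed architecture), the snippet below encodes each modality with its own recurrent encoder and fuses the final states for emotion classification; all feature dimensions, the GRU encoders, and the late-fusion choice are assumptions made for the example.

```python
# Illustrative sketch only: a generic late-fusion recurrent model for
# audio-visual emotion recognition (AVER). Dimensions and fusion strategy
# are assumptions, not the method described in this thesis.
import torch
import torch.nn as nn

class SimpleAVERModel(nn.Module):
    def __init__(self, audio_dim=40, visual_dim=512, hidden_dim=128, num_emotions=6):
        super().__init__()
        # Separate temporal encoders capture within-modality context
        # over each utterance.
        self.audio_rnn = nn.GRU(audio_dim, hidden_dim, batch_first=True)
        self.visual_rnn = nn.GRU(visual_dim, hidden_dim, batch_first=True)
        # Late fusion: concatenate the final hidden states of both streams,
        # letting the classifier weigh supporting or conflicting cues.
        self.classifier = nn.Linear(2 * hidden_dim, num_emotions)

    def forward(self, audio_seq, visual_seq):
        # audio_seq: (batch, T_audio, audio_dim); visual_seq: (batch, T_visual, visual_dim)
        _, a_h = self.audio_rnn(audio_seq)
        _, v_h = self.visual_rnn(visual_seq)
        fused = torch.cat([a_h[-1], v_h[-1]], dim=-1)
        return self.classifier(fused)

if __name__ == "__main__":
    model = SimpleAVERModel()
    audio = torch.randn(2, 100, 40)    # e.g. 100 frames of acoustic features
    visual = torch.randn(2, 30, 512)   # e.g. 30 frames of face embeddings
    print(model(audio, visual).shape)  # torch.Size([2, 6])
```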
Main Author: Chaudhary, Nitesh Kumar
Other Authors: Jagath C Rajapakse
Format: Thesis-Master by Research
Language: English
Published: Nanyang Technological University, 2021
Online Access: https://hdl.handle.net/10356/153490
Institution: Nanyang Technological University
Similar Items
- Multimodal continuous emotion analysis
  by: Zhang, Su
  Published: (2023)
- An iPhone application: audio emotion recognition
  by: Quek, Wei Yang
  Published: (2015)
- Audio-Visual Integration in Multimodal Communication
  by: Chen T., et al.
  Published: (2018)
- Investigation of multimodality sensors for real-time emotion assessment
  by: Chua, Yong Lun
  Published: (2016)
- Audio-visual source separation under visual-agnostic condition
  by: He, Yixuan
  Published: (2023)