Cross-modal credibility modelling for EEG-based multimodal emotion recognition
Objective. The study of emotion recognition through electroencephalography (EEG) has garnered significant attention recently. Integrating EEG with other peripheral physiological signals may greatly enhance performance in emotion recognition. Nonetheless, existing approaches still suffer from two pre...
Main Authors: Zhang, Yuzhe; Liu, Huan; Wang, Di; Zhang, Dalin; Lou, Tianyu; Zheng, Qinghua; Quek, Chai
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/179031
Institution: Nanyang Technological University
Similar Items
- Revisiting disentanglement and fusion on modality and context in conversational multimodal emotion recognition
  by: LI, Bobo, et al.
  Published: (2023)
- Visual-to-EEG cross-modal knowledge distillation for continuous emotion recognition
  by: Zhang, Su, et al.
  Published: (2022)
- A multimodal emotion corpus for Filipino and its uses
  by: Cu, Jocelynn W., et al.
  Published: (2013)
- Comprehensive analysis of feature extraction methods for emotion recognition from multichannel EEG recordings
  by: Yuvaraj, Rajamanickam, et al.
  Published: (2023)
- Sentic blending: Scalable multimodal fusion for the continuous interpretation of semantics and sentics
  by: Cambria, E., et al.
  Published: (2014)