Fusing pairwise modalities for emotion recognition in conversations
Multimodal fusion has the potential to significantly enhance model performance in the domain of Emotion Recognition in Conversations (ERC) by efficiently integrating information from diverse modalities. However, existing methods face challenges as they directly integrate information from different m...
Main Authors: Fan, Chunxiao; Lin, Jie; Mao, Rui; Cambria, Erik
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/175811
Institution: Nanyang Technological University
Language: | English |
Similar Items
- Revisiting disentanglement and fusion on modality and context in conversational multimodal emotion recognition
  by: LI, Bobo, et al.
  Published: (2023)
- Fusing Heterogeneous Data for Alzheimer's Disease Classification
  by: Pillai, P. S., et al.
  Published: (2015)
- Fusing topology contexts and logical rules in language models for knowledge graph completion
  by: Lin, Qika, et al.
  Published: (2023)
- Feature fusion with covariance matrix regularization in face recognition
  by: Lu, Ze, et al.
  Published: (2018)
- Multimodal fusion for multimedia analysis: A survey
  by: Atrey, P.K., et al.
  Published: (2013)