Fusing pairwise modalities for emotion recognition in conversations
Multimodal fusion has the potential to significantly enhance model performance in the domain of Emotion Recognition in Conversations (ERC) by efficiently integrating information from diverse modalities. However, existing methods face challenges as they directly integrate information from different m...
Main Authors: Fan, Chunxiao; Lin, Jie; Mao, Rui; Cambria, Erik
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2024
Subjects:
Online Access: https://hdl.handle.net/10356/175811
Similar Items
- Revisiting disentanglement and fusion on modality and context in conversational multimodal emotion recognition
  by: LI, Bobo, et al.
  Published: (2023)
- Fusing Heterogeneous Data for Alzheimer's Disease Classification
  by: Pillai, P. S., et al.
  Published: (2015)
- Fusing topology contexts and logical rules in language models for knowledge graph completion
  by: Lin, Qika, et al.
  Published: (2023)
- Towards robust and efficient multimodal representation learning and fusion
  by: Guo, Xiaobao
  Published: (2025)
- Feature fusion with covariance matrix regularization in face recognition
  by: Lu, Ze, et al.
  Published: (2018)