Multimodal sentiment analysis using hierarchical fusion with context modeling
Multimodal sentiment analysis is a very actively growing field of research. A promising area of opportunity in this field is to improve the multimodal fusion mechanism. We present a novel feature fusion strategy that proceeds in a hierarchical fashion, first fusing the modalities two in two and only then fusing all three modalities. On multimodal sentiment analysis of individual utterances, our strategy outperforms conventional concatenation of features by 1%, which amounts to 5% reduction in error rate. On utterance-level multimodal sentiment analysis of multi-utterance video clips, for which current state-of-the-art techniques incorporate contextual information from other utterances of the same clip, our hierarchical fusion gives up to 2.4% (almost 10% error rate reduction) over currently used concatenation. The implementation of our method is publicly available in the form of open-source code.
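The fusion scheme described in the abstract, fusing the modalities two at a time before combining all three, can be illustrated with a minimal sketch. The snippet below is not the authors' released implementation; the feature dimensions, the `fuse` helper, and the randomly initialised weights are assumptions made purely to show the two-level structure.

```python
# Minimal sketch of hierarchical (pairwise-then-trimodal) fusion.
# Not the authors' code: dimensions, weights, and the fuse() helper
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def fuse(a: np.ndarray, b: np.ndarray, out_dim: int) -> np.ndarray:
    """Fuse two feature vectors with a randomly initialised dense map + tanh."""
    w = rng.standard_normal((out_dim, a.size + b.size)) * 0.01
    return np.tanh(w @ np.concatenate([a, b]))

# Hypothetical utterance-level unimodal features.
text = rng.standard_normal(100)   # e.g. output of a text encoder
audio = rng.standard_normal(73)   # e.g. acoustic features
video = rng.standard_normal(100)  # e.g. visual features

# Level 1: fuse the modalities two at a time (bimodal vectors).
ta = fuse(text, audio, 50)
tv = fuse(text, video, 50)
av = fuse(audio, video, 50)

# Level 2: fuse the three bimodal vectors into one trimodal vector,
# which would then feed a sentiment classifier (and, for multi-utterance
# clips, a context model over the utterance sequence).
w3 = rng.standard_normal((50, ta.size + tv.size + av.size)) * 0.01
trimodal = np.tanh(w3 @ np.concatenate([ta, tv, av]))
print(trimodal.shape)  # (50,)
```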
Main Authors: | Majumder, Navonil; Hazarika, Devamanyu; Gelbukh, Alexander; Cambria, Erik; Poria, Soujanya |
---|---|
Other Authors: | School of Computer Science and Engineering |
Format: | Article |
Language: | English |
Published: | 2020 |
Subjects: | Engineering::Computer science and engineering; Multimodal Fusion; Sentiment Analysis |
Online Access: | https://hdl.handle.net/10356/139583 |
Institution: | Nanyang Technological University |
id | sg-ntu-dr.10356-139583 |
---|---|
record_format | dspace |
spelling | sg-ntu-dr.10356-139583 (2020-05-20T07:01:54Z). Multimodal sentiment analysis using hierarchical fusion with context modeling. Majumder, Navonil; Hazarika, Devamanyu; Gelbukh, Alexander; Cambria, Erik; Poria, Soujanya. School of Computer Science and Engineering. Subjects: Engineering::Computer science and engineering; Multimodal Fusion; Sentiment Analysis. Abstract: Multimodal sentiment analysis is a very actively growing field of research. A promising area of opportunity in this field is to improve the multimodal fusion mechanism. We present a novel feature fusion strategy that proceeds in a hierarchical fashion, first fusing the modalities two in two and only then fusing all three modalities. On multimodal sentiment analysis of individual utterances, our strategy outperforms conventional concatenation of features by 1%, which amounts to 5% reduction in error rate. On utterance-level multimodal sentiment analysis of multi-utterance video clips, for which current state-of-the-art techniques incorporate contextual information from other utterances of the same clip, our hierarchical fusion gives up to 2.4% (almost 10% error rate reduction) over currently used concatenation. The implementation of our method is publicly available in the form of open-source code. Dates: 2020-05-20T07:01:54Z; 2020-05-20T07:01:54Z; 2018. Type: Journal Article. Citation: Majumder, N., Hazarika, D., Gelbukh, A., Cambria, E., & Poria, S. (2018). Multimodal sentiment analysis using hierarchical fusion with context modeling. Knowledge-Based Systems, 161, 124-133. doi:10.1016/j.knosys.2018.07.041. ISSN: 0950-7051. URL: https://hdl.handle.net/10356/139583. DOI: 10.1016/j.knosys.2018.07.041. Scopus: 2-s2.0-85050999093. Volume 161, pages 124-133. Language: en. Source: Knowledge-Based Systems. © 2018 Elsevier B.V. All rights reserved. |
institution | Nanyang Technological University |
building | NTU Library |
country | Singapore |
collection | DR-NTU |
language | English |
topic | Engineering::Computer science and engineering; Multimodal Fusion; Sentiment Analysis |
description | Multimodal sentiment analysis is a very actively growing field of research. A promising area of opportunity in this field is to improve the multimodal fusion mechanism. We present a novel feature fusion strategy that proceeds in a hierarchical fashion, first fusing the modalities two in two and only then fusing all three modalities. On multimodal sentiment analysis of individual utterances, our strategy outperforms conventional concatenation of features by 1%, which amounts to 5% reduction in error rate. On utterance-level multimodal sentiment analysis of multi-utterance video clips, for which current state-of-the-art techniques incorporate contextual information from other utterances of the same clip, our hierarchical fusion gives up to 2.4% (almost 10% error rate reduction) over currently used concatenation. The implementation of our method is publicly available in the form of open-source code. |
author2 | School of Computer Science and Engineering |
format | Article |
author | Majumder, Navonil; Hazarika, Devamanyu; Gelbukh, Alexander; Cambria, Erik; Poria, Soujanya |
author_sort | Majumder, Navonil |
title | Multimodal sentiment analysis using hierarchical fusion with context modeling |
publishDate | 2020 |
url | https://hdl.handle.net/10356/139583 |
_version_ | 1681059131080835072 |