A novel context-aware multimodal framework for Persian sentiment analysis
Most recent work on sentiment analysis has exploited the text modality. However, the millions of hours of video posted on social media platforms every day hold vital unstructured information that can be exploited to gauge public perception more effectively. Multimodal sentiment analysis offers an innovative solution for computationally understanding and harvesting sentiments from videos by contextually exploiting audio, visual and textual cues. In this paper, we first present a first-of-its-kind Persian multimodal dataset comprising more than 800 utterances, as a benchmark resource for researchers to evaluate multimodal sentiment analysis approaches in the Persian language. Second, we present a novel context-aware multimodal sentiment analysis framework that simultaneously exploits acoustic, visual and textual cues to determine the expressed sentiment more accurately. We employ both decision-level (late) and feature-level (early) fusion methods to integrate affective cross-modal information. Experimental results demonstrate that the contextual integration of multimodal (textual, acoustic and visual) features delivers better performance (91.39%) than unimodal features (89.24%).
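The abstract names the two fusion strategies only at a high level. As a rough illustration of the distinction, the sketch below contrasts feature-level (early) fusion, which concatenates per-modality features before a single classifier, with decision-level (late) fusion, which averages per-modality classifier outputs. This is a minimal sketch, not the paper's implementation: the feature dimensions, probabilities and equal weights are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-utterance feature vectors for the three modalities
# (dimensions are illustrative, not taken from the paper).
text_feat = rng.standard_normal(300)   # e.g. averaged word embeddings
audio_feat = rng.standard_normal(74)   # e.g. prosodic/acoustic descriptors
visual_feat = rng.standard_normal(35)  # e.g. facial-expression descriptors

def feature_level_fusion(*feats):
    """Early fusion: concatenate modality features into one joint
    vector on which a single classifier would be trained."""
    return np.concatenate(feats)

def decision_level_fusion(probs, weights=None):
    """Late fusion: combine per-modality class-probability vectors
    (one classifier per modality) by weighted averaging."""
    probs = np.stack(probs)                 # (n_modalities, n_classes)
    if weights is None:
        weights = np.full(len(probs), 1.0 / len(probs))
    return weights @ probs                  # fused (n_classes,) distribution

# Early fusion: one joint representation for a single classifier.
joint = feature_level_fusion(text_feat, audio_feat, visual_feat)
print(joint.shape)  # (409,)

# Late fusion: each modality's classifier outputs [P(negative), P(positive)].
p_text, p_audio, p_visual = [0.2, 0.8], [0.4, 0.6], [0.35, 0.65]
fused = decision_level_fusion([p_text, p_audio, p_visual])
print(fused, "->", ["negative", "positive"][int(fused.argmax())])
```

Early fusion lets a classifier model cross-modal interactions directly but requires aligned features; late fusion keeps each modality's model independent and only merges their decisions, which is more robust when one modality is noisy or missing.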
Main Authors: Dashtipour, Kia; Gogate, Mandar; Cambria, Erik; Hussain, Amir
Other Authors: School of Computer Science and Engineering
Format: Journal Article
Language: English
Published: 2022
Subjects: Engineering::Computer science and engineering; Multimodal Sentiment Analysis; Persian Sentiment Analysis
Online Access: https://hdl.handle.net/10356/160779
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-160779
Collection: DR-NTU (NTU Library)
Citation: Dashtipour, K., Gogate, M., Cambria, E. & Hussain, A. (2021). A novel context-aware multimodal framework for Persian sentiment analysis. Neurocomputing, 457, 377-388. https://dx.doi.org/10.1016/j.neucom.2021.02.020
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2021.02.020
Scopus ID: 2-s2.0-85107619491
Rights: © 2021 Elsevier B.V. All rights reserved.