Fuzzy commonsense reasoning for multimodal sentiment analysis


Bibliographic Details
Main Authors: Chaturvedi, Iti, Satapathy, Ranjan, Cavallari, Sandro, Cambria, Erik
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2021
Subjects:
Online Access:https://hdl.handle.net/10356/151519
Institution: Nanyang Technological University
Description
Summary: The majority of user-generated content posted online takes the form of text, images, and videos, but it also includes physiological signals, such as those captured in games. AffectiveSpace is a vector space of affective commonsense that is available for English text but not for other languages, nor for other modalities such as electrocardiogram signals. We overcome this limitation by using deep learning to extract features from each modality and then projecting them into a common AffectiveSpace that has been clustered into different emotions. Because, in the real world, individuals tend to have partial or mixed sentiments about an opinion target, we use a fuzzy logic classifier to predict the degree of a particular emotion in AffectiveSpace. The combined model of deep convolutional neural networks and fuzzy logic is termed the Convolutional Fuzzy Sentiment Classifier. Lastly, because the computational complexity of a fuzzy classifier is exponential in the number of features, we project the features to a four-dimensional emotion space to speed up classification.
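
The summary describes a pipeline: per-modality deep feature extraction, projection into a shared low-dimensional emotion space, and fuzzy membership scoring so that a sample can belong to several emotions to different degrees. The record does not include the authors' implementation, so the sketch below is only illustrative: the Gaussian membership functions, the random projections standing in for the trained CNN and learned projection, the four emotion labels, and the cluster centroids are all assumptions made for exposition.

```python
# Illustrative sketch only. All dimensions, emotion labels, centroids, and the
# Gaussian membership choice are assumptions; they are NOT the paper's
# Convolutional Fuzzy Sentiment Classifier.
import numpy as np

RNG = np.random.default_rng(0)

def cnn_features(x, out_dim=128):
    """Stand-in for a per-modality deep CNN feature extractor (assumed)."""
    # A fixed random projection plays the role of learned convolutional features.
    w = RNG.standard_normal((x.shape[-1], out_dim)) / np.sqrt(x.shape[-1])
    return np.tanh(x @ w)

def project_to_emotion_space(features, dim=4):
    """Project high-dimensional features to a 4-D emotion space,
    mirroring the dimensionality reduction described in the abstract."""
    w = RNG.standard_normal((features.shape[-1], dim)) / np.sqrt(features.shape[-1])
    return features @ w

def fuzzy_memberships(point, centroids, sigma=1.0):
    """Gaussian fuzzy membership of a 4-D point in each emotion cluster.
    Degrees lie in [0, 1] and need not sum to 1, allowing mixed sentiment."""
    d2 = np.sum((centroids - point) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Toy usage: a "text" sample and an "ECG" sample share the same pipeline,
# each ending up as a point in the common 4-D emotion space.
emotions = ["joy", "sadness", "anger", "fear"]        # assumed emotion clusters
centroids = RNG.standard_normal((len(emotions), 4))   # assumed cluster centres

for name, raw in {"text": RNG.standard_normal(300),
                  "ecg": RNG.standard_normal(512)}.items():
    z = project_to_emotion_space(cnn_features(raw[None, :]))[0]
    degrees = fuzzy_memberships(z, centroids)
    print(name, dict(zip(emotions, np.round(degrees, 3))))
```

Keeping the fuzzy rules in the reduced four-dimensional space, rather than over the raw CNN features, is what keeps the classifier tractable, since the number of fuzzy rules grows exponentially with the number of input features.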