Entity-sensitive attention and fusion network for entity-level multimodal sentiment classification
Entity-level (also known as target-dependent) sentiment analysis of social media posts has recently attracted increasing attention, and its goal is to predict the sentiment orientations over individual target entities mentioned in users' posts. Most existing approaches to this task primarily rely on the textual content, but fail to consider other important data sources (e.g., images, videos, and user profiles) that can potentially enhance these text-based approaches. Motivated by this observation, we study entity-level multimodal sentiment classification in this article, and aim to explore the usefulness of images for entity-level sentiment detection in social media posts. Specifically, we propose an Entity-Sensitive Attention and Fusion Network (ESAFN) for this task. First, to capture the intra-modality dynamics, ESAFN leverages an effective attention mechanism to generate entity-sensitive textual representations, which are then aggregated with a textual fusion layer. Next, ESAFN learns an entity-sensitive visual representation with an entity-oriented visual attention mechanism, followed by a gated mechanism that filters out noisy visual context. Moreover, to capture the inter-modality dynamics, ESAFN further fuses the textual and visual representations with a bilinear interaction layer. To evaluate the effectiveness of ESAFN, we manually annotate the sentiment orientation of each given entity in two recently released multimodal NER datasets, and show that ESAFN significantly outperforms several highly competitive unimodal and multimodal methods.
Main Authors: YU, Jianfei; JIANG, Jing
Format: text (application/pdf)
Language: English
Published: Institutional Knowledge at Singapore Management University, 2020
DOI: 10.1109/TASLP.2019.2957872
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Collection: Research Collection School Of Computing and Information Systems
Subjects: fine-grained sentiment analysis; multimodal sentiment analysis; natural language processing; neural networks; social media analysis; Databases and Information Systems; Numerical Analysis and Scientific Computing
Online Access: https://ink.library.smu.edu.sg/sis_research/5504
https://ink.library.smu.edu.sg/context/sis_research/article/6507/viewcontent/TASLP.2019.2957872.pdf
Institution: Singapore Management University
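The abstract describes ESAFN's fusion design only at a high level. As a rough illustration of the two inter-modality components it names, a gated mechanism over the visual context and a bilinear text-image interaction layer, the sketch below shows one plausible wiring in PyTorch. All names (`GatedBilinearFusion`), dimensions, and the exact gating formulation are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class GatedBilinearFusion(nn.Module):
    """Illustrative sketch: gate out noisy visual context conditioned on the
    entity-sensitive textual representation, then fuse the two modalities
    with a bilinear interaction layer. Dimensions and wiring are assumed."""

    def __init__(self, text_dim: int, vis_dim: int, fused_dim: int):
        super().__init__()
        # Gate conditioned on both modalities; values near 0 suppress
        # visual features that are irrelevant to the target entity.
        self.gate = nn.Sequential(
            nn.Linear(text_dim + vis_dim, vis_dim),
            nn.Sigmoid(),
        )
        # Bilinear layer capturing multiplicative text x image interactions.
        self.bilinear = nn.Bilinear(text_dim, vis_dim, fused_dim)

    def forward(self, h_text: torch.Tensor, h_vis: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([h_text, h_vis], dim=-1))  # (batch, vis_dim)
        h_vis_gated = g * h_vis                            # element-wise gating
        return torch.tanh(self.bilinear(h_text, h_vis_gated))

# Toy usage with random features standing in for the encoders' outputs.
h_text = torch.randn(4, 256)  # entity-sensitive textual representation
h_vis = torch.randn(4, 512)   # entity-oriented attended visual features
fusion = GatedBilinearFusion(text_dim=256, vis_dim=512, fused_dim=128)
logits = nn.Linear(128, 3)(fusion(h_text, h_vis))  # 3-way sentiment classes
print(logits.shape)  # torch.Size([4, 3])
```

A bilinear layer, unlike plain concatenation, scores every pairwise product of textual and visual feature dimensions, which is one common way to realize the "inter-modality dynamics" the abstract refers to; whether ESAFN uses exactly this form should be checked against the paper itself.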