Adapting BERT for target-oriented multimodal sentiment classification
As an important task in Sentiment Analysis, Target-oriented Sentiment Classification (TSC) aims to identify the sentiment polarity toward each opinion target in a sentence. However, existing approaches to this task rely primarily on textual content while ignoring other increasingly popular multimodal data sources (e.g., images), which could enhance the robustness of these text-based models. Motivated by this observation and inspired by the recently proposed BERT architecture, we study Target-oriented Multimodal Sentiment Classification (TMSC) and propose a multimodal BERT architecture. To model intra-modality dynamics, we first apply BERT to obtain target-sensitive textual representations. We then borrow the idea from self-attention and design a target attention mechanism that performs target-image matching to derive target-sensitive visual representations. To model inter-modality dynamics, we further propose to stack a set of self-attention layers to capture multimodal interactions. Experimental results show that our model can outperform several highly competitive approaches for TSC and TMSC.
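The target attention described above can be sketched as scaled dot-product attention in which the target's textual representation queries a set of image-region features. This is a minimal illustrative sketch, not the paper's implementation: the dimensions (768-d vectors, 49 regions, as in a 7x7 ResNet feature map) and the random stand-in inputs are assumptions for demonstration only.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def target_attention(target_repr, image_regions):
    """Derive a target-sensitive visual vector: the target representation
    (query) attends over image-region features (keys/values)."""
    d = target_repr.shape[-1]
    scores = image_regions @ target_repr / np.sqrt(d)  # (num_regions,)
    weights = softmax(scores)                          # attention over regions
    return weights @ image_regions                     # weighted sum, shape (d,)

rng = np.random.default_rng(0)
target = rng.standard_normal(768)         # stand-in for a BERT target vector
regions = rng.standard_normal((49, 768))  # stand-in for 7x7 region features
visual = target_attention(target, regions)
print(visual.shape)  # (768,)
```

In the paper's full model, the resulting visual vector would then be combined with the textual representations and passed through stacked self-attention layers to capture inter-modality interactions.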
Saved in: Institutional Knowledge at Singapore Management University
Main Authors: YU, Jianfei; JIANG, Jing
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2019
Subjects: Natural Language Processing; Sentiment Analysis and Text Mining; Artificial Intelligence and Robotics; Databases and Information Systems; Numerical Analysis and Scientific Computing
Online Access: https://ink.library.smu.edu.sg/sis_research/4441
https://ink.library.smu.edu.sg/context/sis_research/article/5444/viewcontent/9._Adapting_BERT_for_Target_Oriented_Multimodal_Sentiment_Classification__IJCAI2019_.pdf
Institution: Singapore Management University
Language: English
id: sg-smu-ink.sis_research-5444
record_format: dspace
spelling: sg-smu-ink.sis_research-5444 2020-04-08T05:49:20Z
Adapting BERT for target-oriented multimodal sentiment classification
YU, Jianfei; JIANG, Jing
2019-08-01T07:00:00Z; text; application/pdf
https://ink.library.smu.edu.sg/sis_research/4441
info:doi/10.24963/ijcai.2019/751
https://ink.library.smu.edu.sg/context/sis_research/article/5444/viewcontent/9._Adapting_BERT_for_Target_Oriented_Multimodal_Sentiment_Classification__IJCAI2019_.pdf
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Research Collection School Of Computing and Information Systems; eng
Institutional Knowledge at Singapore Management University
institution: Singapore Management University
building: SMU Libraries
continent: Asia
country: Singapore
content_provider: SMU Libraries
collection: InK@SMU
language: English
topic: Natural Language Processing; Sentiment Analysis and Text Mining; Artificial Intelligence and Robotics; Databases and Information Systems; Numerical Analysis and Scientific Computing
format: text
author: YU, Jianfei; JIANG, Jing
author_sort: YU, Jianfei
title: Adapting BERT for target-oriented multimodal sentiment classification
publisher: Institutional Knowledge at Singapore Management University
publishDate: 2019
url: https://ink.library.smu.edu.sg/sis_research/4441
https://ink.library.smu.edu.sg/context/sis_research/article/5444/viewcontent/9._Adapting_BERT_for_Target_Oriented_Multimodal_Sentiment_Classification__IJCAI2019_.pdf
_version_: 1770574838796124160