VistaNet: Visual Aspect Attention Network for multimodal sentiment analysis
Detecting the sentiment expressed by a document is a key task for many applications, e.g., modeling user preferences, monitoring consumer behaviors, or assessing product quality. Traditionally, sentiment analysis has relied primarily on textual content. Fueled by the rise of mobile phones that are...
| Main Authors: | TRUONG, Quoc Tuan; LAUW, Hady Wirawan |
|---|---|
| Format: | text |
| Language: | English |
| Published: | Institutional Knowledge at Singapore Management University, 2019 |
| Online Access: | https://ink.library.smu.edu.sg/sis_research/4700 https://ink.library.smu.edu.sg/context/sis_research/article/5703/viewcontent/aaai19a.pdf |
| Institution: | Singapore Management University |
Similar Items
- Entity-sensitive attention and fusion network for entity-level multimodal sentiment classification
  by: YU, Jianfei, et al.
  Published: (2020)
- A novel context-aware multimodal framework for Persian sentiment analysis
  by: Dashtipour, Kia, et al.
  Published: (2022)
- M2Lens: Visualizing and explaining multimodal models for sentiment analysis
  by: WANG, Xingbo, et al.
  Published: (2022)
- Multimodal sentiment analysis using hierarchical fusion with context modeling
  by: Majumder, Navonil, et al.
  Published: (2020)
- Modeling sentiments and preferences from multimodal data
  by: TRUONG, Quoc Tuan
  Published: (2022)