Deep multimodal learning for affective analysis and retrieval
Social media has become a convenient platform for voicing opinions by posting messages, ranging from short text tweets to uploaded media files, or any combination of the two. Understanding the perceived emotions inherent in this user-generated content (UGC) could bring light to...
Main Authors: PANG, Lei; ZHU, Shiai; NGO, Chong-wah
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2015
Online Access: https://ink.library.smu.edu.sg/sis_research/6356
https://ink.library.smu.edu.sg/context/sis_research/article/7359/viewcontent/deep_multimodal_emotion_pl.pdf
Institution: Singapore Management University
Similar Items
- Multimodal learning with deep Boltzmann Machine for emotion prediction in user generated videos
  by: PANG, Lei, et al.
  Published: (2015)
- Cross-modal recipe retrieval with stacked attention model
  by: CHEN, Jing-Jing, et al.
  Published: (2018)
- Cross-modal recipe retrieval: How to cook this dish?
  by: CHEN, Jingjing, et al.
  Published: (2017)
- Efficient cross-modal video retrieval with meta-optimized frames
  by: HAN, Ning, et al.
  Published: (2024)
- Deep understanding of cooking procedure for cross-modal recipe retrieval
  by: CHEN, Jingjing, et al.
  Published: (2018)