Multimodal learning with deep Boltzmann machine for emotion prediction in user-generated videos
Main Authors:
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2015
Subjects:
Online Access: https://ink.library.smu.edu.sg/sis_research/6502
https://ink.library.smu.edu.sg/context/sis_research/article/7505/viewcontent/2671188.2749400.pdf
Institution: Singapore Management University
Summary: Detecting emotions from user-generated videos, such as "anger" and "sadness", has attracted widespread interest recently. The problem is challenging because effectively representing video data with multi-view information (e.g., audio, video, or text) is not trivial. In contrast to existing works that extract features from each modality (view) separately, followed by early or late fusion, we propose to learn a joint density model over the space of multimodal inputs (including visual, auditory, and textual modalities) with a Deep Boltzmann Machine (DBM). The model is trained directly on user-generated Web videos without any labeling effort. More importantly, the deep architecture opens up the possibility of discovering the highly non-linear relationships that exist between low-level features across different modalities. The experimental results show that the DBM model learns a joint representation complementary to the hand-crafted visual and auditory features, leading to a 7.7% improvement in classification accuracy on the recently released VideoEmotion dataset.
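To make the core idea of the abstract concrete, here is a minimal sketch of unsupervised joint representation learning across modalities with a Boltzmann machine. It uses a shallow multimodal restricted Boltzmann machine (two modality-specific visible layers sharing one hidden layer) trained with one-step contrastive divergence, rather than the paper's full deep architecture; all dimensions, hyperparameters, and the random data standing in for real visual/audio descriptors are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MultimodalRBM:
    """Shallow multimodal RBM: visual and audio visible layers share
    one hidden layer, so hidden units capture cross-modal correlations.
    Simplification: features in [0, 1] are treated as Bernoulli visibles;
    a faithful implementation would use Gaussian visible units and stack
    further hidden layers to form a DBM."""

    def __init__(self, n_visual, n_audio, n_hidden, lr=0.05):
        self.Wv = 0.01 * rng.standard_normal((n_visual, n_hidden))
        self.Wa = 0.01 * rng.standard_normal((n_audio, n_hidden))
        self.bv = np.zeros(n_visual)   # visual visible bias
        self.ba = np.zeros(n_audio)    # audio visible bias
        self.bh = np.zeros(n_hidden)   # shared hidden bias
        self.lr = lr

    def hidden_probs(self, v, a):
        # Hidden activations pool evidence from both modalities.
        return sigmoid(v @ self.Wv + a @ self.Wa + self.bh)

    def cd1_step(self, v, a):
        # One step of contrastive divergence (CD-1) on a mini-batch.
        h0 = self.hidden_probs(v, a)
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        # Reconstruct each modality from the shared hidden layer.
        v1 = sigmoid(h_sample @ self.Wv.T + self.bv)
        a1 = sigmoid(h_sample @ self.Wa.T + self.ba)
        h1 = self.hidden_probs(v1, a1)
        n = v.shape[0]
        self.Wv += self.lr * (v.T @ h0 - v1.T @ h1) / n
        self.Wa += self.lr * (a.T @ h0 - a1.T @ h1) / n
        self.bv += self.lr * (v - v1).mean(axis=0)
        self.ba += self.lr * (a - a1).mean(axis=0)
        self.bh += self.lr * (h0 - h1).mean(axis=0)

# Unsupervised training on unlabeled multimodal inputs, mirroring the
# label-free training on Web videos described in the abstract.
visual = rng.random((256, 40))   # placeholder visual features
audio = rng.random((256, 20))    # placeholder auditory features
rbm = MultimodalRBM(n_visual=40, n_audio=20, n_hidden=64)
for _ in range(50):
    rbm.cd1_step(visual, audio)

# The hidden activations serve as the learned joint representation.
joint_repr = rbm.hidden_probs(visual, audio)
print(joint_repr.shape)  # (256, 64)
```

In the setting the abstract describes, such joint hidden activations would be learned from unlabeled user-generated videos and then combined with hand-crafted visual and auditory features as input to an emotion classifier, which is where the reported complementary 7.7% accuracy gain arises.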