Multimodal learning with deep Boltzmann Machine for emotion prediction in user generated videos
Detecting emotions from user-generated videos, such as “anger” and “sadness”, has attracted widespread interest recently. The problem is challenging as effectively representing video data with multi-view information (e.g., audio, video or text) is not trivial. In contrast to the existing works that e...
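As illustrative context for the technique named in the title, here is a minimal Python sketch of a single binary restricted Boltzmann machine trained with one-step contrastive divergence (CD-1), the kind of layer a multimodal deep Boltzmann machine stacks per modality before a shared joint layer. All feature dimensions, hyperparameters, and the two-modality layout below are hypothetical assumptions, not details from the paper.

```python
# Illustrative sketch only: a binary RBM with CD-1 updates. Not the paper's
# implementation; dimensions and hyperparameters are made up for demonstration.
import numpy as np

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr
        self.rng = rng

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def hidden_probs(self, v):
        return self._sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return self._sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        """One contrastive-divergence (CD-1) update on a batch of vectors."""
        h0 = self.hidden_probs(v0)
        h0_sample = (self.rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        batch = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / batch
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

# Hypothetical usage: one RBM per modality, then a joint RBM over the
# concatenated hidden representations (the usual multimodal DBM layout).
if __name__ == "__main__":
    audio = (np.random.rand(128, 60) > 0.5).astype(float)    # fake audio features
    visual = (np.random.rand(128, 100) > 0.5).astype(float)  # fake visual features
    rbm_a, rbm_v = RBM(60, 32), RBM(100, 32)
    for _ in range(20):
        rbm_a.cd1_step(audio)
        rbm_v.cd1_step(visual)
    joint_input = np.hstack([rbm_a.hidden_probs(audio), rbm_v.hidden_probs(visual)])
    joint_rbm = RBM(64, 32)
    for _ in range(20):
        joint_rbm.cd1_step(joint_input)  # shared representation for emotion prediction
```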
| Main Authors: | PANG, Lei; NGO, Chong-wah |
|---|---|
| Format: | text |
| Language: | English |
| Published: | Institutional Knowledge at Singapore Management University, 2015 |
| Subjects: | |
| Online Access: | https://ink.library.smu.edu.sg/sis_research/6502 https://ink.library.smu.edu.sg/context/sis_research/article/7505/viewcontent/2671188.2749400.pdf |
| Institution: | Singapore Management University |
Similar Items
- Deep multimodal learning for affective analysis and retrieval
  by: PANG, Lei, et al.
  Published: (2015)
- Fusion of multimodal embeddings for ad-hoc video search
  by: FRANCIS, Danny, et al.
  Published: (2019)
- Revisiting disentanglement and fusion on modality and context in conversational multimodal emotion recognition
  by: LI, Bobo, et al.
  Published: (2023)
- Predicting Drought Indices in Nakhon Ratchasima Province using a Deep Belief Network with Restricted Boltzmann Machines
  by: Sureeluk, Ma
  Published: (2019)
- Temperature based restricted boltzmann machines
  by: Li, Guoqi, et al.
  Published: (2018)