Data efficient deep multimodal learning
Multimodal learning, which enables neural networks to process and integrate information from various sensory modalities such as vision, language, and sound, has become increasingly important in applications ranging from affective computing and healthcare to advanced multimodal chatbots. Despite its...
| Main Author | Shen, Meng |
|---|---|
| Other Authors | Deepu Rajan |
| Format | Thesis-Doctor of Philosophy |
| Language | English |
| Published | Nanyang Technological University, 2025 |
| Online Access | https://hdl.handle.net/10356/182346 |
| Institution | Nanyang Technological University |
Similar Items
- Towards robust and efficient multimodal representation learning and fusion
  by: Guo, Xiaobao
  Published: (2025)
- Deep multimodal learning for affective analysis and retrieval
  by: PANG, Lei, et al.
  Published: (2015)
- Cornac: A comparative framework for multimodal recommender systems
  by: SALAH, Aghiles, et al.
  Published: (2020)
- Deep DeePC: data-enabled predictive control with low or no online optimization using deep learning
  by: Zhang, Xuewen, et al.
  Published: (2025)
- Large multimodal models for visual reasoning
  by: Duong, Ngoc Yen
  Published: (2024)