Exploring cross-modality utilization in recommender systems
Multimodal recommender systems alleviate the sparsity of historical user-item interactions. They are commonly categorized by the type of auxiliary data (modality) they leverage alongside preference data: user networks (social), user/item texts (textual), or item images (visual)...
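As a hedged illustration of the idea in the abstract (not taken from the paper itself), the sketch below compares a preference-only recommender against a text-aware one using Cornac, the authors' multimodal recommendation framework listed under Similar Items. The dataset, model choices, and hyperparameters are illustrative assumptions, not the paper's experimental setup.

```python
# Sketch: attach an item-text modality and compare a preference-only model
# (BPR) with a text-aware model (CDL) in Cornac. Assumes cornac is installed;
# CDL additionally requires TensorFlow. Hyperparameters are placeholders.
import cornac
from cornac.data import Reader, TextModality
from cornac.data.text import BaseTokenizer
from cornac.datasets import citeulike
from cornac.eval_methods import RatioSplit

# Item documents (textual modality) and implicit user-item feedback.
docs, item_ids = citeulike.load_text()
feedback = citeulike.load_feedback(reader=Reader(item_set=item_ids))

# Wrap raw text as an auxiliary modality attached to items.
item_text = TextModality(
    corpus=docs,
    ids=item_ids,
    tokenizer=BaseTokenizer(stop_words="english"),
    max_vocab=8000,
    max_doc_freq=0.5,
)

# Hold out 20% of interactions; pass the text modality to the split so
# text-aware models can access it during training.
ratio_split = RatioSplit(
    data=feedback,
    test_size=0.2,
    exclude_unknowns=True,
    item_text=item_text,
    rating_threshold=0.5,
    seed=123,
    verbose=True,
)

# Preference-only baseline vs. a model that also uses the item-text modality.
bpr = cornac.models.BPR(k=50, max_iter=200, learning_rate=0.01, lambda_reg=0.01, seed=123)
cdl = cornac.models.CDL(k=50, autoencoder_structure=[200], max_iter=30)

cornac.Experiment(
    eval_method=ratio_split,
    models=[bpr, cdl],
    metrics=[cornac.metrics.Recall(k=50), cornac.metrics.NDCG(k=50)],
).run()
```

Comparing the two models' ranking metrics gives a rough sense of how much the textual modality contributes beyond the interaction data alone, which is the kind of cross-modality utilization question the paper explores.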
Main Authors: TRUONG, Quoc Tuan; SALAH, Aghiles; TRAN, Thanh-Binh; GUO, Jingyao; LAUW, Hady W.
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
Online Access: https://ink.library.smu.edu.sg/sis_research/5950 ; https://ink.library.smu.edu.sg/context/sis_research/article/6953/viewcontent/ic21.pdf
Institution: Singapore Management University
Similar Items
- Multi-modal recommender systems: Hands-on exploration
  by: TRUONG, Quoc Tuan, et al.
  Published: (2021)
- Cornac: A comparative framework for multimodal recommender systems
  by: SALAH, Aghiles, et al.
  Published: (2020)
- Towards source-aligned variational models for cross-domain recommendation
  by: SALAH, Aghiles, et al.
  Published: (2021)
- Collaborative cross-modal fusion with Large Language Model for recommendation
  by: LIU, Zhongzhou, et al.
  Published: (2024)