Improving collaborative filtering with self-supervised GCNs and autoencoder-based multimodal embeddings

Recommender systems play a crucial role in enhancing user experience by delivering personalized suggestions across diverse domains. Effective representation learning is vital in these systems: as various studies have shown, high-quality embeddings are key to accurate recommendations. However, challenges such as data sparsity and the difficulty of retrieving labeled data hinder the performance of traditional approaches. To address these issues, this project proposes a novel methodology utilizing self-supervised Graph Convolutional Networks (GCN) to learn user embeddings that capture hidden preferences from interactions with items. The item embeddings are constructed from multimodal data (including text, numerical, and categorical features) and reduced in dimensionality with an autoencoder. During training, these embeddings are further fine-tuned using a contrastive loss, allowing the model to leverage self-supervised learning techniques. Leveraging the Yelp dataset, our framework synthesizes diverse item features into unified representations, providing deeper insights into item interrelations. User embeddings are adaptively adjusted based on positive interactions, uncovering latent preferences even in the absence of extensive historical data. The integration of contrastive learning effectively differentiates preferred items from less relevant options, enhancing the accuracy of recommendations. Our findings demonstrate the efficacy of this comprehensive approach in addressing the complexities of collaborative filtering and the challenges posed by data sparsity, showcasing its potential for delivering personalized and relevant recommendations across various applications.
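The abstract outlines three technical components: item embeddings built from multimodal features and compressed with an autoencoder, user embeddings propagated with a GCN, and a contrastive objective over observed user-item interactions. The snippet below is a minimal illustrative sketch of the autoencoder and an InfoNCE-style contrastive loss only; it is not the thesis code. It assumes PyTorch, and every name, dimension, and the in-batch-negatives scheme are illustrative assumptions, with a random tensor standing in for the GCN-propagated user embeddings.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ItemAutoencoder(nn.Module):
    # Compresses concatenated text/numerical/categorical item features into a
    # low-dimensional embedding and reconstructs them for training.
    def __init__(self, in_dim, emb_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, emb_dim))
        self.decoder = nn.Sequential(nn.Linear(emb_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)            # compact item embedding
        return z, self.decoder(z)      # embedding plus reconstruction

def contrastive_loss(user_emb, item_emb, temperature=0.2):
    # InfoNCE-style loss with in-batch negatives: user i's positive item is
    # assumed to be item i of the batch; all other items act as negatives.
    u = F.normalize(user_emb, dim=-1)
    v = F.normalize(item_emb, dim=-1)
    logits = (u @ v.t()) / temperature
    labels = torch.arange(u.size(0), device=u.device)
    return F.cross_entropy(logits, labels)

# Toy usage: 8 user-item pairs, 300-dim raw multimodal item features.
raw_items = torch.randn(8, 300)
ae = ItemAutoencoder(in_dim=300)
item_z, recon = ae(raw_items)
recon_loss = F.mse_loss(recon, raw_items)          # autoencoder objective
user_z = torch.randn(8, 64, requires_grad=True)    # stand-in for GCN output
loss = recon_loss + contrastive_loss(user_z, item_z)
loss.backward()

In the full method described in the abstract, the stand-in user embeddings would instead come from self-supervised GCN propagation over the user-item interaction graph, and the reconstruction and contrastive objectives would be weighted and trained jointly.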


Bibliographic Details
Main Author: Truong, Vinh Khai
Other Authors: Luo Siqiang
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects: Computer and Information Science; Self-supervised GCN; Recommender system; Contrastive learning; Autoencoder; Multi-modal
Online Access:https://hdl.handle.net/10356/180714
Institution: Nanyang Technological University
id sg-ntu-dr.10356-180714
record_format dspace
spelling sg-ntu-dr.10356-180714 2024-10-21T23:35:49Z
Title: Improving collaborative filtering with self-supervised GCNs and autoencoder-based multimodal embeddings
Authors: Truong, Vinh Khai; Luo Siqiang (siqiang.luo@ntu.edu.sg)
School: College of Computing and Data Science
Subjects: Computer and Information Science; Self-supervised GCN; Recommender system; Contrastive learning; Autoencoder; Multi-modal
Abstract: as given above
Degree: Bachelor's degree
Type: Final Year Project (FYP)
Dates: 2024-10-21T23:35:49Z; 2024
Citation: Truong, V. K. (2024). Improving collaborative filtering with self-supervised GCNs and autoencoder-based multimodal embeddings. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/180714
URI: https://hdl.handle.net/10356/180714
Language: en
File format: application/pdf
Publisher: Nanyang Technological University
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Computer and Information Science
Self-supervised GCN
Recommender system
Contrastive learning
Autoencoder
Multi-modal
author2 Luo Siqiang
author_facet Luo Siqiang
Truong, Vinh Khai
format Final Year Project
author Truong, Vinh Khai
author_sort Truong, Vinh Khai
title Improving collaborative filtering with self-supervised GCNs and autoencoder-based multimodal embeddings
publisher Nanyang Technological University
publishDate 2024
url https://hdl.handle.net/10356/180714
_version_ 1814777790191370240