Neural architectures for faster deep learning-based collaborative filtering
Saved in:
Main Author: | |
Other Authors: | |
Format: | Thesis-Master by Research |
Language: | English |
Published: | Nanyang Technological University, 2021 |
Subjects: | |
Online Access: | https://hdl.handle.net/10356/152759 |
Institution: | Nanyang Technological University |
Summary: | In a recommendation system, where an algorithm recommends items to users, data is collected as users interact with the website or mobile application. The collected data may include user information such as demographics, item information, interaction data recording which items each user interacted with, and context information about when and how a user interacted with a given item. Collaborative filtering is a common approach to recommendation in which only the past interaction data is considered, without additional information about the users, items, or interaction context. Deep learning has been successfully applied in fields such as computer vision and natural language processing; it has also been used for collaborative filtering and has shown promising results. However, some deep learning-based collaborative filtering methods require substantial computational resources to produce recommendations, which increases both recommendation latency and energy consumption. In this thesis, a new deep learning architecture is developed for potentially faster recommendation via collaborative filtering without sacrificing recommendation quality. Extensive experiments are performed to evaluate the speed and recommendation quality of the new neural architecture and training method. |
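As background for the abstract above, the sketch below shows a generic deep learning-based collaborative filtering model: user and item IDs are embedded and a small MLP scores each (user, item) pair from past interactions alone, with no user, item, or context features. It is illustrative only and does not reproduce the architecture proposed in the thesis; the class name `NeuralCF`, the layer sizes, and the hyperparameters are assumptions for the example.

```python
# Minimal sketch of a generic neural collaborative-filtering model
# (not the thesis's proposed architecture). User and item IDs are
# embedded, concatenated, and scored by an MLP using only past
# interaction data. Sizes and hyperparameters are illustrative.
import torch
import torch.nn as nn


class NeuralCF(nn.Module):
    def __init__(self, num_users, num_items, emb_dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, emb_dim)
        self.item_emb = nn.Embedding(num_items, emb_dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, user_ids, item_ids):
        # Concatenate the two embeddings and predict an interaction score.
        x = torch.cat([self.user_emb(user_ids), self.item_emb(item_ids)], dim=-1)
        return self.mlp(x).squeeze(-1)


# Toy usage: score a few (user, item) pairs with implicit-feedback labels.
model = NeuralCF(num_users=1000, num_items=5000)
users = torch.tensor([0, 1, 2])
items = torch.tensor([10, 20, 30])
labels = torch.tensor([1.0, 0.0, 1.0])  # 1 = interacted, 0 = did not
loss = nn.BCEWithLogitsLoss()(model(users, items), labels)
loss.backward()
print(float(loss))
```

In practice, models of this kind are usually trained with sampled negative (user, item) pairs and evaluated with ranking metrics such as hit rate or NDCG; the thesis's specific training method and evaluation protocol are described in the full text at the online-access link above.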