Sequential recommendation: From representation learning to reasoning

The recommender system is a crucial component of today's online services. It helps users navigate through an overwhelmingly large number of items and discover those that interest them. Unlike general recommender systems, which recommend items based on the user's overall preferences, seq...


Bibliographic Details
Main Author: WANG, Lei
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects:
Online Access:https://ink.library.smu.edu.sg/etd_coll/593
https://ink.library.smu.edu.sg/context/etd_coll/article/1591/viewcontent/Thesis_Lei_SMU_Draft__3_.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.etd_coll-1591
record_format dspace
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Sequential Recommendation
Large Language Model
Contrastive Learning
Explanation
Computer Sciences
spellingShingle Sequential Recommendation
Large Language Model
Contrastive Learning
Explanation
Computer Sciences
WANG, Lei
Sequential recommendation: From representation learning to reasoning
description The recommender system is a crucial component of today's online services. It helps users navigate through an overwhelmingly large number of items and discover those that interest them. Unlike general recommender systems, which recommend items based on the user's overall preferences, sequential recommender systems consider the order of user-item interactions. Sequential recommendation aims to predict the next item a user will interact with, given a sequence of previously interacted items, while considering the short-term and long-term dependencies among items. In this thesis, we focus on sequential recommendation methods: from representation learning to large language model (LLM)-based reasoning. On the one hand, representation learning-based sequential recommendation methods usually feed ID embeddings of interacted items into models, such as deep neural networks, to generate user representation vectors. They then rank candidate items to create a recommendation list based on the similarity between user representation vectors and candidate item vectors. On the other hand, the LLM-based reasoning approach mainly depends on the LLM's strong reasoning ability and rich world knowledge. LLM-based reasoners require carefully designed prompts and/or demonstration examples, considering the task complexity and the prompt length constraint. This thesis consists of three parts. In the first part, we aim to improve representation learning for sequential recommendation and present our efforts in building an explanation-guided contrastive learning sequential recommendation model. In particular, we first present the data sparsity issue in sequential recommendation and the false positive problem in contrastive learning. Next, we demonstrate how to utilize explanation methods for explanation-guided augmentation to enhance positive and negative views for contrastive learning-based sequential recommendation, thereby improving the learned representations.
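The embed-then-rank pipeline described above can be sketched as follows. This is a minimal illustration, not the thesis's model: mean-pooling of item ID embeddings and dot-product scoring are assumptions standing in for the learned deep-network encoder.

```python
import numpy as np

def recommend(item_embeddings, history, candidates, k=3):
    """Rank candidate items by similarity to a user representation.

    Here the user vector is simply the mean of the interacted items'
    ID embeddings; real sequential models replace this pooling with a
    learned encoder (RNN, Transformer, etc.).
    """
    user_vec = item_embeddings[history].mean(axis=0)
    # dot-product similarity between user vector and each candidate
    scores = item_embeddings[candidates] @ user_vec
    order = np.argsort(-scores)  # indices of candidates, best first
    return [candidates[i] for i in order[:k]]

# toy embedding table: 4 items in a 2-d space
E = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [0.9, 0.1]])
# user interacted with items 0 and 1; rank candidates 2 and 3
top = recommend(E, [0, 1], [2, 3], k=1)
```

With this toy table the user vector is [1, 0], so candidate 3 (embedding [0.9, 0.1]) outranks candidate 2.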
Most sequential recommendation methods primarily focus on improving the quality of user representations. However, representation learning-based methods still suffer from several issues: 1) data sparsity; 2) difficulty adapting to unseen tasks; 3) lack of world knowledge; and 4) lack of human-style reasoning for generating explanations. To address these issues, the second part of this thesis investigates how we can build sequential recommendation models based on large language models. In particular, we introduce two new research directions for LLM-based sequential recommendation: 1) zero-shot LLM-based reasoning over recommended items and 2) few-shot LLM-based reasoning over recommended items. For zero-shot LLM-based reasoning, we use an external module to generate candidate items, reducing the recommendation space, and a 3-step prompting method to capture user preferences and produce ranked recommendations. For few-shot LLM-based reasoning, we study what makes in-context learning work for sequential recommendation and propose incorporating multiple demonstrations into one aggregated demonstration to avoid the long-input problem and improve recommendation accuracy. Both directions offer new and exciting research possibilities for using LLMs in recommender systems. LLMs are generally capable of human-style reasoning, which could be used to generate explanations for a large set of tasks. Therefore, the final part of the thesis addresses the explanation generation task and the evaluation of explanations for sequential recommendation results using LLMs. Specifically, we introduce a framework for LLM-based explanation to support automatic evaluation of an LLM's ability to generate plausible post-hoc explanations from the content filtering and collaborative filtering perspectives. Using our created benchmark data, the experimental results show that ChatGPT with appropriate prompting can be a promising explainer for recommendation tasks.
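A zero-shot 3-step prompting scheme of the kind described above might be assembled as in the sketch below. The prompt wording, the `{preferences}` and `{representatives}` placeholders, and the function name are all hypothetical illustrations, not the thesis's actual prompts; in practice each placeholder would be filled with the LLM's answer to the preceding step.

```python
def build_prompts(history, candidates):
    """Build the three prompts of a hypothetical zero-shot scheme:
    1) summarize the user's preferences from the interaction history,
    2) select representative items that reflect those preferences,
    3) rank the externally pre-filtered candidate items.
    """
    hist = ", ".join(history)
    cand = ", ".join(candidates)
    step1 = (f"The user has interacted with: {hist}. "
             "Summarize the user's preferences.")
    # "{preferences}" is left literal: it is filled with the step-1 answer
    step2 = ("Based on these preferences: {preferences}, select the items "
             f"from the history ({hist}) that best represent them.")
    # "{representatives}" is filled with the step-2 answer
    step3 = (f"Given the candidate set [{cand}] and the representative items "
             "{representatives}, rank the candidates by how likely the user "
             "is to interact with them next.")
    return [step1, step2, step3]

prompts = build_prompts(["The Matrix", "Inception"],
                        ["Tenet", "Interstellar"])
```

Chaining the steps keeps each individual prompt short, which matters under the prompt length constraint noted earlier.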
format text
author WANG, Lei
author_facet WANG, Lei
author_sort WANG, Lei
title Sequential recommendation: From representation learning to reasoning
title_short Sequential recommendation: From representation learning to reasoning
title_full Sequential recommendation: From representation learning to reasoning
title_fullStr Sequential recommendation: From representation learning to reasoning
title_full_unstemmed Sequential recommendation: From representation learning to reasoning
title_sort sequential recommendation: from representation learning to reasoning
publisher Institutional Knowledge at Singapore Management University
publishDate 2024
url https://ink.library.smu.edu.sg/etd_coll/593
https://ink.library.smu.edu.sg/context/etd_coll/article/1591/viewcontent/Thesis_Lei_SMU_Draft__3_.pdf
_version_ 1814047621148311552
spelling sg-smu-ink.etd_coll-15912024-06-19T03:31:23Z Sequential recommendation: From representation learning to reasoning WANG, Lei 2024-04-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/etd_coll/593 https://ink.library.smu.edu.sg/context/etd_coll/article/1591/viewcontent/Thesis_Lei_SMU_Draft__3_.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Dissertations and Theses Collection (Open Access) eng Institutional Knowledge at Singapore Management University Sequential Recommendation Large Language Model Contrastive Learning Explanation Computer Sciences