The whole is better than the sum : Using aggregated demonstrations in in-context learning for sequential recommendation

Large language models (LLMs) have shown excellent performance on various NLP tasks. To use LLMs as strong sequential recommenders, we explore the in-context learning (ICL) approach to sequential recommendation. We investigate the effects of instruction format, task consistency, demonstration selection, and number of demonstrations. Since increasing the number of demonstrations in ICL does not improve accuracy despite using a long prompt, we propose a novel method called LLMSRec-Syn that incorporates multiple demonstration users into one aggregated demonstration. Our experiments on three recommendation datasets show that LLMSRec-Syn outperforms state-of-the-art LLM-based sequential recommendation methods. In some cases, LLMSRec-Syn can perform on par with or even better than supervised learning methods. Our code is publicly available at https://github.com/demoleiwang/LLMSRec_Syn.
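
To make the aggregated-demonstration idea concrete, here is a minimal Python sketch. It is not the authors' implementation (see the linked GitHub repository for that): the function names, record fields, and prompt wording below are hypothetical, and the sketch only assumes that each demonstration user contributes an interaction history, a candidate list, and a ground-truth next item.

# Minimal sketch (hypothetical names, not the authors' API) of the
# aggregated-demonstration idea: merge several demonstration users into
# ONE in-context example instead of concatenating one example per user.
from typing import Dict, List

def build_aggregated_demo(demo_users: List[Dict]) -> str:
    """Fuse the histories, candidates, and answers of several similar
    demonstration users into a single aggregated demonstration."""
    history = [item for u in demo_users for item in u["history"]]
    candidates = list(dict.fromkeys(c for u in demo_users for c in u["candidates"]))
    answers = [u["target"] for u in demo_users]
    return ("User's interacted items: " + ", ".join(history) + "\n"
            "Candidate items: " + ", ".join(candidates) + "\n"
            "Items the user interacted with next: " + ", ".join(answers) + "\n")

def build_prompt(demo_users: List[Dict], test_user: Dict) -> str:
    """One aggregated demonstration followed by the test user's query."""
    return ("Rank the candidate items by how likely the user is to interact with them next.\n\n"
            "Example:\n" + build_aggregated_demo(demo_users) + "\n"
            "User's interacted items: " + ", ".join(test_user["history"]) + "\n"
            "Candidate items: " + ", ".join(test_user["candidates"]) + "\n"
            "Ranking:")

In the paper's setup, demonstration users would be selected for similarity to the test user and the resulting prompt sent to an LLM. The design choice illustrated here is that adding more users grows one demonstration rather than appending more demonstrations, which keeps the prompt short.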

Bibliographic Details
Main Authors: WANG, Lei; LIM, Ee-Peng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
DOI: 10.18653/v1/2024.findings-naacl.56
Subjects: Large language models; LLMs; Sequential recommendation; Artificial Intelligence and Robotics; Computer Sciences
Collection: Research Collection School Of Computing and Information Systems (InK@SMU)
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Online Access: https://ink.library.smu.edu.sg/sis_research/9786
https://ink.library.smu.edu.sg/context/sis_research/article/10786/viewcontent/2024.findings_naacl.56.pdf
Institution: Singapore Management University