The whole is better than the sum: Using aggregated demonstrations in in-context learning for sequential recommendation
Large language models (LLMs) have shown excellent performance on various NLP tasks. To use LLMs as strong sequential recommenders, we explore the in-context learning approach to sequential recommendation. We investigate the effects of instruction format, task consistency, demonstration selection, an...
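The abstract describes prompting an LLM with demonstrations for sequential recommendation. As a rough illustration of the general idea (not the paper's exact prompt format), the following sketch aggregates several (history, next-item) demonstrations into a single in-context block before appending the query user's history; all names and the template are illustrative assumptions.

```python
# Hypothetical sketch of an in-context learning prompt for sequential
# recommendation. The template, field names, and examples are assumptions
# for illustration, not the format used in the paper.

def build_prompt(demos, user_history, candidates):
    """Aggregate several (history -> next item) demonstrations into one
    block, then append the query user's history and candidate items."""
    lines = ["Task: given a user's interaction history, pick the next item."]
    for history, next_item in demos:
        lines.append(f"History: {', '.join(history)} -> Next: {next_item}")
    lines.append(f"History: {', '.join(user_history)}")
    lines.append(f"Candidates: {', '.join(candidates)}")
    lines.append("Next:")
    return "\n".join(lines)

# Toy usage with made-up items; the assembled string would be sent to an LLM.
demos = [
    (["Toy Story", "Up"], "Coco"),
    (["Alien", "Predator"], "The Thing"),
]
print(build_prompt(demos, ["Heat", "Ronin"], ["Casino", "Sneakers"]))
```

The design point illustrated is that multiple demonstrations are merged into one aggregated block rather than issued as separate prompts.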
Main Authors: WANG, Lei; LIM, Ee-Peng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Online Access: https://ink.library.smu.edu.sg/sis_research/9786
https://ink.library.smu.edu.sg/context/sis_research/article/10786/viewcontent/2024.findings_naacl.56.pdf
Institution: Singapore Management University
Similar Items
- Sequential recommendation: From representation learning to reasoning
  by: WANG, Lei
  Published: (2024)
- Experience as source for anticipation and planning: Experiential policy learning for target-driven recommendation dialogues
  by: DAO, Quang Huy, et al.
  Published: (2024)
- Temporal attention graph-optimized networks for sequential recommendation
  by: Pathak, Siddhant
  Published: (2024)
- Explanation guided contrastive learning for sequential recommendation
  by: WANG, Lei, et al.
  Published: (2022)
- Memory bank augmented long-tail sequential recommendation
  by: Hu, Yidan, et al.
  Published: (2023)