Plug-and-play policy planner for large language model powered dialogue agents

Proactive dialogues serve as a practical yet challenging dialogue problem in the era of large language models (LLMs), where dialogue policy planning is key to improving the proactivity of LLMs. Most existing studies enable the dialogue policy planning of LLMs through various prompting schemes, or iteratively enhance this capability on a given case with verbal AI feedback. However, these approaches are either bounded by the policy-planning capability of the frozen LLMs or hard to transfer to new cases. In this work, we introduce a new dialogue policy planning paradigm to strategize LLMs for proactive dialogue problems, with a tunable language model plug-in serving as a plug-and-play dialogue policy planner, named PPDPP. Specifically, we develop a novel training framework that combines supervised fine-tuning on available human-annotated data with reinforcement learning from goal-oriented AI feedback, using dynamic interaction data collected via LLM-based self-play simulation. In this manner, the LLM-powered dialogue agent not only generalizes to different cases after training, but also applies to different applications simply by substituting the learned plug-in. In addition, we propose to evaluate the policy planning capability of dialogue systems in an interactive setting. Experimental results demonstrate that PPDPP consistently and substantially outperforms existing approaches on three different proactive dialogue applications: negotiation, emotional support, and tutoring dialogues.
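The abstract describes a per-turn loop in which a small tunable planner selects a dialogue strategy and a frozen LLM conditions its reply on that strategy. A minimal sketch of that loop is shown below; all names (`run_turn`, `toy_planner`, `STRATEGIES`) are illustrative placeholders and not the paper's actual code or action set.

```python
# Hypothetical sketch of a plug-and-play policy-planner loop: a small tunable
# planner maps dialogue history to a strategy, and a (frozen) LLM realizes that
# strategy as the next utterance. The planner is the only component that would
# be trained (SFT, then RL from goal-oriented AI feedback in the paper).

from dataclasses import dataclass, field
from typing import Callable, List

# Illustrative action space, e.g. for emotional-support dialogues.
STRATEGIES = ["question", "reflection", "suggestion", "reassurance"]


@dataclass
class Dialogue:
    history: List[str] = field(default_factory=list)


def run_turn(
    dialogue: Dialogue,
    planner: Callable[[List[str]], str],
    llm: Callable[[str, List[str]], str],
) -> tuple:
    """One turn: planner picks a strategy; the LLM conditions its reply on it."""
    strategy = planner(dialogue.history)      # tunable plug-in decides the action
    reply = llm(strategy, dialogue.history)   # frozen LLM realizes the strategy
    dialogue.history.append(reply)
    return strategy, reply


# Stand-ins for the learned planner and the LLM call.
def toy_planner(history: List[str]) -> str:
    return STRATEGIES[len(history) % len(STRATEGIES)]


def toy_llm(strategy: str, history: List[str]) -> str:
    return f"[{strategy}] response at turn {len(history) + 1}"


d = Dialogue()
s, r = run_turn(d, toy_planner, toy_llm)
```

Because only the planner is swapped or retrained, moving to a new application (negotiation, tutoring) amounts to substituting the plug-in and its action space while the backbone LLM stays fixed.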

Bibliographic Details
Main Authors: DENG, Yang; ZHANG, Wenxuan; LAM, Wai; NG, See-Kiong; CHUA, Tat-Seng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Subjects: Databases and Information Systems; Programming Languages and Compilers
Online Access: https://ink.library.smu.edu.sg/sis_research/9115
https://ink.library.smu.edu.sg/context/sis_research/article/10118/viewcontent/2311.00262v2.pdf
Institution: Singapore Management University
Record ID: sg-smu-ink.sis_research-10118
Published online: 2024-05-01
Collection: Research Collection School Of Computing and Information Systems, InK@SMU (SMU Libraries)
License: http://creativecommons.org/licenses/by-nc-nd/4.0/ (CC BY-NC-ND 4.0)
Subjects: Databases and Information Systems; Programming Languages and Compilers