Planning like human: A dual-process framework for dialogue planning

In proactive dialogue, the challenge lies not just in generating responses but in steering conversations toward predetermined goals, a task where Large Language Models (LLMs) typically struggle due to their reactive nature. Traditional approaches to enhance dialogue planning in LLMs, ranging from elaborate prompt engineering to the integration of policy networks, either face efficiency issues or deliver suboptimal performance. Inspired by the dual-process theory in psychology, which identifies two distinct modes of thinking, intuitive (fast) and analytical (slow), we propose the Dual-Process Dialogue Planning (DPDP) framework. DPDP embodies this theory through two complementary planning systems: an instinctive policy model for familiar contexts and a deliberative Monte Carlo Tree Search (MCTS) mechanism for complex, novel scenarios. This dual strategy is further coupled with a novel two-stage training regimen: offline Reinforcement Learning for robust initial policy model formation, followed by MCTS-enhanced on-the-fly learning, which ensures a dynamic balance between efficiency and strategic depth. Our empirical evaluations across diverse dialogue tasks affirm DPDP’s superiority in achieving both high-quality dialogues and operational efficiency, outpacing existing methods.
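The abstract describes the fast/slow split only at a high level. As a rough illustration of that idea, the Python sketch below gates a cheap policy behind a confidence threshold and falls back to a small Monte Carlo Tree Search over future dialogue acts when the policy is unsure. All names and values here (DualProcessPlanner, reward_fn, the 0.7 threshold, the depth-3 lookahead) are illustrative assumptions, not the paper's released code, and the offline-RL and MCTS-guided training stages mentioned in the abstract are not modeled.

```python
import math
import random
from dataclasses import dataclass, field


@dataclass
class Node:
    """MCTS node; `plan` is the tuple of future dialogue acts chosen so far."""
    plan: tuple = ()
    parent: "Node | None" = None
    children: dict = field(default_factory=dict)  # action -> child Node
    visits: int = 0
    value: float = 0.0


class DualProcessPlanner:
    """Fast/slow dialogue planner: a cheap policy answers familiar turns,
    and a small MCTS is consulted only when the policy is unsure."""

    def __init__(self, actions, policy, reward_fn, threshold=0.7, simulations=100):
        self.actions = actions          # candidate dialogue acts / strategies
        self.policy = policy            # dialogue_state -> {action: probability}
        self.reward_fn = reward_fn      # (dialogue_state, plan) -> scalar reward
        self.threshold = threshold      # confidence gate between fast and slow paths
        self.simulations = simulations  # MCTS budget per deliberate decision

    def plan(self, dialogue_state):
        probs = self.policy(dialogue_state)
        action, confidence = max(probs.items(), key=lambda kv: kv[1])
        if confidence >= self.threshold:
            return action                     # "System 1": trust the intuitive policy
        return self._mcts(dialogue_state)     # "System 2": deliberate tree search

    def _mcts(self, dialogue_state, depth=3, c=1.4):
        root = Node()
        for _ in range(self.simulations):
            node = root
            # Selection: descend through fully expanded nodes via UCB1.
            while node.children and len(node.children) == len(self.actions):
                parent_visits = node.visits
                node = max(
                    node.children.values(),
                    key=lambda n: n.value / (n.visits + 1e-9)
                    + c * math.sqrt(math.log(parent_visits + 1) / (n.visits + 1e-9)),
                )
            # Expansion: add one untried action while within the lookahead depth.
            if len(node.plan) < depth:
                untried = [a for a in self.actions if a not in node.children]
                action = random.choice(untried)
                child = Node(plan=node.plan + (action,), parent=node)
                node.children[action] = child
                node = child
            # Rollout: pad the plan with random actions, then score it.
            rollout = node.plan
            while len(rollout) < depth:
                rollout = rollout + (random.choice(self.actions),)
            reward = self.reward_fn(dialogue_state, rollout)
            # Backpropagation: update visit counts and values up to the root.
            while node is not None:
                node.visits += 1
                node.value += reward
                node = node.parent
        # Commit to the most visited first action at the root.
        return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

In the paper's framing, such a confidence gate decides when the intuitive policy suffices and when deliberation is worth its extra cost; the full DPDP regimen additionally trains the policy offline with reinforcement learning and refines it on the fly from MCTS decisions, which this sketch omits.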

Bibliographic Details
Main Authors: HE, Tao, LIAO, Lizi, CAO, Yixin, LIU, Yuanxing, LIU, Ming, CHEN, Zerui, QIN, Bing
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects: Large Language Models; LLMs; Dual-Process Dialogue Planning framework; Natural language processing; Artificial Intelligence and Robotics; Computer Sciences
Online Access:https://ink.library.smu.edu.sg/sis_research/9696
https://ink.library.smu.edu.sg/context/sis_research/article/10696/viewcontent/2024.acl_long.262.pdf
Institution: Singapore Management University
DOI: 10.18653/v1/2024.acl-long.262
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems