Thoughts to target : enhance planning for target-driven conversation

In conversational AI, large-scale models excel at various tasks but struggle with target-driven conversation planning. Current methods, such as chain-of-thought reasoning and tree-search policy learning, either neglect plan rationality or require extensive human simulation. To address this, we propose a novel two-stage framework, EnPL, that improves the ability of large language models (LLMs) to plan conversations towards designated targets by (1) distilling natural language plans from a target-driven conversation corpus and (2) generating new plans with demonstration-guided in-context learning. Specifically, we first propose a filtering approach to distill a high-quality plan dataset, ConvPlan. We validate the quality and rationality of these plans with the aid of the corresponding conversational data and support from relevant knowledge bases. These plans are then leveraged to guide LLMs in planning for new targets. Empirical results demonstrate that our method significantly improves the planning ability of LLMs, especially in target-driven conversations. Furthermore, EnPL proves effective for collecting target-driven conversation datasets and enhancing response generation, paving the way for building extensive target-driven conversational models.
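As a rough illustration of the second stage described in the abstract (demonstration-guided in-context learning), the Python sketch below assembles a few-shot prompt from distilled (target, plan) pairs and a new target. The helper names, the word-overlap retriever, and the example plans are assumptions for illustration only; they are not the authors' released code or the actual ConvPlan data.

```python
# Minimal sketch: demonstration-guided in-context learning for plan generation.
# Everything here (function names, toy retriever, example plans) is an
# illustrative assumption, not the paper's implementation.

from typing import List


def retrieve_demonstrations(target: str, plan_bank: List[dict], k: int = 3) -> List[dict]:
    """Pick the k distilled (target, plan) pairs most similar to the new target.
    Naive word-overlap scoring stands in for a real retriever."""
    def overlap(a: str, b: str) -> int:
        return len(set(a.lower().split()) & set(b.lower().split()))
    return sorted(plan_bank, key=lambda d: overlap(d["target"], target), reverse=True)[:k]


def build_prompt(target: str, demos: List[dict]) -> str:
    """Assemble a few-shot prompt in which distilled plans guide the LLM toward the new target."""
    parts = ["Plan a conversation that leads the user to the given target.\n"]
    for d in demos:
        parts.append(f"Target: {d['target']}\nPlan: {d['plan']}\n")
    parts.append(f"Target: {target}\nPlan:")
    return "\n".join(parts)


if __name__ == "__main__":
    plan_bank = [  # stand-in for plans distilled from a target-driven conversation corpus
        {"target": "recommend the movie Inception",
         "plan": "1) Ask about favourite genres. 2) Mention sci-fi thrillers. 3) Introduce Inception."},
        {"target": "suggest visiting the city museum",
         "plan": "1) Chat about weekend plans. 2) Bring up local attractions. 3) Recommend the museum."},
    ]
    new_target = "recommend the song Bohemian Rhapsody"
    prompt = build_prompt(new_target, retrieve_demonstrations(new_target, plan_bank))
    print(prompt)  # send this prompt to an LLM of choice to obtain a new plan
```

In practice, the prompt would be sent to an LLM and the returned plan would then be filtered and validated against conversational data and knowledge bases, as the abstract describes for the distillation stage.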

Bibliographic Details
Main Authors: ZHENG, Zhonghua; LIAO, Lizi; DENG, Yang; LIM, Ee-peng; HUANG, Minlie; NIE, Liqiang
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Subjects: Conversational AI; Conversation planning; Large Language Models; LLMs; Artificial Intelligence and Robotics; Computer Sciences
Online Access: https://ink.library.smu.edu.sg/sis_research/9564
Institution: Singapore Management University
Content Provider: SMU Libraries
Collection: InK@SMU (Research Collection School Of Computing and Information Systems)
Publish Date: 2024-11-09