Prompting and evaluating large language models for proactive dialogues: Clarification, target-guided, and non-collaboration

Conversational systems based on Large Language Models (LLMs), such as ChatGPT, show exceptional proficiency in context understanding and response generation. However, they still possess limitations, such as failing to ask clarifying questions for ambiguous queries or to refuse users' unreasonable requests, both of which are considered key aspects of a conversational agent's proactivity. This raises the question of whether LLM-based conversational systems are equipped to handle proactive dialogue problems. In this work, we conduct a comprehensive analysis of LLM-based conversational systems, specifically focusing on three key aspects of proactive dialogues: clarification, target-guided, and non-collaborative dialogues. To trigger the proactivity of LLMs, we propose the Proactive Chain-of-Thought prompting scheme, which augments LLMs with the goal planning capability over descriptive reasoning chains. Empirical findings are discussed to promote future studies on LLM-based proactive dialogue systems.
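As a rough illustration of the plan-then-respond idea behind the Proactive Chain-of-Thought prompting scheme described in the abstract, the short Python sketch below builds a prompt that asks the model to first reason about the conversation and choose a dialogue act before generating its reply. The prompt wording, the helper names (build_procot_prompt, procot_turn), and the call_llm stand-in are illustrative assumptions for this record, not the authors' published prompt.

# Minimal sketch of a ProCoT-style prompt builder; wording and names are assumptions.
from typing import Callable, List

def build_procot_prompt(task_background: str,
                        conversation: List[str],
                        dialogue_acts: List[str]) -> str:
    """Compose a prompt that asks the model to plan (analyse the goal and pick a
    dialogue act) before writing the next response."""
    history = "\n".join(conversation)
    acts = ", ".join(dialogue_acts)
    return (
        f"{task_background}\n\n"
        f"Conversation so far:\n{history}\n\n"
        "First, analyse the conversation and decide which action to take next "
        f"(choose one of: {acts}), explaining your reasoning step by step.\n"
        "Then, write the system's next response that carries out the chosen action.\n"
        "Format:\nThought: <reasoning>\nAction: <chosen act>\nResponse: <reply>"
    )

def procot_turn(call_llm: Callable[[str], str],
                task_background: str,
                conversation: List[str],
                dialogue_acts: List[str]) -> str:
    """Run one proactive turn: plan over the act space, then respond."""
    prompt = build_procot_prompt(task_background, conversation, dialogue_acts)
    return call_llm(prompt)

if __name__ == "__main__":
    # Toy clarification-dialogue example; fake_llm is a stand-in for a real completion API.
    fake_llm = lambda p: f"(model output for a {len(p)}-char prompt)"
    print(procot_turn(
        fake_llm,
        task_background="You are an assistant answering questions about a codebase.",
        conversation=["User: It doesn't work."],
        dialogue_acts=["ask a clarifying question", "answer directly"],
    ))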


Bibliographic Details
Main Authors: DENG, Yang, LIAO, Lizi, CHEN, Liang, WANG, Hongru, LEI, Wenqiang, CHUA, Tat-Seng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2023
Subjects: Comprehensive analysis; Conversational agents; Conversational systems; Empirical findings; In contexts; Language model; Model-based OPC; Planning capability; Proactivity; Response generation; Databases and Information Systems; Information Security
Online Access:https://ink.library.smu.edu.sg/sis_research/9116
https://ink.library.smu.edu.sg/context/sis_research/article/10119/viewcontent/Prompting.pdf
Institution: Singapore Management University
id sg-smu-ink.sis_research-10119
record_format dspace
spelling sg-smu-ink.sis_research-10119 2024-08-01T14:39:58Z (index field repeating the full record; unique details: published 2023-12-01T08:00:00Z, application/pdf, DOI info:doi/10.18653/v1/2023.findings-emnlp.711, license http://creativecommons.org/licenses/by-nc-nd/4.0/, Research Collection School Of Computing and Information Systems)
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Comprehensive analysis
Conversational agents
Conversational systems
Empirical findings
In contexts
Language model
Model-based OPC
Planning capability
Proactivity
Response generation
Databases and Information Systems
Information Security
description Conversational systems based on Large Language Models (LLMs), such as ChatGPT, show exceptional proficiency in context understanding and response generation. However, they still possess limitations, such as failing to ask clarifying questions for ambiguous queries or to refuse users' unreasonable requests, both of which are considered key aspects of a conversational agent's proactivity. This raises the question of whether LLM-based conversational systems are equipped to handle proactive dialogue problems. In this work, we conduct a comprehensive analysis of LLM-based conversational systems, specifically focusing on three key aspects of proactive dialogues: clarification, target-guided, and non-collaborative dialogues. To trigger the proactivity of LLMs, we propose the Proactive Chain-of-Thought prompting scheme, which augments LLMs with the goal planning capability over descriptive reasoning chains. Empirical findings are discussed to promote future studies on LLM-based proactive dialogue systems.
format text
author DENG, Yang
LIAO, Lizi
CHEN, Liang
WANG, Hongru
LEI, Wenqiang
CHUA, Tat-Seng
author_sort DENG, Yang
title Prompting and evaluating large language models for proactive dialogues: Clarification, target-guided, and non-collaboration
publisher Institutional Knowledge at Singapore Management University
publishDate 2023
url https://ink.library.smu.edu.sg/sis_research/9116
https://ink.library.smu.edu.sg/context/sis_research/article/10119/viewcontent/Prompting.pdf
_version_ 1814047746431123456