CLAMBER: A benchmark of identifying and clarifying ambiguous information needs in large language models

Large language models (LLMs) are increasingly used to meet user information needs, but their effectiveness in dealing with user queries that contain various types of ambiguity remains unknown, ultimately risking user trust and satisfaction. To this end, we introduce CLAMBER, a benchmark for evaluating LLMs using a well-organized taxonomy. Building upon the taxonomy, we construct 12K high-quality data samples to assess the strengths, weaknesses, and potential risks of various off-the-shelf LLMs. Our findings indicate the limited practical utility of current LLMs in identifying and clarifying ambiguous user queries, even when enhanced by chain-of-thought (CoT) and few-shot prompting. These techniques may result in overconfidence in LLMs and yield only marginal enhancements in identifying ambiguity. Furthermore, current LLMs fall short in generating high-quality clarifying questions due to a lack of conflict resolution and inaccurate utilization of inherent knowledge. In this way, CLAMBER provides guidance and promotes further research on proactive and trustworthy LLMs.
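
To make the evaluation setting concrete, below is a minimal, hypothetical Python sketch of how one might probe an off-the-shelf LLM with few-shot prompting to flag an ambiguous query and pose a clarifying question. The exemplars, prompt wording, and the `llm` callable are illustrative assumptions only; they do not reproduce CLAMBER's taxonomy, data, or prompts.

```python
# Hypothetical sketch of few-shot ambiguity identification, not the paper's protocol.
from typing import Callable

# Illustrative exemplars: (user query, is_ambiguous, clarifying question)
FEW_SHOT = [
    ("Book a table for tonight.", True, "Which restaurant, and for how many people?"),
    ("What is the boiling point of water at sea level in Celsius?", False, ""),
]

def build_prompt(query: str) -> str:
    """Assemble a few-shot prompt asking the model to flag ambiguity and,
    if needed, pose one clarifying question."""
    lines = [
        "Decide whether each user query is ambiguous.",
        "If it is, ask one clarifying question; otherwise answer 'NOT AMBIGUOUS'.",
        "",
    ]
    for q, ambiguous, clarify in FEW_SHOT:
        lines.append(f"Query: {q}")
        lines.append(f"Response: {clarify if ambiguous else 'NOT AMBIGUOUS'}")
        lines.append("")
    lines.append(f"Query: {query}")
    lines.append("Response:")
    return "\n".join(lines)

def identify_ambiguity(query: str, llm: Callable[[str], str]) -> dict:
    """Run one query through a user-supplied LLM callable and interpret the reply."""
    reply = llm(build_prompt(query)).strip()
    is_ambiguous = reply.upper() != "NOT AMBIGUOUS"
    return {
        "query": query,
        "predicted_ambiguous": is_ambiguous,
        "clarifying_question": reply if is_ambiguous else None,
    }

if __name__ == "__main__":
    # Stub model for demonstration; replace with a real LLM client of your choice.
    def toy_llm(prompt: str) -> str:
        return "Which city do you mean?"
    print(identify_ambiguity("What's the weather like there?", toy_llm))
```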

Bibliographic Details
Main Authors: ZHANG, Tong, QIN, Peixin, DENG, Yang, HUANG, Chen, LEI, Wenqiang, LIU, Junhong, JIN, Dingnan, LIANG, Hongru, CHUA, Tat-Seng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects: Databases and Information Systems; Programming Languages and Compilers
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Online Access:https://ink.library.smu.edu.sg/sis_research/9238
https://ink.library.smu.edu.sg/context/sis_research/article/10238/viewcontent/2024.acl_long.578.pdf
Institution: Singapore Management University
Collection: InK@SMU, Research Collection School Of Computing and Information Systems