Just adjust one prompt: Enhancing in-context dialogue scoring via constructing the optimal subgraph of demonstrations and prompts
The use of modern Large Language Models (LLMs) as chatbots still suffers from problems such as hallucination and a lack of empathy. Identifying these issues can help improve chatbot performance. The community has been continually iterating on reference-free dialogue evaluation methods based on large lang...
Main Authors: PU, Jiashu; CHENG, Ling; FAN, Lu; LV, Tangjie; ZHANG, Rongsheng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Online Access: https://ink.library.smu.edu.sg/sis_research/8751
https://ink.library.smu.edu.sg/context/sis_research/article/9754/viewcontent/2023.emnlp_main.590_pvoa.pdf
Institution: Singapore Management University
Similar Items
- SELF-SUPERVISED MODELING FOR OPEN-DOMAIN DIALOGUE EVALUATION
  by: ZHANG CHEN
  Published: (2023)
- Balancing visual context understanding in dialogue for image retrieval
  by: WEI, Zhaohui, et al.
  Published: (2024)
- Recent advances in deep learning based dialogue systems: a systematic survey
  by: Ni, Jinjie, et al.
  Published: (2023)
- Design and implementation of performance-based assessment with metacognitive prompts in mathematics
  by: Cerrado, Pamela Mae Y.
  Published: (2022)
- Prompting and evaluating large language models for proactive dialogues: Clarification, target-guided, and non-collaboration
  by: DENG, Yang, et al.
  Published: (2023)