Hallucination detection: Robustly discerning reliable answers in Large Language Models
Large language models (LLMs) have gained widespread adoption in various natural language processing tasks, including question answering and dialogue systems. However, a major drawback of LLMs is the issue of hallucination, where they generate unfaithful or inconsistent content that deviates from the...
Main Authors: CHEN, Yuyuan; FU, Qiang; YUAN, Yichen; WEN, Zhihao; FAN, Ge; LIU, Dayiheng; ZHANG, Dongmei; LI, Zhixu; XIAO, Yanghua
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Online Access: https://ink.library.smu.edu.sg/sis_research/8464
https://ink.library.smu.edu.sg/context/sis_research/article/9467/viewcontent/3583780.3614905_pv.pdf
Institution: Singapore Management University
Similar Items
- Mitigating fine-grained hallucination by fine-tuning large vision-language models with caption rewrites
  by: WANG, Lei, et al.
  Published: (2024)
- More trustworthy generative AI through hallucination reduction
  by: He, Guoshun
  Published: (2024)
- Reducing LLM hallucinations: exploring the efficacy of temperature adjustment through empirical examination and analysis
  by: Tan, Max Zheyuan
  Published: (2024)
- Answers or no answers: studying question answerability in stack overflow
  by: Chua, Alton Yeow Kuan, et al.
  Published: (2020)
- Quality-aware collaborative Question Answering: Methods and evaluation
  by: SURYANTO, Maggy Anastasia, et al.
  Published: (2009)