Hallucination detection: Robustly discerning reliable answers in Large Language Models
Large language models (LLMs) have gained widespread adoption in various natural language processing tasks, including question answering and dialogue systems. However, a major drawback of LLMs is the issue of hallucination, where they generate unfaithful or inconsistent content that deviates from the...
Main Authors: CHEN, Yuyuan; FU, Qiang; YUAN, Yichen; WEN, Zhihao; FAN, Ge; LIU, Dayiheng; ZHANG, Dongmei; LI, Zhixu; XIAO, Yanghua
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Online Access: https://ink.library.smu.edu.sg/sis_research/8464 https://ink.library.smu.edu.sg/context/sis_research/article/9467/viewcontent/3583780.3614905_pv.pdf
Similar Items
- Mitigating fine-grained hallucination by fine-tuning large vision-language models with caption rewrites
  by: WANG, Lei, et al.
  Published: (2024)
- LLM hallucination study
  by: Potdar, Prateek Anish
  Published: (2025)
- Mitigating style-image hallucination in large vision language models
  by: He, Guoshun
  Published: (2025)
- More trustworthy generative AI through hallucination reduction
  by: He, Guoshun
  Published: (2024)
- Reducing LLM hallucinations: exploring the efficacy of temperature adjustment through empirical examination and analysis
  by: Tan, Max Zheyuan
  Published: (2024)