LLM hallucination study

Large Language Models (LLMs) exhibit impressive generative capabilities but often produce hallucinations: outputs that are factually incorrect, misleading, or entirely fabricated. These hallucinations pose significant challenges in high-stakes applications such as medical diagnosis, legal reasoning...


Bibliographic Details
Main Author: Potdar, Prateek Anish
Other Authors: Jun Zhao
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2025
Subjects: LLM; RAG
Online Access: https://hdl.handle.net/10356/183825