LLM hallucination study
Large Language Models (LLMs) exhibit impressive generative capabilities but often produce hallucinations—outputs that are factually incorrect, misleading, or entirely fabricated. These hallucinations pose significant challenges in high-stakes applications such as medical diagnosis, legal reasoning…
| Main Author | |
|---|---|
| Other Authors | |
| Format | Final Year Project |
| Language | English |
| Published | Nanyang Technological University, 2025 |
| Subjects | |
| Online Access | https://hdl.handle.net/10356/183825 |