LLM hallucination study

Large Language Models (LLMs) exhibit impressive generative capabilities but often produce hallucinations—outputs that are factually incorrect, misleading, or entirely fabricated. These hallucinations pose significant challenges in high-stakes applications such as medical diagnosis, legal reasoning...

Bibliographic Details
Main Author: Potdar, Prateek Anish
Other Authors: Jun Zhao
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2025
Subjects: LLM, RAG
Online Access: https://hdl.handle.net/10356/183825
Institution: Nanyang Technological University