Reducing LLM hallucinations: exploring the efficacy of temperature adjustment through empirical examination and analysis
Foundation models are gaining widespread prominence both in industrial applications and among individual users. In the tech landscape today, prompt engineers play a crucial role by crafting industry-standard prompts, empowering companies to enhance productivity, engage with customers, and automate v...
Main Author: Tan, Max Zheyuan
Other Authors: Jun Zhao
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/174789
Institution: Nanyang Technological University
Similar Items
- More trustworthy generative AI through hallucination reduction
  by: He, Guoshun
  Published: (2024)
- Joint face hallucination and deblurring via structure generation and detail enhancement
  by: SONG, Yibing, et al.
  Published: (2019)
- Mitigating fine-grained hallucination by fine-tuning large vision-language models with caption rewrites
  by: WANG, Lei, et al.
  Published: (2024)
- Delayed-onset hypnopompic visual hallucinations 20 years after initiation of propranolol therapy for systemic hypertension: a case report
  by: Au Eong, Denise T. M., et al.
  Published: (2024)
- Hallucination detection: Robustly discerning reliable answers in Large Language Models
  by: CHEN, Yuyuan, et al.
  Published: (2023)