Reducing LLM hallucinations: exploring the efficacy of temperature adjustment through empirical examination and analysis
Foundation models are gaining widespread prominence both in industrial applications and among individual users. In the tech landscape today, prompt engineers play a crucial role by crafting industry-standard prompts, empowering companies to enhance productivity, engage with customers, and automate various tasks. At the same time, individuals leverage publicly available foundation models such as OpenAI's ChatGPT for everyday activities like summarising text or composing emails. However, a significant challenge to the reliability and usability of these models arises in the form of hallucinations, wherein the model generates content that is fundamentally incorrect. To address this issue, a prevalent approach involves minimising hallucinations by adjusting the sampling randomness of the foundation model during execution, achieved through the manipulation of the temperature hyperparameter. This paper delves into the validity of this method and explores its associated caveats. The findings are subsequently distilled into comprehensible insights applicable to a diverse audience. Moreover, the paper outlines potential avenues for future research, offering a foundation for further exploration in this domain.
Saved in:
Main Author: | Tan, Max Zheyuan |
---|---|
Other Authors: | Jun Zhao |
Format: | Final Year Project |
Language: | English |
Published: | Nanyang Technological University, 2024 |
Subjects: | Computer and Information Science; Hallucination; Foundation model |
Online Access: | https://hdl.handle.net/10356/174789 |
Institution: | Nanyang Technological University |
Language: | English |
id |
sg-ntu-dr.10356-174789 |
---|---|
record_format |
dspace |
spelling |
Record: sg-ntu-dr.10356-174789 (last updated 2024-05-17T15:38:05Z)
Title: Reducing LLM hallucinations: exploring the efficacy of temperature adjustment through empirical examination and analysis
Author: Tan, Max Zheyuan
Supervisor: Jun Zhao, School of Computer Science and Engineering (junzhao@ntu.edu.sg)
Subjects: Computer and Information Science; Hallucination; Foundation model
Degree: Bachelor's degree
Dates: deposited 2024-04-11T04:39:43Z; issued 2024
Type: Final Year Project (FYP)
Citation: Tan, M. Z. (2024). Reducing LLM hallucinations: exploring the efficacy of temperature adjustment through empirical examination and analysis. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/174789
URI: https://hdl.handle.net/10356/174789
Language: en
Project code: SCSE23-0294
Format: application/pdf
Publisher: Nanyang Technological University |
institution |
Nanyang Technological University |
building |
NTU Library |
country |
Singapore |
collection |
DR-NTU |
language |
English |
topic |
Computer and Information Science; Hallucination; Foundation model |
description |
Foundation models are gaining widespread prominence both in industrial applications and among individual users. In the tech landscape today, prompt engineers play a crucial role by crafting industry-standard prompts, empowering companies to enhance productivity, engage with customers, and automate various tasks. At the same time, individuals leverage publicly available foundation models such as OpenAI's ChatGPT for everyday activities like summarising text or composing emails. However, a significant challenge to the reliability and usability of these models arises in the form of hallucinations, wherein the model generates content that is fundamentally incorrect.
To address this issue, a prevalent approach involves minimising hallucinations by adjusting the sampling randomness of the foundation model during execution, achieved through the manipulation of the temperature hyperparameter. This paper delves into the validity of this method and explores its associated caveats. The findings are subsequently distilled into comprehensible insights applicable to a diverse audience. Moreover, the paper outlines potential avenues for future research, offering a foundation for further exploration in this domain. |
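The temperature mechanism the abstract refers to divides each logit by the temperature before the softmax: values below 1 sharpen the output distribution toward the most probable token (reducing sampling randomness), while values above 1 flatten it. A minimal sketch of this, with illustrative logit values that are not drawn from the thesis itself:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits into a probability distribution.

    Lower temperature sharpens the distribution toward the
    highest-logit token; higher temperature flattens it.
    """
    if temperature <= 0:
        raise ValueError("temperature must be > 0 (use argmax for greedy decoding)")
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four candidate tokens.
logits = [2.0, 1.0, 0.5, 0.1]

low = softmax_with_temperature(logits, 0.2)   # near-greedy decoding
high = softmax_with_temperature(logits, 2.0)  # closer to uniform

print(low[0] > high[0])  # True: the top token dominates more at low temperature
```

Whether this sharpening actually reduces hallucinations, rather than merely making outputs more deterministic, is the empirical question the project examines.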
author |
Tan, Max Zheyuan |
author2 |
Jun Zhao |
format |
Final Year Project |
title |
Reducing LLM hallucinations: exploring the efficacy of temperature adjustment through empirical examination and analysis |
publisher |
Nanyang Technological University |
publishDate |
2024 |
url |
https://hdl.handle.net/10356/174789 |