MITIGATING HALLUCINATION IN AUTOMATED INTERVIEWS USING A CHAIN-OF-VERIFICATION APPROACH ON LLMS
The use of Large Language Models (LLMs) in recruitment processes presents significant challenges, primarily due to the inherent tendency of LLMs to produce hallucinations. This research develops an anti-hallucination component based on the chain-of-verification method, utilizing GPT-3.5 and GPT-4...
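The abstract names chain-of-verification (CoVe) as the anti-hallucination component. As a rough illustration of that idea only, the sketch below drafts an answer, plans verification questions, answers them independently, and then revises the draft. The prompts, helper names, and use of the OpenAI chat-completions client are assumptions for illustration, not the implementation described in this final project.

```python
# Minimal sketch of the chain-of-verification (CoVe) loop:
# draft -> plan verification questions -> answer them independently -> revise.
# Model choice, prompts, and function names are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask(prompt: str, model: str = "gpt-4") -> str:
    """Single chat-completion call; returns the model's text reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def chain_of_verification(question: str) -> str:
    # 1. Draft an initial (possibly hallucinated) answer.
    draft = ask(f"Answer the interview question concisely:\n{question}")

    # 2. Plan short fact-checking questions that probe claims in the draft.
    plan = ask(
        "List short fact-checking questions, one per line, that would verify "
        f"the claims in this answer:\n{draft}"
    )
    verification_questions = [q.strip() for q in plan.splitlines() if q.strip()]

    # 3. Answer each verification question independently of the draft,
    #    so errors in the draft do not bias the checks.
    verifications = "\n".join(
        f"Q: {q}\nA: {ask(q)}" for q in verification_questions
    )

    # 4. Revise the draft in light of the verification answers.
    return ask(
        f"Original question:\n{question}\n\nDraft answer:\n{draft}\n\n"
        f"Verification Q&A:\n{verifications}\n\n"
        "Rewrite the answer, correcting anything the verification contradicts."
    )


if __name__ == "__main__":
    print(chain_of_verification("Summarize the candidate's experience with Kubernetes."))
```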
Main Author: | Bintang Nurmansyah, Ilham |
---|---|
Format: | Final Project |
Language: | Indonesian |
Online Access: | https://digilib.itb.ac.id/gdl/view/85547 |
Institution: | Institut Teknologi Bandung |
Similar Items
- Chain of preference optimization: Improving chain-of-thought reasoning in LLMs
  by: ZHANG, Xuan, et al. Published: (2024)
- LLMs-as-instructors: Learning from errors toward automating model improvement
  by: YING, Jiahao, et al. Published: (2024)
- Mitigating fine-grained hallucination by fine-tuning large vision-language models with caption rewrites
  by: WANG, Lei, et al. Published: (2024)
- Cue-CoT: Chain-of-thought prompting for responding to in-depth dialogue questions with LLMs
  by: WANG, Hongru, et al. Published: (2023)
- Relationism and a robust account of hallucinations
  by: Tang, Lemuel Lemin. Published: (2021)