MITIGATING HALLUCINATION IN AUTOMATED INTERVIEWS USING A CHAIN-OF-VERIFICATION APPROACH ON LLMS

The use of Large Language Models (LLMs) in recruitment processes presents significant challenges, primarily due to their inherent tendency to produce hallucinations. This research develops an anti-hallucination component based on the chain-of-verification (CoVe) method, utilizing GPT-3.5 and GPT-4...
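The abstract names the chain-of-verification method as the basis of the anti-hallucination component. A minimal sketch of that general verify-then-revise loop is below; the function names, prompt wording, and the `llm` callable are illustrative assumptions, not details taken from the thesis itself:

```python
from typing import Callable, List


def chain_of_verification(question: str, llm: Callable[[str], str]) -> str:
    """Sketch of a chain-of-verification (CoVe) pipeline: draft, verify, revise.

    `llm` is any callable that maps a prompt string to a completion string
    (e.g. a wrapper around a GPT-3.5/GPT-4 API call).
    """
    # 1. Draft a baseline answer.
    baseline = llm(f"Answer the question:\n{question}")

    # 2. Plan verification questions that probe the facts in the draft.
    plan = llm(
        "List verification questions, one per line, to fact-check "
        f"this answer:\n{baseline}"
    )
    verif_questions: List[str] = [q for q in plan.splitlines() if q.strip()]

    # 3. Answer each verification question independently of the draft,
    #    so errors in the draft do not bias the checks.
    checks = [f"Q: {q}\nA: {llm(q)}" for q in verif_questions]

    # 4. Revise the draft in light of the verification answers.
    revised = llm(
        f"Original question: {question}\n"
        f"Draft answer: {baseline}\n"
        "Verification Q&A:\n" + "\n".join(checks) + "\n"
        "Write a final, corrected answer."
    )
    return revised
```

Keeping the model behind a plain callable makes the pipeline easy to test with a stub before wiring it to a real API.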


Bibliographic Details
Main Author: Bintang Nurmansyah, Ilham
Format: Final Project
Language: Indonesian
Online Access:https://digilib.itb.ac.id/gdl/view/85547
Institution: Institut Teknologi Bandung