MITIGATING HALLUCINATION IN AUTOMATED INTERVIEWS USING A CHAIN-OF-VERIFICATION APPROACH ON LLMS
The use of Large Language Models (LLMs) in recruitment processes presents significant challenges, primarily due to their tendency to produce hallucinations. This research develops an anti-hallucination component based on the chain-of-verification method, utilizing GPT-3.5 and GPT-4...
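The abstract excerpt does not include implementation details, but a chain-of-verification (CoVe) pipeline of the kind it refers to typically runs in four steps: draft a baseline response, plan verification questions about the draft's factual claims, answer those questions independently of the draft, and then revise the draft against the answers. The sketch below illustrates that flow in an interview-assessment setting using the OpenAI chat API; the prompts, model choice, and the helper functions `chat` and `chain_of_verification` are illustrative assumptions, not the thesis implementation.

```python
# Minimal chain-of-verification (CoVe) sketch for an automated-interview setting.
# Assumes the openai Python package (>=1.0) and an OPENAI_API_KEY in the environment.
# Prompts, model names, and structure are illustrative, not the author's code.
from openai import OpenAI

client = OpenAI()

def chat(prompt: str, model: str = "gpt-4") -> str:
    """Send a single-turn prompt and return the model's text reply."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return (resp.choices[0].message.content or "").strip()

def chain_of_verification(question: str, candidate_answer: str) -> str:
    # 1. Baseline: draft a factual assessment of the candidate's answer.
    baseline = chat(
        f"Interview question: {question}\nCandidate answer: {candidate_answer}\n"
        "Draft a short factual assessment of this answer."
    )
    # 2. Plan: ask which factual claims in the draft should be checked.
    plan = chat(
        f"Draft assessment:\n{baseline}\n"
        "List 3 verification questions that would check the factual claims above, one per line."
    )
    questions = [q.strip() for q in plan.splitlines() if q.strip()]
    # 3. Execute: answer each verification question without showing the draft,
    #    so the check is not biased by the original (possibly hallucinated) text.
    verifications = [f"Q: {q}\nA: {chat(q)}" for q in questions]
    # 4. Revise: rewrite the assessment using only claims the checks support.
    return chat(
        f"Draft assessment:\n{baseline}\n\nVerification results:\n"
        + "\n".join(verifications)
        + "\nRewrite the assessment, removing or correcting any claim the verification results do not support."
    )
```

The hallucination mitigation comes from step 3: verification questions are answered in isolation, so unsupported claims in the draft can be caught and dropped in the final revision.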
Main Author: | |
---|---|
Format: | Final Project |
Language: | Indonesian |
Online Access: | https://digilib.itb.ac.id/gdl/view/85547 |
Institution: | Institut Teknologi Bandung |