Hallucination detection: Robustly discerning reliable answers in Large Language Models

Large language models (LLMs) have gained widespread adoption in various natural language processing tasks, including question answering and dialogue systems. However, a major drawback of LLMs is the issue of hallucination, where they generate unfaithful or inconsistent content that deviates from the...

Bibliographic Details
Main Authors: CHEN, Yuyuan; FU, Qiang; YUAN, Yichen; WEN, Zhihao; FAN, Ge; LIU, Dayiheng; ZHANG, Dongmei; LI, Zhixu; XIAO, Yanghua
Format: text
Language: English
Published in: Institutional Knowledge at Singapore Management University, 2023
Subjects:
Online Access: https://ink.library.smu.edu.sg/sis_research/8464
https://ink.library.smu.edu.sg/context/sis_research/article/9467/viewcontent/3583780.3614905_pv.pdf
Institution: Singapore Management University