Framework to evaluate and test defences against hallucination in large language models
Recent advances in AI, particularly large language models (LLMs), have enabled unprecedented capabilities in natural language processing (NLP) tasks, including content generation, translation, and question answering (QA). However, like any new technology, LLMs face...
| Main Author: | |
| --- | --- |
| Other Authors: | |
| Format: | Final Year Project |
| Language: | English |
| Published: | Nanyang Technological University, 2024 |
| Subjects: | |
| Online Access: | https://hdl.handle.net/10356/180892 |
| Institution: | Nanyang Technological University |