Framework to evaluate and test defences against hallucination in large language model
Recent advances in AI, particularly large language models (LLMs), have enabled unprecedented capabilities in natural language processing (NLP) tasks, including content generation, translation, and question answering (QA). However, like any new technology, LLMs faced...
| Main Author: | Pan, Johnny Shi Han |
|---|---|
| Other Authors: | Luu Anh Tuan |
| Format: | Final Year Project |
| Language: | English |
| Published: | Nanyang Technological University, 2024 |
| Online Access: | https://hdl.handle.net/10356/180892 |
| Institution: | Nanyang Technological University |
Similar Items
- Automating dataset updates towards reliable and timely evaluation of Large Language Models
  by: YING, Jiahao, et al. Published: (2024)
- Large language model (LLM) with retrieve-augmented generation (RAG) for legal case research
  by: Liu, Zihao. Published: (2024)
- Composition distillation for semantic sentence embeddings
  by: Vaanavan, Sezhiyan. Published: (2024)
- LLMs-as-instructors : Learning from errors toward automating model improvement
  by: YING, Jiahao, et al. Published: (2024)
- Programmatic policies for interpretable reinforcement learning using pre-trained models
  by: Tu, Xia Yang. Published: (2024)