More trustworthy generative AI through hallucination reduction
| Main Author: | |
|---|---|
| Other Authors: | |
| Format: | Final Year Project |
| Language: | English |
| Published: | Nanyang Technological University, 2024 |
| Subjects: | |
| Online Access: | https://hdl.handle.net/10356/177162 |
| Institution: | Nanyang Technological University |
| Summary: | In the current wave of rapid development in artificial intelligence, AI is being applied widely across industries, and its reliability is receiving increasing attention. Current research on the reliability of multimodal large language models focuses largely on hallucination. Hallucination can mislead users with false information and lead to serious problems such as security risks and an erosion of trust in large models. This report examines the impact of hallucination on the reliability of multimodal large language models and applies a set of evaluation criteria to compare the reliability of different large language models. |