More trustworthy generative AI through hallucination reduction

Amid the current wave of rapid development, artificial intelligence is being widely applied across industries, and its reliability is receiving increasing attention. Current research largely studies the reliability of multimodal large language models through the lens of hallucination. Hallucination can expose users to misleading output and cause more serious problems, including security risks and an erosion of trust in large models. This report examines the impact of hallucination on the reliability of multimodal large language models and applies a set of evaluation criteria to compare the reliability of different large language models.


Bibliographic Details
Main Author: He, Guoshun
Other Authors: Alex Chichung Kot
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects: Computer and Information Science; Hallucination; Generative AI; Benchmark
Online Access: https://hdl.handle.net/10356/177162
School: School of Electrical and Electronic Engineering
Contact: EACKOT@ntu.edu.sg
Degree: Bachelor's degree
Citation: He, G. (2024). More trustworthy generative AI through hallucination reduction. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/177162