More trustworthy generative AI through hallucination reduction
Amid the current wave of rapid development in artificial intelligence, AI is being widely applied across industries, and its reliability is receiving increasing attention. Current research largely focuses on hallucination as a way to study the reliability...
| Main Author: | He, Guoshun |
| ---|--- |
| Other Authors: | Alex Chichung Kot |
| Format: | Final Year Project |
| Language: | English |
| Published: | Nanyang Technological University, 2024 |
| Online Access: | https://hdl.handle.net/10356/177162 |
| Institution: | Nanyang Technological University |
Similar Items
- Towards a more trustworthy generative artificial intelligence
  by: Cheong, Ben Wee Joon
  Published: (2024)
- Reducing LLM hallucinations: exploring the efficacy of temperature adjustment through empirical examination and analysis
  by: Tan, Max Zheyuan
  Published: (2024)
- Joint face hallucination and deblurring via structure generation and detail enhancement
  by: SONG, Yibing, et al.
  Published: (2019)
- Data is the new gold: A Singapore perspective on the duty of care concerning a dataset’s role in contributing to bias and AI hallucinations
  by: Daniel SEAH
  Published: (2023)
- Generative AI Art - portrait photography
  by: Ma, Jiaxin
  Published: (2024)