Mitigating style-image hallucination in large vision language models
LLMs are widely applied across various domains, yet a significant challenge remains—their performance deteriorates sharply in out-of-domain scenarios, often leading to increased hallucinations. Despite its importance, this phenomenon has received limited attention in academic research. To address th...
Saved in:
Main Author: He, Guoshun
Other Authors: Alex Chichung Kot
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2025
Online Access: https://hdl.handle.net/10356/182918
Institution: Nanyang Technological University
Similar Items
- Mitigating fine-grained hallucination by fine-tuning large vision-language models with caption rewrites
  by: WANG, Lei, et al.
  Published: (2024)
- More trustworthy generative AI through hallucination reduction
  by: He, Guoshun
  Published: (2024)
- Hallucination detection: Robustly discerning reliable answers in Large Language Models
  by: CHEN, Yuyuan, et al.
  Published: (2023)
- Reducing LLM hallucinations: exploring the efficacy of temperature adjustment through empirical examination and analysis
  by: Tan, Max Zheyuan
  Published: (2024)
- Joint face hallucination and deblurring via structure generation and detail enhancement
  by: SONG, Yibing, et al.
  Published: (2019)