TeLLMe what you see: using LLMs to explain neurons in vision models
As the role of machine learning models continues to expand across diverse fields, so does the demand for model interpretability. This is particularly crucial for deep learning models, which are often referred to as black boxes due to their highly nonlinear nature. This paper proposes a novel method f...
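The method itself is truncated in the abstract above, so the sketch below only illustrates the general shape of LLM-based neuron explanation, not necessarily this project's approach: score a single channel of a torchvision ResNet-50 on captioned images, keep the top activators, and build a prompt asking an LLM to name the concept the neuron detects. The chosen layer, channel index, placeholder dataset, and the abstract `some_llm` call are all assumptions for illustration.

```python
# Hypothetical sketch of the general "LLM explains a neuron" recipe (not the
# project's actual method): rank captioned images by how strongly they activate
# one neuron, then ask an LLM what the top captions have in common.
import torch
import torchvision.models as models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

LAYER, CHANNEL = model.layer4[2].conv3, 7  # assumed target neuron

activations = {}
def hook(module, inputs, output):
    # Mean activation of the chosen channel for the current image.
    activations["score"] = output[0, CHANNEL].mean().item()
LAYER.register_forward_hook(hook)

def neuron_score(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        model(img)
    return activations["score"]

# `dataset` stands in for any captioned image collection.
dataset = [("img1.jpg", "a red fire truck"), ("img2.jpg", "a stop sign")]
top = sorted(dataset, key=lambda pair: neuron_score(pair[0]), reverse=True)[:10]

prompt = (
    "These captions describe images that strongly activate one neuron in a "
    "vision model. In one short sentence, what concept might it detect?\n"
    + "\n".join(f"- {caption}" for _, caption in top)
)
# explanation = some_llm(prompt)  # any chat-capable LLM; call left abstract here
print(prompt)
```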
Saved in:
Main Author: Guertler, Leon
Other Authors: Luu Anh Tuan
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/174298
Institution: Nanyang Technological University
Similar Items
- Demystifying AI: bridging the explainability gap in LLMs
  by: Chan, Darren Inn Siew
  Published: (2024)
- Toward conversational interpretations of neural networks: data collection
  by: Yeow, Ming Xuan
  Published: (2024)
- Believing the bot: examining what makes us trust large language models (LLMs) for political information
  by: Deng, Nicholas Yi Dar, et al.
  Published: (2024)
- Explainable AI for medical over-investigation identification
  by: Suresh Kumar Rathika
  Published: (2024)
- Building more explainable artificial intelligence with argumentation
  by: Zeng, Zhiwei, et al.
  Published: (2020)