Toward conversational interpretations of neural networks: data collection
Neural networks are powerful techniques for automated decision making. However, they are also black boxes, which human experts find difficult to understand. Recent work performed at NTU and internationally suggests that conversation is an effective form of interpreting neural networks for layperson users…
Main Author: Yeow, Ming Xuan
Other Authors: Li Boyang
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/181279
Institution: Nanyang Technological University
Similar Items
- TeLLMe what you see: using LLMs to explain neurons in vision models
  by: Guertler, Leon
  Published: (2024)
- Demystifying AI: bridging the explainability gap in LLMs
  by: Chan, Darren Inn Siew
  Published: (2024)
- May I ask a follow-up question? Understanding the benefits of conversations in neural network explainability
  by: Zhang, Tong, et al.
  Published: (2024)
- Programmatic policies for interpretable reinforcement learning using pre-trained models
  by: Tu, Xia Yang
  Published: (2024)
- Explainable AI for medical over-investigation identification
  by: Suresh Kumar Rathika
  Published: (2024)