HyDRA: Hypergradient Data Relevance Analysis for Interpreting Deep Neural Networks.
The behaviors of deep neural networks (DNNs) are notoriously resistant to human interpretations. In this paper, we propose Hypergradient Data Relevance Analysis, or HYDRA, which interprets the predictions made by DNNs as effects of their training data. Existing approaches generally estimate data contributions...
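(A minimal, hypothetical sketch of the general gradient-based data-relevance idea the abstract alludes to, not the paper's HYDRA algorithm, which approximates hypergradients over the whole optimization trajectory. The function names, the PyTorch setup, and the single-step first-order approximation are all illustrative assumptions.)

```python
import torch

def flat_grad(model, loss_fn, x, y):
    # Flattened gradient of the loss on one example w.r.t. all trainable parameters.
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, [p for p in model.parameters() if p.requires_grad])
    return torch.cat([g.reshape(-1) for g in grads])

def first_order_relevance(model, loss_fn, train_xy, test_xy, lr=0.1):
    # Hypothetical first-order relevance score: one SGD step on the training
    # example changes the test loss by roughly -lr * <g_train, g_test>, so a
    # large positive inner product marks the training example as helpful for
    # this particular test prediction.
    g_train = flat_grad(model, loss_fn, *train_xy)
    g_test = flat_grad(model, loss_fn, *test_xy)
    return (lr * torch.dot(g_train, g_test)).item()
```

Ranking all training examples by such a score gives a crude data-relevance interpretation of a single prediction; the paper's contribution, per the abstract, is to go beyond this kind of final-parameter, single-step view by tracking how training data shape the optimization via hypergradients.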
| Main Authors: | Chen, Yuanyuan; Li, Boyang; Yu, Han; Wu, Pengcheng; Miao, Chunyan |
|---|---|
| Other Authors: | School of Computer Science and Engineering |
| Format: | Conference or Workshop Item |
| Language: | English |
| Published: | 2021 |
| Online Access: | https://aaai.org/Conferences/AAAI-21/ https://hdl.handle.net/10356/147652 |
| Institution: | Nanyang Technological University |
Similar Items
- An interpretable neural fuzzy inference system for predictions of underpricing in initial public offerings
  by: Qian, Xiaolin, et al.
  Published: (2018)
- Towards interpreting recurrent neural networks through probabilistic abstraction
  by: Dong, Guoliang, et al.
  Published: (2020)
- Leveraging the trade-off between accuracy and interpretability in a hybrid intelligent system
  by: Miao, Chunyan, et al.
  Published: (2018)
- Explaining and improving deep neural networks via concept-based explanations
  by: Wickramanayake, Sandareka Kumudu Kumari
  Published: (2022)
- AutoFocus: Interpreting attention-based neural networks by code perturbation
  by: Bui, Duy Quoc Nghi, et al.
  Published: (2019)