HyDRA: hypergradient data relevance analysis for interpreting deep neural networks.
The behaviors of deep neural networks (DNNs) are notoriously resistant to human interpretations. In this paper, we propose Hypergradient Data Relevance Analysis, or HYDRA, which interprets the predictions made by DNNs as effects of their training data. Existing approaches generally estimate data contributions around the final model parameters and ignore how the training data shape the optimization trajectory. By unrolling the hypergradient of test loss w.r.t. the weights of training data, HYDRA assesses the contribution of training data toward test data points throughout the training trajectory. In order to accelerate computation, we remove the Hessian from the calculation and prove that, under moderate conditions, the approximation error is bounded. Corroborating this theoretical claim, empirical results indicate the error is indeed small. In addition, we quantitatively demonstrate that HYDRA outperforms influence functions in accurately estimating data contribution and detecting noisy data labels. The source code is available at https://github.com/cyyever/aaaihydra8686.
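The abstract describes the core mechanism: unroll the hypergradient of the test loss with respect to per-example training-data weights, and drop the Hessian terms to make the computation tractable. As a rough illustration only (not the authors' implementation), one simplified reading is that the Hessian-free approximation reduces a training point's relevance to accumulated inner products between its per-step SGD gradients and a test-loss gradient. The toy sketch below adopts that simplification on a small logistic-regression problem and, as a further shortcut, evaluates the test gradient only at the final parameters; all variable names are hypothetical.

```python
import numpy as np

# Toy, Hessian-free sketch of hypergradient data relevance.
# Assumption (a simplification, not the paper's exact algorithm): with
# Hessian terms dropped, example i's relevance to a test point is the
# accumulated inner product of its per-step SGD gradients with the
# test-loss gradient, scaled by the learning rate.

rng = np.random.default_rng(0)
n, d = 20, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = (X @ w_true > 0).astype(float)
y[0] = 1.0 - y[0]                      # flip one label: a "noisy" example

x_test = rng.normal(size=d)
y_test = float(x_test @ w_true > 0)

def grad(theta, x, t):
    """Gradient of the logistic (cross-entropy) loss at one example."""
    p = 1.0 / (1.0 + np.exp(-x @ theta))
    return (p - t) * x

lr, epochs = 0.5, 50
theta = np.zeros(d)
trajectory = []                        # (example index, gradient) per SGD step
for _ in range(epochs):
    for i in range(n):
        g = grad(theta, X[i], y[i])
        trajectory.append((i, g))
        theta -= lr * g                # plain SGD, batch size 1

g_test = grad(theta, x_test, y_test)   # test gradient at final parameters
relevance = np.zeros(n)
for i, g in trajectory:
    relevance[i] += lr * (g @ g_test)  # larger = that example's updates
                                       # aligned with reducing the test loss

print(relevance.round(3))
```

In this convention a large positive score marks training points whose updates pushed the parameters toward lower test loss, and noisy-label examples tend to score low. HyDRA itself tracks contributions throughout the optimization trajectory and proves the Hessian-free approximation error is bounded; this toy omits those refinements.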
| Main Authors: | Chen, Yuanyuan; Li, Boyang; Yu, Han; Wu, Pengcheng; Miao, Chunyan |
|---|---|
| Other Authors: | School of Computer Science and Engineering |
| Format: | Conference or Workshop Item |
| Language: | English |
| Published: | 35th AAAI Conference on Artificial Intelligence (AAAI 2021), 2021 |
| Subjects: | Interpretability; Neural Network; Machine Learning; Artificial Intelligence |
| Online Access: | https://aaai.org/Conferences/AAAI-21/ https://hdl.handle.net/10356/147652 |
| Institution: | Nanyang Technological University |
id: sg-ntu-dr.10356-147652
record_format: dspace
spelling: sg-ntu-dr.10356-147652 (last modified 2025-04-29T05:33:52Z). HyDRA: hypergradient data relevance analysis for interpreting deep neural networks. Authors: Chen, Yuanyuan; Li, Boyang; Yu, Han; Wu, Pengcheng; Miao, Chunyan. Affiliations: School of Computer Science and Engineering; Alibaba-NTU Joint Research Institute. Venue: 35th AAAI Conference on Artificial Intelligence (AAAI 2021). Keywords: Interpretability; Neural Network; Machine Learning; Artificial Intelligence. Source code: https://github.com/cyyever/aaaihydra8686.
funding: AI Singapore; National Research Foundation (NRF). This research is supported by Alibaba Group through the Alibaba Innovative Research (AIR) Program and the Alibaba-NTU Singapore Joint Research Institute (JRI) (Alibaba-NTU-AIR2019B1), Nanyang Technological University, Singapore; the Nanyang Assistant Professorship (NAP); NTU-SDU-CFAIR (NSC-2019-011); the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-GC-2019-003); the RIE 2020 Advanced Manufacturing and Engineering (AME) Programmatic Fund (No. A20G8b0102), Singapore; and the National Research Foundation, Singapore, Prime Minister's Office under its NRF Investigatorship Programme (NRFI Award No: NRF-NRFI05-2019-0002). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the funding agencies.
dates: accessioned 2021-08-11T02:09:38Z; available 2021-08-11T02:09:38Z; issued 2020
type: Conference Paper
citation: Chen, Y., Li, B., Yu, H., Wu, P. & Miao, C. (2020). HyDRA: hypergradient data relevance analysis for interpreting deep neural networks. 35th AAAI Conference on Artificial Intelligence (AAAI 2021). https://aaai.org/Conferences/AAAI-21/ https://hdl.handle.net/10356/147652
grants: Alibaba-NTU-AIR2019B1; NSC-2019-011; AISG-GC-2019-003; A20G8b0102; NRFI05-2019-0002
rights: © 2021 Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Interpretability; Neural Network; Machine Learning; Artificial Intelligence
description: The behaviors of deep neural networks (DNNs) are notoriously resistant to human interpretations. In this paper, we propose Hypergradient Data Relevance Analysis, or HYDRA, which interprets the predictions made by DNNs as effects of their training data. Existing approaches generally estimate data contributions around the final model parameters and ignore how the training data shape the optimization trajectory. By unrolling the hypergradient of test loss w.r.t. the weights of training data, HYDRA assesses the contribution of training data toward test data points throughout the training trajectory. In order to accelerate computation, we remove the Hessian from the calculation and prove that, under moderate conditions, the approximation error is bounded. Corroborating this theoretical claim, empirical results indicate the error is indeed small. In addition, we quantitatively demonstrate that HYDRA outperforms influence functions in accurately estimating data contribution and detecting noisy data labels. The source code is available at https://github.com/cyyever/aaaihydra8686.
author2: School of Computer Science and Engineering
author_facet: School of Computer Science and Engineering; Chen, Yuanyuan; Li, Boyang; Yu, Han; Wu, Pengcheng; Miao, Chunyan
format: Conference or Workshop Item
author: Chen, Yuanyuan; Li, Boyang; Yu, Han; Wu, Pengcheng; Miao, Chunyan
author_sort: Chen, Yuanyuan
title: HyDRA: hypergradient data relevance analysis for interpreting deep neural networks.
publishDate: 2021
url: https://aaai.org/Conferences/AAAI-21/ https://hdl.handle.net/10356/147652