AutoFocus: Interpreting attention-based neural networks by code perturbation
Despite their adoption in software engineering tasks, deep neural networks are mostly treated as a black box because of the difficulty of interpreting how they infer outputs from inputs. To address this problem, we propose AutoFocus, an automated approach for rating and visualizing the im...
Main Authors: BUI, Duy Quoc Nghi; YU, Yijun; JIANG, Lingxiao
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2019
Online Access: https://ink.library.smu.edu.sg/sis_research/4817 https://ink.library.smu.edu.sg/context/sis_research/article/5820/viewcontent/ase19autofocus.pdf
Institution: Singapore Management University
Similar Items
- Bilateral dependency neural networks for cross-language algorithm classification
  by: BUI, Duy Quoc Nghi, et al.
  Published: (2019)
- The oscillation of perturbed functional differential equations
  by: Agarwal, R.P., et al.
  Published: (2014)
- ATTENTIVE RECURRENT NEURAL NETWORKS
  by: LI MINGMING
  Published: (2017)
- Novel deep learning methods combined with static analysis for source code processing
  by: BUI, Duy Quoc Nghi
  Published: (2020)
- Overlapping attentional networks yield divergent behavioral predictions across tasks: Neuromarkers for diffuse and focused attention?
  by: Wu, E.X.W., et al.
  Published: (2021)