AutoFocus: Interpreting attention-based neural networks by code perturbation
Despite being adopted in software engineering tasks, deep neural networks are treated mostly as a black box due to the difficulty in interpreting how the networks infer the outputs from the inputs. To address this problem, we propose AutoFocus, an automated approach for rating and visualizing the im...
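The abstract above is truncated in this record, but the title indicates the general idea: perturb elements of an input program, re-run the attention-based model, and rate each element by how much the prediction changes. The following is a minimal sketch of that generic perturbation-scoring idea, not the authors' AutoFocus implementation; the `predict_proba` callback, the mask token, and the toy model are hypothetical placeholders.

```python
import math
from typing import Callable, List

def perturbation_importance(
    tokens: List[str],
    predict_proba: Callable[[List[str]], float],
    mask_token: str = "<MASK>",
) -> List[float]:
    """Rate each token by the drop in predicted probability when it is masked.

    `predict_proba` is a hypothetical stand-in for any model that maps a
    token sequence to a probability for the predicted class.
    """
    baseline = predict_proba(tokens)
    scores = []
    for i in range(len(tokens)):
        # Replace one token at a time and measure the change in the prediction.
        perturbed = tokens[:i] + [mask_token] + tokens[i + 1:]
        scores.append(baseline - predict_proba(perturbed))
    return scores

if __name__ == "__main__":
    # Toy scoring function standing in for a trained model.
    toy_model = lambda toks: 1.0 / (1.0 + math.exp(-toks.count("if")))
    snippet = ["if", "x", ">", "0", ":", "return", "x"]
    for tok, score in zip(snippet, perturbation_importance(snippet, toy_model)):
        print(f"{tok:>8}  importance={score:+.3f}")
```

The abstract mentions rating and visualizing importance; this sketch covers only a generic rating step, not the visualization or the comparison with attention weights.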
Saved in:
Main Authors: BUI, Duy Quoc Nghi; YU, Yijun; JIANG, Lingxiao
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2019
Online Access: https://ink.library.smu.edu.sg/sis_research/4817
https://ink.library.smu.edu.sg/context/sis_research/article/5820/viewcontent/ase19autofocus.pdf
Similar Items
- Bilateral dependency neural networks for cross-language algorithm classification
  by: BUI, Duy Quoc Nghi, et al.
  Published: (2019)
- The oscillation of perturbed functional differential equations
  by: Agarwal, R.P., et al.
  Published: (2014)
- ATTENTIVE RECURRENT NEURAL NETWORKS
  by: LI MINGMING
  Published: (2017)
- Novel deep learning methods combined with static analysis for source code processing
  by: BUI, Duy Quoc Nghi
  Published: (2020)
- Overlapping attentional networks yield divergent behavioral predictions across tasks: Neuromarkers for diffuse and focused attention?
  by: Wu, E.X.W., et al.
  Published: (2021)