Understanding adversarial robustness via critical attacking route

Deep neural networks (DNNs) are vulnerable to adversarial examples, which are generated from inputs with imperceptible perturbations. Understanding the adversarial robustness of DNNs has become an important issue, and progress on it would certainly lead to better practical deep learning applications. To address this issue, we try to explain the adversarial robustness of deep models from a new perspective, the critical attacking route, which is computed by a gradient-based influence propagation strategy. Similar to rumor spreading in social networks, we believe that adversarial noises are amplified and propagated through the critical attacking route. By exploiting neurons' influences layer by layer, we compose the critical attacking route from the neurons that make the highest contributions to the model decision. In this paper, we first draw the close connection between adversarial robustness and the critical attacking route, as the route makes the most non-trivial contributions to model predictions in the adversarial setting. By constraining the propagation process and node behaviors on this route, we can weaken the noise propagation and improve model robustness. We also find that critical attacking neurons are useful for evaluating sample adversarial hardness: images with higher stimuli are more easily perturbed into adversarial examples. (C) 2020 The Author(s). Published by Elsevier Inc.
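As a rough illustration of the gradient-based influence idea described in the abstract, here is a minimal PyTorch sketch, not the paper's implementation: it scores each hidden unit by the magnitude of its gradient-weighted activation and keeps the top-k units per layer as a stand-in for the critical attacking route. The function name `critical_attacking_route`, the choice to hook only ReLU layers, the |activation x gradient| score, and the value of `k` are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def critical_attacking_route(model, x, target, k=10):
    """Hypothetical sketch: rank each layer's neurons by |activation * gradient|
    and keep the top-k per layer. The paper's exact influence-propagation
    rule may differ."""
    acts, handles = {}, []

    def save(name):
        def hook(module, inputs, output):
            output.retain_grad()          # keep the grad w.r.t. this intermediate tensor
            acts[name] = output
        return hook

    # Hooking every ReLU output; which layers count as "nodes" is an assumption.
    for name, module in model.named_modules():
        if isinstance(module, nn.ReLU):
            handles.append(module.register_forward_hook(save(name)))

    loss = F.cross_entropy(model(x), target)
    loss.backward()                       # populates .grad on the retained activations

    route = {}
    for name, a in acts.items():
        # Per-neuron influence score, averaged over the batch.
        influence = (a * a.grad).abs().flatten(1).mean(0)
        route[name] = influence.topk(min(k, influence.numel())).indices

    for h in handles:
        h.remove()
    return route

# Example usage on a toy classifier (shapes are arbitrary).
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
print(critical_attacking_route(model, x, y, k=5))
```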


Bibliographic Details
Main Authors: LI, Tianlin, LIU, Aishan, LIU, Xianglong, XU, Yitao, ZHANG, Chongzhi, XIE, Xiaofei
Format: text
Language:English
Published: Institutional Knowledge at Singapore Management University 2021
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/7053
https://ink.library.smu.edu.sg/context/sis_research/article/8056/viewcontent/1_s2.0_S0020025520308124_main.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-8056
record_format dspace
spelling sg-smu-ink.sis_research-8056 2024-02-28T01:05:09Z Understanding adversarial robustness via critical attacking route LI, Tianlin LIU, Aishan LIU, Xianglong XU, Yitao ZHANG, Chongzhi XIE, Xiaofei Deep neural networks (DNNs) are vulnerable to adversarial examples, which are generated from inputs with imperceptible perturbations. Understanding the adversarial robustness of DNNs has become an important issue, and progress on it would certainly lead to better practical deep learning applications. To address this issue, we try to explain the adversarial robustness of deep models from a new perspective, the critical attacking route, which is computed by a gradient-based influence propagation strategy. Similar to rumor spreading in social networks, we believe that adversarial noises are amplified and propagated through the critical attacking route. By exploiting neurons' influences layer by layer, we compose the critical attacking route from the neurons that make the highest contributions to the model decision. In this paper, we first draw the close connection between adversarial robustness and the critical attacking route, as the route makes the most non-trivial contributions to model predictions in the adversarial setting. By constraining the propagation process and node behaviors on this route, we can weaken the noise propagation and improve model robustness. We also find that critical attacking neurons are useful for evaluating sample adversarial hardness: images with higher stimuli are more easily perturbed into adversarial examples. (C) 2020 The Author(s). Published by Elsevier Inc. 2021-02-01T08:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/7053 info:doi/10.1016/j.ins.2020.08.043 https://ink.library.smu.edu.sg/context/sis_research/article/8056/viewcontent/1_s2.0_S0020025520308124_main.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Critical attacking route Adversarial robustness Model interpretation Information Security Software Engineering
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Critical attacking route
Adversarial robustness
Model interpretation
Information Security
Software Engineering
spellingShingle Critical attacking route
Adversarial robustness
Model interpretation
Information Security
Software Engineering
LI, Tianlin
LIU, Aishan
LIU, Xianglong
XU, Yitao
ZHANG, Chongzhi
XIE, Xiaofei
Understanding adversarial robustness via critical attacking route
description Deep neural networks (DNNs) are vulnerable to adversarial examples, which are generated from inputs with imperceptible perturbations. Understanding the adversarial robustness of DNNs has become an important issue, and progress on it would certainly lead to better practical deep learning applications. To address this issue, we try to explain the adversarial robustness of deep models from a new perspective, the critical attacking route, which is computed by a gradient-based influence propagation strategy. Similar to rumor spreading in social networks, we believe that adversarial noises are amplified and propagated through the critical attacking route. By exploiting neurons' influences layer by layer, we compose the critical attacking route from the neurons that make the highest contributions to the model decision. In this paper, we first draw the close connection between adversarial robustness and the critical attacking route, as the route makes the most non-trivial contributions to model predictions in the adversarial setting. By constraining the propagation process and node behaviors on this route, we can weaken the noise propagation and improve model robustness. We also find that critical attacking neurons are useful for evaluating sample adversarial hardness: images with higher stimuli are more easily perturbed into adversarial examples. (C) 2020 The Author(s). Published by Elsevier Inc.
format text
author LI, Tianlin
LIU, Aishan
LIU, Xianglong
XU, Yitao
ZHANG, Chongzhi
XIE, Xiaofei
author_facet LI, Tianlin
LIU, Aishan
LIU, Xianglong
XU, Yitao
ZHANG, Chongzhi
XIE, Xiaofei
author_sort LI, Tianlin
title Understanding adversarial robustness via critical attacking route
title_short Understanding adversarial robustness via critical attacking route
title_full Understanding adversarial robustness via critical attacking route
title_fullStr Understanding adversarial robustness via critical attacking route
title_full_unstemmed Understanding adversarial robustness via critical attacking route
title_sort understanding adversarial robustness via critical attacking route
publisher Institutional Knowledge at Singapore Management University
publishDate 2021
url https://ink.library.smu.edu.sg/sis_research/7053
https://ink.library.smu.edu.sg/context/sis_research/article/8056/viewcontent/1_s2.0_S0020025520308124_main.pdf
_version_ 1794549716486193152