A learning and masking approach to secure learning
Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial examples, which are data points cleverly constructed to fool the classifier. Such attacks can be devastating in practice, especially as DNNs are applied to increasingly critical tasks such as image recognition in autonomous driving. In this paper, we introduce a new perspective on the problem. We do so by first defining the robustness of a classifier to adversarial exploitation. Next, we show that the problem of adversarial example generation can be posed as a learning problem. We also categorize attacks in the literature into high- and low-perturbation attacks; well-known attacks like FGSM [11] and our attack produce high-perturbation adversarial examples, while the more potent but computationally inefficient Carlini-Wagner (CW) attack [5] is low perturbation. Next, we show that the dual approach to the attack learning problem can be used as a defensive technique that is effective against high-perturbation attacks. Finally, we show that a classifier masking method, achieved by adding noise to a neural network's logit output, protects against low-distortion attacks such as the CW attack. We also show that our learning and masking defenses can work simultaneously to protect against multiple attacks. We demonstrate the efficacy of our techniques through experiments on the MNIST and CIFAR-10 datasets.
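The abstract contrasts a high-perturbation, FGSM-style attack [11] with a masking defense that randomizes the classifier's logit output. A minimal sketch of both ideas, assuming a PyTorch image classifier, is given below; the names `fgsm_attack` and `masked_logits` and the `eps`/`sigma` values are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): a one-step FGSM-style attack and
# a masking defense that adds noise to the classifier's logits before prediction.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.1):
    """One-step FGSM: move x in the direction of the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # High-perturbation adversarial example: x + eps * sign(grad), kept in [0, 1]
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def masked_logits(model, x, sigma=0.5):
    """Masking defense: perturb the logit output with Gaussian noise so that
    gradient- or score-based attackers see a randomized decision surface."""
    logits = model(x)
    return logits + sigma * torch.randn_like(logits)
```

Usage would be along the lines of `x_adv = fgsm_attack(model, x, y)` followed by `masked_logits(model, x_adv).argmax(dim=1)`; the paper evaluates this kind of attack and defense on MNIST and CIFAR-10.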
Saved in:
Main Authors: | NGUYEN, Linh; WANG, Sky; SINHA, Arunesh |
---|---|
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2018 |
Subjects: | adversarial examples; robust learning; Databases and Information Systems; Software Engineering |
Online Access: | https://ink.library.smu.edu.sg/sis_research/4793 https://ink.library.smu.edu.sg/context/sis_research/article/5796/viewcontent/1709.04447.pdf |
Institution: | Singapore Management University |
id | sg-smu-ink.sis_research-5796 |
---|---|
record_format | dspace |
spelling | sg-smu-ink.sis_research-5796 2020-01-16T10:12:44Z A learning and masking approach to secure learning NGUYEN, Linh; WANG, Sky; SINHA, Arunesh 2018-10-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/4793 info:doi/10.1007/978-3-030-01554-1_26 https://ink.library.smu.edu.sg/context/sis_research/article/5796/viewcontent/1709.04447.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University adversarial examples; robust learning; Databases and Information Systems; Software Engineering |
institution | Singapore Management University |
building | SMU Libraries |
continent | Asia |
country | Singapore |
content_provider | SMU Libraries |
collection | InK@SMU |
language | English |
topic | adversarial examples; robust learning; Databases and Information Systems; Software Engineering |
description | Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial examples, which are data points cleverly constructed to fool the classifier. Such attacks can be devastating in practice, especially as DNNs are applied to increasingly critical tasks such as image recognition in autonomous driving. In this paper, we introduce a new perspective on the problem. We do so by first defining the robustness of a classifier to adversarial exploitation. Next, we show that the problem of adversarial example generation can be posed as a learning problem. We also categorize attacks in the literature into high- and low-perturbation attacks; well-known attacks like FGSM [11] and our attack produce high-perturbation adversarial examples, while the more potent but computationally inefficient Carlini-Wagner (CW) attack [5] is low perturbation. Next, we show that the dual approach to the attack learning problem can be used as a defensive technique that is effective against high-perturbation attacks. Finally, we show that a classifier masking method, achieved by adding noise to a neural network's logit output, protects against low-distortion attacks such as the CW attack. We also show that our learning and masking defenses can work simultaneously to protect against multiple attacks. We demonstrate the efficacy of our techniques through experiments on the MNIST and CIFAR-10 datasets. |
format | text |
author | NGUYEN, Linh; WANG, Sky; SINHA, Arunesh |
author_sort | NGUYEN, Linh |
title | A learning and masking approach to secure learning |
publisher | Institutional Knowledge at Singapore Management University |
publishDate | 2018 |
url | https://ink.library.smu.edu.sg/sis_research/4793 https://ink.library.smu.edu.sg/context/sis_research/article/5796/viewcontent/1709.04447.pdf |
_version_ | 1770575032641126400 |