Moving target defense for embedded deep visual sensing against adversarial examples
Deep learning-based visual sensing has achieved attractive accuracy but has been shown to be vulnerable to adversarial example attacks. Specifically, once attackers obtain the deep model, they can construct adversarial examples that mislead the model into yielding wrong classification results. Deployable adversarial examples, such as small stickers pasted on road signs and lanes, have been shown effective in misleading advanced driver-assistance systems. Many existing countermeasures against adversarial examples build their security on the attackers' ignorance of the defense mechanisms. Thus, they fall short of following Kerckhoffs's principle and can be subverted once the attackers know the details of the defense. This paper applies the strategy of moving target defense (MTD) to generate multiple new deep models after system deployment that collaboratively detect and thwart adversarial examples. Our MTD design is based on the adversarial examples' limited transferability across different models: the dynamically generated post-deployment models significantly raise the bar for a successful attack. We also apply serial data fusion with early stopping to reduce the inference time by a factor of up to 5. Evaluation on four datasets, including a road sign dataset, and on two GPU-equipped Jetson embedded computing platforms shows the effectiveness of our approach.
Saved in:
Main Authors: Song, Qun; Yan, Zhenyu; Tan, Rui
Other Authors: School of Computer Science and Engineering; Interdisciplinary Graduate School (IGS); Energy Research Institute @ NTU (ERI@N)
Format: Conference or Workshop Item
Language: English
Published: 2020
Subjects: Engineering::Computer science and engineering; Software and Application Security; Neural Networks
Online Access: https://hdl.handle.net/10356/136723
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-136723
record_format: dspace
spelling: sg-ntu-dr.10356-136723, last modified 2021-01-08T02:31:44Z. Moving target defense for embedded deep visual sensing against adversarial examples. Song, Qun; Yan, Zhenyu; Tan, Rui. School of Computer Science and Engineering; Interdisciplinary Graduate School (IGS); Energy Research Institute @ NTU (ERI@N). The 17th ACM Conference on Embedded Networked Sensor Systems (SenSys 2019). Subjects: Engineering::Computer science and engineering; Software and Application Security; Neural Networks. Accepted version; deposited 2020-01-14T02:57:17Z; issued 2019; Conference Paper. Citation: Song, Q., Yan, Z., & Tan, R. (2019). Moving target defense for embedded deep visual sensing against adversarial examples. The 17th ACM Conference on Embedded Networked Sensor Systems (SenSys 2019). https://hdl.handle.net/10356/136723. Language: en. © 2019 Association for Computing Machinery (ACM). All rights reserved. This paper was published in The 17th ACM Conference on Embedded Networked Sensor Systems (SenSys 2019) and is made available with permission of Association for Computing Machinery (ACM). 14 p., application/pdf.
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Engineering::Computer science and engineering; Software and Application Security; Neural Networks
description: Deep learning-based visual sensing has achieved attractive accuracy but has been shown to be vulnerable to adversarial example attacks. Specifically, once attackers obtain the deep model, they can construct adversarial examples that mislead the model into yielding wrong classification results. Deployable adversarial examples, such as small stickers pasted on road signs and lanes, have been shown effective in misleading advanced driver-assistance systems. Many existing countermeasures against adversarial examples build their security on the attackers' ignorance of the defense mechanisms. Thus, they fall short of following Kerckhoffs's principle and can be subverted once the attackers know the details of the defense. This paper applies the strategy of moving target defense (MTD) to generate multiple new deep models after system deployment that collaboratively detect and thwart adversarial examples. Our MTD design is based on the adversarial examples' limited transferability across different models: the dynamically generated post-deployment models significantly raise the bar for a successful attack. We also apply serial data fusion with early stopping to reduce the inference time by a factor of up to 5. Evaluation on four datasets, including a road sign dataset, and on two GPU-equipped Jetson embedded computing platforms shows the effectiveness of our approach.
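The defense described in the abstract — querying an ensemble of dynamically generated models one at a time and stopping early once enough of them agree — can be sketched roughly as follows. This is a minimal illustration under assumptions, not the authors' implementation: the `predict` callback, the `agreement` threshold, and the treatment of ensemble disagreement as attack detection are simplifications of the paper's scheme.

```python
import random

def mtd_infer(models, x, agreement=3, predict=None):
    """Serially query the ensemble in random order, stopping early once
    `agreement` models vote for the same label (serial fusion with early
    stopping).

    models    -- list of classifiers (hypothetical predict(model, x) -> label API)
    x         -- input sample to classify
    agreement -- number of matching votes needed to accept a label

    Returns (label, n_queried). A label of None means no class reached the
    agreement threshold; because adversarial examples transfer poorly across
    the dynamically generated models, such disagreement is treated as a
    detected adversarial example.
    """
    votes = {}
    order = random.sample(models, len(models))   # moving target: random query order
    for n, model in enumerate(order, start=1):
        label = predict(model, x)
        votes[label] = votes.get(label, 0) + 1
        if votes[label] >= agreement:            # early stopping: consensus reached
            return label, n
    return None, len(order)                      # persistent disagreement: flag input
```

On a clean input the models largely agree, so inference stops after only `agreement` queries instead of running the whole ensemble — the intuition behind the up-to-5x inference-time reduction reported in the abstract. On an adversarial input, limited transferability makes the models disagree, no label reaches the threshold, and the input is flagged.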