Moving target defense for embedded deep visual sensing against adversarial examples

Overview

Bibliographic Details
Main Authors: Song, Qun, Yan, Zhenyu, Tan, Rui
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language: English
Published: 2020
Subjects:
Online Access: https://hdl.handle.net/10356/136723
Institution: Nanyang Technological University
Description
Summary: Deep learning-based visual sensing has achieved attractive accuracy but is shown to be vulnerable to adversarial example attacks. Specifically, once the attackers obtain the deep model, they can construct adversarial examples to mislead the model into yielding wrong classification results. Deployable adversarial examples, such as small stickers pasted on road signs and lanes, have been shown effective in misleading advanced driver-assistance systems. Many existing countermeasures against adversarial examples build their security on the attackers' ignorance of the defense mechanisms. Thus, they fall short of following Kerckhoffs's principle and can be subverted once the attackers know the details of the defense. This paper applies the strategy of moving target defense (MTD) to generate multiple new deep models after system deployment that collaboratively detect and thwart adversarial examples. Our MTD design is based on the adversarial examples' minor transferability across different models. The post-deployment, dynamically generated models significantly raise the bar for successful attacks. We also apply serial data fusion with early stopping to reduce the inference time by a factor of up to 5. Evaluation based on four datasets, including a road sign dataset, and on two GPU-equipped Jetson embedded computing platforms shows the effectiveness of our approach.
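The serial-fusion-with-early-stopping idea described in the summary can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the model stubs, the agreement threshold, and the simple majority-vote rule are all assumptions made for demonstration. The intuition is that clean inputs are classified consistently across the dynamically generated models, so the fusion can stop after a few queries, while adversarial examples transfer poorly and cause disagreement among the replicas.

```python
def serial_fusion(models, x, num_classes, agreement=3):
    """Query models one at a time, stopping early once `agreement`
    models concur on a label. Returns (label, benign_flag): the flag
    is False when the full ensemble never reaches agreement, which
    serves as the adversarial-example detection signal here.
    `agreement=3` is an illustrative threshold, not from the paper."""
    votes = [0] * num_classes
    for model in models:
        label = model(x)
        votes[label] += 1
        if votes[label] >= agreement:       # early stopping on consensus
            return label, True
    # No consensus: report the plurality label but flag the input.
    return max(range(num_classes), key=votes.__getitem__), False

# Toy stand-ins for the post-deployment models: clean inputs are
# classified consistently, adversarial inputs scatter across labels.
clean_models = [lambda x: 1 for _ in range(5)]
adv_models = [lambda x, i=i: i % 4 for i in range(5)]

print(serial_fusion(clean_models, None, 4))  # stops after 3 queries: (1, True)
print(serial_fusion(adv_models, None, 4))    # no consensus: (0, False)
```

With consistent replicas, only 3 of the 5 models are ever queried, which is where the inference-time savings come from; the disagreeing replicas exhaust the ensemble and trigger the detection flag.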