Moving target defense for embedded deep visual sensing against adversarial examples
Deep learning-based visual sensing has achieved attractive accuracy but has been shown to be vulnerable to adversarial example attacks. Specifically, once attackers obtain the deep model, they can construct adversarial examples that mislead the model into yielding wrong classification results. Deployable adversari...
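The attack the abstract describes can be illustrated with a minimal white-box sketch: once the attacker knows the model's parameters, a gradient-sign perturbation (in the style of FGSM) can flip its prediction. The toy linear classifier, its weights, and the epsilon value below are all illustrative assumptions, not details from the paper.

```python
import numpy as np

# Toy white-box target: a linear classifier, score = w . x + b,
# predicting class 1 when the score is positive. The weights are
# hypothetical values chosen only for illustration.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

def fgsm(x, eps):
    """Gradient-sign perturbation: for a linear model, the gradient
    of the score w.r.t. the input is simply w, so stepping by
    eps * sign(gradient) against the current class flips the score."""
    grad = w
    direction = -1.0 if predict(x) == 1 else 1.0
    return x + direction * eps * np.sign(grad)

x = np.array([2.0, 0.5, 0.0])   # score = 1.1 -> class 1
x_adv = fgsm(x, eps=0.6)        # score = -1.0 -> class 0
print(predict(x), predict(x_adv))  # the small perturbation flips the label
```

A moving target defense, as proposed in the record's title, counters exactly this threat model: if the deployed model's parameters keep changing, the attacker's knowledge of `w` becomes stale and the crafted perturbation no longer transfers reliably.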
Main Authors: Song, Qun; Yan, Zhenyu; Tan, Rui
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language: English
Published: 2020
Online Access: https://hdl.handle.net/10356/136723
Institution: Nanyang Technological University
Similar Items
- Attack as defense: Characterizing adversarial examples using robustness
  by: ZHAO, Zhe, et al.
  Published: (2021)
- Defense on unrestricted adversarial examples
  by: Sim, Chee Xian
  Published: (2023)
- Targeted universal adversarial examples for remote sensing
  by: Bai, Tao, et al.
  Published: (2023)
- Improving security of autonomous cyber-physical systems against adversarial examples
  by: Song, Qun
  Published: (2022)
- Adversarial attacks and defenses for visual signals
  by: Cheng, Yupeng
  Published: (2023)