Targeted universal adversarial examples for remote sensing
Main Authors:
Other Authors:
Format: Article
Language: English
Published: 2023
Subjects:
Online Access: https://hdl.handle.net/10356/165409
Summary: Researchers are focusing on the vulnerabilities of deep learning models for remote sensing, and various attack methods have been proposed, including universal adversarial examples. Existing universal adversarial examples, however, are designed only to fool deep learning models rather than to pursue a specific goal, i.e., targeted attacks. To this end, we propose two variants of universal adversarial examples: targeted universal adversarial examples and source-targeted universal adversarial examples. Extensive experiments on three popular datasets showed the strong attack capability of the two targeted variants. We hope such strong attacks can inspire and motivate research on defenses against adversarial examples in remote sensing.
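To make the idea of a *targeted* universal perturbation concrete, the sketch below shows one common way such a perturbation could be optimized: a single additive perturbation is trained over many images so that the classifier is pushed toward one chosen target label. This is only an illustrative sketch, not the authors' method; the model, loader, loss, input size, and hyperparameters (`epsilon`, `alpha`, `target_class`) are assumptions.

```python
# Illustrative sketch of a targeted universal adversarial perturbation (UAP)
# with PyTorch. Not the paper's actual algorithm; all names and settings here
# are assumptions for demonstration.
import torch
import torch.nn.functional as F


def targeted_uap(model, loader, target_class,
                 epsilon=8 / 255, alpha=1 / 255, epochs=5, device="cpu"):
    """Learn one perturbation that steers many inputs toward a single target class."""
    model.eval().to(device)
    # Assumed input size of 3x224x224; the single delta is broadcast over the batch.
    delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)

    for _ in range(epochs):
        for images, _ in loader:
            images = images.to(device)
            target = torch.full((images.size(0),), target_class,
                                device=device, dtype=torch.long)

            # Classify the perturbed (and clipped) images.
            logits = model(torch.clamp(images + delta, 0.0, 1.0))
            # Targeted objective: minimize cross-entropy w.r.t. the target label.
            loss = F.cross_entropy(logits, target)
            loss.backward()

            with torch.no_grad():
                # Gradient descent on the targeted loss, then project delta
                # back into the L_inf ball of radius epsilon.
                delta -= alpha * delta.grad.sign()
                delta.clamp_(-epsilon, epsilon)
            delta.grad.zero_()

    return delta.detach()
```

A source-targeted variant, in the same spirit, would restrict the training batches to images of a chosen source class, so that only inputs from that class are steered to the target label while others are left comparatively unaffected.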