Detection of adversarial attacks via disentangling natural images and perturbations
The vulnerability of deep neural networks to adversarial attacks, i.e., imperceptible adversarial perturbations that can easily induce wrong predictions, poses a serious threat to the security of their real-world deployments. In this paper, a novel Adversarial Detection method via Disentangling Natural images and perturbations...
Main Authors: Qing, Yuanyuan; Bai, Tao; Liu, Zhuotao; Moulin, Pierre; Wen, Bihan
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/178082
Institution: Nanyang Technological University
Similar Items
- Targeted universal adversarial examples for remote sensing
  by: Bai, Tao, et al.
  Published: (2023)
- Challenges and countermeasures for adversarial attacks on deep reinforcement learning
  by: Ilahi, Inaam, et al.
  Published: (2022)
- Attack as defense: Characterizing adversarial examples using robustness
  by: ZHAO, Zhe, et al.
  Published: (2021)
- Adversarial attack defenses for neural networks
  by: Puah, Yi Hao
  Published: (2024)
- Robust data-driven adversarial false data injection attack detection method with deep Q-network in power systems
  by: Ran, Xiaohong, et al.
  Published: (2024)