Detection of adversarial attacks via disentangling natural images and perturbations

The vulnerability of deep neural networks to adversarial attacks, i.e., the ease with which imperceptible adversarial perturbations give rise to wrong predictions, poses a serious threat to the security of their real-world deployments. In this paper, a novel Adversarial Detection method via Disentangling N...


Bibliographic Details
Main Authors: Qing, Yuanyuan, Bai, Tao, Liu, Zhuotao, Moulin, Pierre, Wen, Bihan
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2024
Online Access:https://hdl.handle.net/10356/178082
Institution: Nanyang Technological University