Detecting adversarial examples for deep neural networks via layer directed discriminative noise injection
Deep learning is a popular and powerful machine learning solution to computer vision tasks. The most criticized vulnerability of deep learning is its poor tolerance of adversarial images, which are obtained by deliberately adding imperceptibly small perturbations to clean inputs. Such negatives can d...
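The abstract alludes to the standard way such adversarial images are crafted: nudging a clean input in the direction that increases the model's loss. As a generic illustration only (this is not the paper's layer-directed detection method), the sketch below shows the fast gradient sign method (FGSM) in PyTorch; `model`, `x`, `y`, and `epsilon` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft an adversarial image by adding an imperceptibly small,
    sign-of-gradient perturbation to a clean input x with true label y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp the
    # result back to the valid pixel range so it stays a plausible image.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A small `epsilon` keeps the perturbation visually imperceptible while often being enough to flip the classifier's prediction, which is exactly the failure mode the paper's detector targets.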
Main Authors: Wang, Si; Liu, Wenye; Chang, Chip-Hong
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2020
Online Access: https://hdl.handle.net/10356/137128 ; https://doi.org/10.21979/N9/WCIL7X
Institution: Nanyang Technological University
Similar Items
- Targeted universal adversarial examples for remote sensing
  by: Bai, Tao, et al. Published: (2023)
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples
  by: Liwei Song, et al. Published: (2020)
- Fired neuron rate based decision tree for detection of adversarial examples in DNNs
  by: Wang, Si, et al. Published: (2020)
- A new lightweight in-situ adversarial sample detector for edge deep neural network
  by: Wang, Si, et al. Published: (2021)
- Vulnerability analysis on noise-injection based hardware attack on deep neural networks
  by: Liu, Wenye, et al. Published: (2020)