Fired neuron rate based decision tree for detection of adversarial examples in DNNs
Deep neural networks (DNNs) are a prevalent machine learning solution to computer vision problems. The most criticized vulnerability of deep learning is its susceptibility to adversarial images crafted by maliciously adding infinitesimal distortions to benign inputs. Such negatives can fool a...
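The record's title names the detection approach: per-layer fired-neuron rates fed to a decision tree. As a rough illustrative sketch only (not the authors' implementation), the code below computes the fraction of ReLU neurons that fire in each layer of a toy random-weight network and trains a scikit-learn decision tree to separate benign inputs from perturbed ones. The toy network, the synthetic perturbations, and all parameter choices are assumptions made for illustration.

    # Minimal sketch, assuming a toy NumPy ReLU network; a real detector
    # would hook the activations of a trained DNN instead.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    # Toy 3-layer ReLU network with random weights (stand-in for a trained DNN).
    weights = [rng.standard_normal((64, 128)),
               rng.standard_normal((128, 128)),
               rng.standard_normal((128, 10))]

    def fired_neuron_rates(x):
        """Fraction of neurons with positive (fired) activation in each layer."""
        rates, h = [], x
        for w in weights:
            h = np.maximum(h @ w, 0.0)       # ReLU activation
            rates.append(np.mean(h > 0.0))   # fired-neuron rate of this layer
        return rates

    # Hypothetical data: benign inputs vs. inputs with small added distortions.
    benign = rng.standard_normal((200, 64))
    adversarial = benign + 0.5 * rng.standard_normal(benign.shape)

    X = np.array([fired_neuron_rates(x) for x in np.vstack([benign, adversarial])])
    y = np.array([0] * len(benign) + [1] * len(adversarial))  # 0 = benign, 1 = adversarial

    detector = DecisionTreeClassifier(max_depth=4).fit(X, y)
    print("training accuracy:", detector.score(X, y))

The feature vector here is just one rate per layer, so the decision tree stays shallow and cheap to evaluate at inference time, which is the appeal of activation-statistics detectors in general.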
Main Authors: Wang, Si; Liu, Wenye; Chang, Chip-Hong
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2020
Online Access: https://hdl.handle.net/10356/144346
https://doi.org/10.21979/N9/YPY0EB
Institution: Nanyang Technological University
Similar Items
- Detecting adversarial examples for deep neural networks via layer directed discriminative noise injection
  by: Wang, Si, et al.
  Published: (2020)
- Targeted universal adversarial examples for remote sensing
  by: Bai, Tao, et al.
  Published: (2023)
- A new lightweight in-situ adversarial sample detector for edge deep neural network
  by: Wang, Si, et al.
  Published: (2021)
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples
  by: Liwei Song, et al.
  Published: (2020)
- Attack as defense: Characterizing adversarial examples using robustness
  by: ZHAO, Zhe, et al.
  Published: (2021)