Vulnerability analysis on noise-injection based hardware attack on deep neural networks
Despite superior accuracy on most vision recognition tasks, deep neural networks are susceptible to adversarial examples. Recent studies show that adding carefully crafted small perturbations to the input layer can mislead a classifier into arbitrary categories. However, most adversarial attack algorithms...
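For context, the "carefully crafted small perturbations" mentioned in the abstract can be illustrated with a minimal fast gradient sign method (FGSM) sketch in PyTorch. This is an assumed, generic input-space example for illustration only; it is not the noise-injection hardware attack analysed in this record, which perturbs the accelerator hardware rather than the input image.

    # Minimal FGSM sketch (assumed illustrative example, not the method of this paper).
    # Perturbs an input batch by a small step epsilon along the sign of the loss
    # gradient so that a correctly classified image may be pushed into another class.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, label, epsilon=0.03):
        """Return an adversarially perturbed copy of the input batch x."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        # Carefully crafted small perturbation: epsilon * sign of the input gradient.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()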
Main Authors: Liu, Wenye; Wang, Si; Chang, Chip-Hong
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2020
Online Access: https://hdl.handle.net/10356/136863
Institution: Nanyang Technological University
Similar Items
- A forward error compensation approach for fault resilient deep neural network accelerator design
  by: Liu, Wenye, et al.
  Published: (2022)
- Stealthy and robust backdoor attack on deep neural networks based on data augmentation
  by: Xu, Chaohui, et al.
  Published: (2024)
- Detecting adversarial examples for deep neural networks via layer directed discriminative noise injection
  by: Wang, Si, et al.
  Published: (2020)
- Analysis of circuit aging on accuracy degradation of deep neural network accelerator
  by: Liu, Wenye, et al.
  Published: (2020)
- An imperceptible data augmentation based blackbox clean-label backdoor attack on deep neural networks
  by: Xu, Chaohui, et al.
  Published: (2024)