Vulnerability analysis on noise-injection based hardware attack on deep neural networks
Despite superior accuracy on most vision recognition tasks, deep neural networks are susceptible to adversarial examples. Recent studies show that adding carefully crafted small perturbations to the input layer can mislead a classifier into arbitrary categories. However, most adversarial attack algorith...
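The abstract's premise, that a small perturbation on the input layer can flip a classifier's decision, can be illustrated with a minimal FGSM-style sketch (a standard gradient-sign attack, not the paper's noise-injection hardware attack; the function name, epsilon value, and model interface are illustrative assumptions):

```python
# Minimal FGSM-style sketch of the input-perturbation idea from the
# abstract; model, epsilon, and tensor shapes are illustrative
# assumptions, not details from the paper.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return x plus a small adversarial perturbation on the input layer."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # loss w.r.t. the true labels
    loss.backward()
    # Step in the sign of the input gradient to increase the loss,
    # then clamp back to the valid pixel range [0, 1].
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

With a small epsilon on normalized images, such a perturbation is typically imperceptible to humans yet often changes the predicted class.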
Format: Conference or Workshop Item
Language: English
Published: 2020
Online Access: https://hdl.handle.net/10356/136863
Institution: Nanyang Technological University