Fault-injection based attacks and countermeasure on deep neural network accelerators


Bibliographic Details
Main Author: Liu, Wenye
Other Authors: Chang Chip Hong
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University 2021
Subjects:
Online Access: https://hdl.handle.net/10356/152080
Description
Summary: The rapid development of deep learning accelerators has unlocked new applications that require local inference at edge devices. However, this trend toward edge intelligence also invites new hardware-oriented attacks, which differ from, and have a more dreadful impact than, the well-known adversarial examples. Existing hardware-based attacks on DNNs focus on model interpolation, and many of them are limited to general-purpose processor instances or DNN accelerators for small-scale applications. Hardware-oriented attacks can directly intervene in the internal computations of the inference machine without the need to modify the target inputs. This extra degree of manipulability opens more room for research exploration of the security threats, attack surfaces, and countermeasures on modern DNN accelerators. This thesis investigates new practical and robust hardware attacks and fault recovery for large-scale applications and real-world object classification scenarios on DNN accelerators, and presents error-resilient DNN designs.
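To illustrate the kind of manipulation the summary describes (a sketch only, not the thesis's specific attack method): a fault injected into accelerator memory can flip a single bit of a stored weight, corrupting the inference without any change to the input. The snippet below, using NumPy and IEEE-754 float32 encoding, shows how one bit flip in an exponent bit can turn a small weight into an enormous one.

```python
import numpy as np

def flip_bit(weight: float, bit: int) -> np.float32:
    """Flip one bit of a weight's IEEE-754 float32 representation,
    emulating a single-bit hardware fault in weight storage."""
    raw = np.float32(weight).view(np.uint32)      # reinterpret bits as integer
    faulty = raw ^ np.uint32(1 << bit)            # flip the chosen bit
    return faulty.view(np.float32)                # reinterpret back as float

w = np.float32(0.5)
# Flipping the most significant exponent bit (bit 30) of 0.5
# (0x3F000000 -> 0x7F000000) yields roughly 1.7e38, a value large
# enough to dominate any neuron's accumulated sum.
w_faulty = flip_bit(w, 30)
print(w, "->", w_faulty)
```

The weight and bit position here are arbitrary illustrative choices; real fault-injection attacks must target specific, high-impact bits, which is part of what makes them practical.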