Fault-injection based attacks and countermeasure on deep neural network accelerators

Bibliographic Details
Main Author: Liu, Wenye
Other Authors: Chang, Chip Hong
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2021
Online Access: https://hdl.handle.net/10356/152080
Institution: Nanyang Technological University
Description
Summary: The rapid development of deep learning accelerators has unlocked new applications that require local inference on edge devices. However, this push toward edge intelligence also invites new hardware-oriented attacks, which differ from, and have a more severe impact than, the well-known adversarial examples. Existing hardware-based attacks on DNNs focus on model interpolation, and many of them are limited to general-purpose processor implementations or DNN accelerators for small-scale applications. Hardware-oriented attacks can directly intervene in the internal computations of the inference machine without modifying the target inputs. This extra degree of manipulability opens more room for research into the security threats, attack surfaces and countermeasures of modern DNN accelerators. This thesis investigates new practical and robust hardware attacks and fault recovery on large-scale applications and real-world object classification scenarios of DNN accelerators, and presents error-resilient DNN designs.
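
The abstract contrasts hardware fault injection, which perturbs the accelerator's internal computation, with adversarial examples, which perturb the input. The following minimal Python sketch illustrates that distinction with purely hypothetical values (a toy two-class linear model); it is not code from the thesis.

    import numpy as np

    # Toy two-class linear model; all values are hypothetical.
    x = np.array([1.0, 1.0], dtype=np.float32)             # fixed input, never modified
    W = np.array([[0.6, 0.4],                               # class-0 weights -> logit 1.0
                  [0.3, 0.3]], dtype=np.float32)            # class-1 weights -> logit 0.6

    def predict(weights, inp):
        return int(np.argmax(weights @ inp))                # class with the largest logit

    print("clean prediction :", predict(W, x))              # 0

    # Assumed fault model: flip the most significant exponent bit of one stored
    # weight word, emulating a fault in the accelerator's weight memory or datapath.
    W_faulty = W.copy()
    W_faulty.view(np.uint32)[1, 0] ^= np.uint32(1 << 30)    # 0.3 becomes ~1e38

    print("faulty prediction:", predict(W_faulty, x))       # 1: decision flips, input untouched

In a real accelerator such a fault would be induced physically rather than in software; the sketch only mirrors its effect on the stored model.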