Fault-injection based attacks and countermeasures on deep neural network accelerators

Bibliographic Details
Main Author: Liu, Wenye
Other Authors: Chang Chip Hong
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2021
Subjects:
Online Access: https://hdl.handle.net/10356/152080
Description
Abstract: The rapid development of deep learning accelerators has unlocked new applications that require local inference on edge devices. However, this trend towards edge intelligence also invites new hardware-oriented attacks, which differ from and have a more dreadful impact than the well-known adversarial examples. Existing hardware-based attacks on deep neural networks (DNNs) focus on model interpolation, and many of them are limited to general-purpose processor instances or DNN accelerators for small-scale applications. Hardware-oriented attacks can directly interfere with the internal computations of the inference machine without the need to modify the target inputs. This extra degree of manipulability leaves more room for exploring the security threats, attack surfaces and countermeasures of modern DNN accelerators. In this thesis, new practical and robust hardware attacks and fault recovery for large-scale applications and real-world object classification scenarios on DNN accelerators are investigated, and error-resilient DNN designs are presented.
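
To make the premise of such attacks concrete, the sketch below is an illustration only, not the specific attack, accelerator or model studied in the thesis; all names and values are hypothetical. It uses Python/NumPy to flip a single bit of an int8-quantized weight held in memory and shows how the affected neuron's output drifts even though the input itself is never modified.

import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected neuron: int8 activations and int8 weights with a fixed
# dequantization scale, as in typical quantized inference on an accelerator.
x = rng.integers(-128, 128, size=16, dtype=np.int8)   # input activations (never modified)
w = rng.integers(-128, 128, size=16, dtype=np.int8)   # weights held in on-chip memory
scale = 0.05                                          # hypothetical dequantization scale

def neuron_output(weights):
    # 32-bit accumulation followed by dequantization.
    acc = np.dot(weights.astype(np.int32), x.astype(np.int32))
    return scale * acc

def inject_bit_flip(weights, index, bit):
    # Emulate a hardware fault: flip a single bit of one stored weight.
    faulty = weights.copy()
    raw = faulty.view(np.uint8)        # reinterpret the int8 storage as raw bytes
    raw[index] ^= np.uint8(1 << bit)   # flip the chosen bit in place
    return faulty

clean = neuron_output(w)
faulty = neuron_output(inject_bit_flip(w, index=3, bit=7))  # flip the top bit of one weight

print(f"clean neuron output : {clean:.3f}")
print(f"faulty neuron output: {faulty:.3f}")

In such a toy setting, flipping a high-order bit of a weight generally perturbs the output far more than flipping a low-order one, which is why bit-level fault attacks tend to target the most significant bits.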