Reverse engineering deep learning algorithms

Bibliographic Details
Main Author: Muhammad Irfan Bin Norizzam
Other Authors: Lam Siew Kei
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/175058
Institution: Nanyang Technological University
Description
Abstract: Cybersecurity concerns surrounding edge Field-Programmable Gate Array (FPGA) accelerators hosting deep neural network (DNN) architectures have become increasingly prominent because such systems are vulnerable to side-channel attacks (SCAs) that reverse engineer the hosted model. In this report, we present defense mechanisms aimed at thwarting SCAs on edge FPGA accelerators. Our study comprises two key experiments: (1) scheduling measures against hardware-Trojan SCAs and (2) a Dummy Layer Obfuscator. We first made strategic modifications to the VTA Convolution 2D script to introduce variability in the computation order without compromising mathematical equivalence. While this significantly disrupts the attacker's captured data volumes and layer identification, the study highlights a trade-off between security and computational efficiency. Next, we introduced a Dummy Layer Obfuscator that strategically inserts dummy convolutional layers into the DNN architecture to obscure its structure. This approach successfully hinders an attacker's ability to discern critical parameters, albeit with certain limitations on layer placement and type. Our findings underscore the importance of integrating robust security measures into the design of FPGA-based DNN accelerators to safeguard against potential threats and uphold model confidentiality. While the proposed defenses are effective at thwarting side-channel attacks, they incur additional computational overhead; future research should focus on mitigating this overhead while preserving the security benefits of the proposed solutions.
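
To make the first defense concrete, below is a minimal, hypothetical NumPy sketch of computation-order variability for a direct 2-D convolution. It does not reproduce the report's actual VTA Convolution 2D schedule changes; the function name `conv2d_shuffled` and its parameters are illustrative assumptions. The point it demonstrates is that permuting the iteration order over the output space changes the temporal pattern a side channel can observe while leaving the result mathematically identical.

```python
import itertools
import random

import numpy as np


def conv2d_shuffled(x, w):
    """Direct 2-D convolution whose iteration order over the output
    space (channel, row, col) is randomly permuted on every call.

    Illustrative sketch only -- it stands in for the report's VTA
    schedule modifications. Reordering the loop nest alters the
    access pattern visible to a side channel, but each output
    element is computed from the same data, so the result is
    mathematically unchanged.
    """
    co, ci, kh, kw = w.shape            # out-ch, in-ch, kernel h/w
    _, h, wd = x.shape                  # in-ch, height, width
    oh, ow = h - kh + 1, wd - kw + 1    # valid-padding output size
    out = np.zeros((co, oh, ow), dtype=x.dtype)
    coords = list(itertools.product(range(co), range(oh), range(ow)))
    random.shuffle(coords)              # permute the computation order
    for c, i, j in coords:
        out[c, i, j] = np.sum(w[c] * x[:, i:i + kh, j:j + kw])
    return out


x = np.random.rand(3, 8, 8).astype(np.float32)
w = np.random.rand(4, 3, 3, 3).astype(np.float32)
# Two differently ordered executions give the same answer.
assert np.allclose(conv2d_shuffled(x, w), conv2d_shuffled(x, w))
```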
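
Similarly, here is a minimal sketch of the dummy-layer idea, assuming a PyTorch-style model. The `DummyConv2d` class is a hypothetical illustration, not the report's implementation: a convolution initialised with `nn.init.dirac_` is an exact identity map, so the inserted layer changes the architecture an attacker profiles without changing the network's output.

```python
import torch
import torch.nn as nn


class DummyConv2d(nn.Module):
    """Identity-acting dummy convolution (hypothetical illustration).

    Adds a layer to the structure an attacker observes while leaving
    the model's output unchanged.
    """

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2, bias=False)
        nn.init.dirac_(self.conv.weight)         # centre tap = 1 -> identity
        self.conv.weight.requires_grad_(False)   # keep the layer inert

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)


# Splice a dummy layer between two real layers. Note that the dummy's
# in/out channel counts must match its neighbours, echoing the
# report's limitations on layer placement and type.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    DummyConv2d(16),   # extra convolution visible via the side channel
    nn.ReLU(),
)
x = torch.randn(1, 3, 32, 32)
baseline = nn.Sequential(model[0], model[2])  # same model, no dummy
assert torch.allclose(model(x), baseline(x))
```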