Reverse engineering deep learning algorithms

Bibliographic Details
Main Author: Muhammad Irfan Bin Norizzam
Other Authors: Lam Siew Kei
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access:https://hdl.handle.net/10356/175058
Institution: Nanyang Technological University
Description
Summary: Cybersecurity concerns surrounding edge Field Programmable Gate Array (FPGA) accelerators hosting deep neural network (DNN) architectures have become increasingly prominent because such systems are vulnerable to side-channel reverse-engineering attacks (SCAs). In this report, we present innovative defense mechanisms aimed at thwarting SCAs on edge FPGA accelerators. Our study comprises two key experiments: (1) Scheduling Measures against Hardware Trojan SCAs and (2) a Dummy Layer Obfuscator. We first made strategic modifications to the VTA Convolution 2D script to introduce variability in the computation order without compromising mathematical equivalence; this significantly disrupts the volume of data an attacker can capture and their ability to identify layers, but it also exposes a trade-off between security and computational efficiency. Next, we introduced a Dummy Layer Obfuscator, which strategically inserts dummy convolutional layers into the DNN architecture to obscure its structure. This approach successfully hinders an attacker's ability to discern critical parameters, albeit with certain limitations on layer placement and type. Our findings underscore the importance of integrating robust security measures into the design of FPGA-based DNN accelerators to safeguard against potential threats and uphold model confidentiality. While the proposed defenses demonstrate effectiveness in thwarting side-channel attacks, they also incur additional computational overhead; future research should focus on mitigating this overhead while preserving the security benefits of the proposed solutions.
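
The first defense exploits the fact that the loop nest of a 2-D convolution can be traversed in any order without changing the mathematical result. The report applies this to the VTA Convolution 2D schedule; the Python sketch below only illustrates the principle (the function, names, and plain-NumPy setting are our assumptions, not the VTA code). Randomly permuting the loop order leaves the output numerically equivalent while changing the memory-access sequence a side-channel observer would see:

    import itertools
    import random
    import numpy as np

    def conv2d_reordered(x, w, order):
        # Naive 2-D convolution whose four loops (output row/col, kernel
        # row/col) run in a caller-chosen order. The sum is the same for
        # every order; only the access pattern differs.
        H, W = x.shape
        K = w.shape[0]
        out = np.zeros((H - K + 1, W - K + 1))
        ranges = {"oy": range(out.shape[0]), "ox": range(out.shape[1]),
                  "ky": range(K), "kx": range(K)}
        for idx in itertools.product(*(ranges[a] for a in order)):
            v = dict(zip(order, idx))
            out[v["oy"], v["ox"]] += (x[v["oy"] + v["ky"], v["ox"] + v["kx"]]
                                      * w[v["ky"], v["kx"]])
        return out

    x = np.random.rand(8, 8)
    w = np.random.rand(3, 3)
    baseline = conv2d_reordered(x, w, ("oy", "ox", "ky", "kx"))
    # A fresh random loop order per run: same result, different trace.
    order = random.choice(list(itertools.permutations(("oy", "ox", "ky", "kx"))))
    assert np.allclose(baseline, conv2d_reordered(x, w, order))

In a real schedule this variability is not free: some loop orders are far less cache- and DMA-friendly than the tuned default, which is consistent with the efficiency trade-off the report observes.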
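
The Dummy Layer Obfuscator inserts extra convolutional layers so that the observed computation no longer reflects the true architecture. The report does not spell out a single construction here; one plausible way to add such a layer without affecting model accuracy (an assumption of this sketch, not necessarily the report's method) is to initialize it as an identity mapping with PyTorch's nn.init.dirac_:

    import torch
    import torch.nn as nn

    def make_dummy_conv(channels: int) -> nn.Conv2d:
        # A convolution initialized to the identity: it adds an extra layer
        # to the observable computation without changing the model output.
        conv = nn.Conv2d(channels, channels, kernel_size=3,
                         padding=1, bias=False)
        nn.init.dirac_(conv.weight)        # Dirac kernel => identity mapping
        conv.weight.requires_grad_(False)  # keep the layer inert in training
        return conv

    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1),
        make_dummy_conv(16),               # dummy layer inflating the depth
        nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=1),
    )

    x = torch.randn(1, 3, 32, 32)
    ref = nn.Sequential(model[0], model[2], model[3])  # same net, no dummy
    assert torch.allclose(model(x), ref(x))

Because the dummy is an ordinary convolution, it appears in the accelerator's compute activity like a genuine layer, which is what misleads the attacker; as the report notes, placement and layer type are constrained, and each extra convolution adds to the runtime overhead.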