Protecting deep learning algorithms from model theft

The rise of Deep Neural Network (DNN) architectures deployed on edge Field Programmable Gate Arrays (FPGAs) has introduced new security challenges: attackers can potentially reverse-engineer deployed models, compromising their confidentiality and integrity. In this report, we present a defence mechanism aimed at protecting DNNs deployed on edge devices against adversarial attacks. Although the initial goal was to address Side-Channel Attacks (SCAs), the current implementation effectively safeguards against memory confidentiality and integrity attacks. Our work focuses on the integration of a Memory Integrity Tree (MIT) within the Versatile Tensor Accelerator (VTA) to secure memory accesses and detect unauthorized modifications during DNN execution. Key modifications were made to the VTA's runtime code, specifically the LoadBuffer2D and StoreBuffer2D functions, to enforce memory integrity checks through a Binary Merkle Tree. This structure ensures that each memory block is hashed and verified, maintaining a secure execution environment. The implemented defences were evaluated in terms of performance overhead. The MIT effectively prevents memory attacks, such as replay attacks, by detecting tampering attempts, and it protects the DNN model hyperparameters; however, the cryptographic hash calculations introduce a significant performance cost. Our findings highlight the trade-offs between security and computational efficiency, emphasising the importance of continued refinement to minimize overhead while preserving robust protection. This project demonstrates the viability of enhancing security for FPGA-based DNN accelerators through memory integrity checks. Future research should explore optimizations to reduce performance overhead and extend protections to side-channel attacks.
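The record carries only the abstract, but the core mechanism it describes, a Binary Merkle Tree guarding fixed-size memory blocks that is updated on every store and checked on every load, can be sketched in a few lines of C++. This is a hypothetical, self-contained illustration rather than the report's implementation: the MerkleTree class, the OnLoad/OnStore hooks, and the FNV-1a placeholder hash are all assumptions standing in for the cryptographic hash and the modified LoadBuffer2D/StoreBuffer2D functions the abstract mentions.

```cpp
// Minimal sketch of a Binary Merkle Tree over fixed-size memory blocks.
// Hypothetical illustration only: the report's actual MIT hooks into VTA's
// LoadBuffer2D/StoreBuffer2D and would use a cryptographic hash; FNV-1a
// below is a stand-in so the sketch compiles without dependencies.
#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <vector>

using Hash = uint64_t;

// FNV-1a over an arbitrary byte range (placeholder for, e.g., SHA-256).
static Hash HashBytes(const void* data, size_t len) {
    const auto* p = static_cast<const uint8_t*>(data);
    Hash h = 1469598103934665603ULL;
    for (size_t i = 0; i < len; ++i) { h ^= p[i]; h *= 1099511628211ULL; }
    return h;
}

// Hash of two child hashes, forming the parent node.
static Hash Combine(Hash l, Hash r) {
    Hash pair[2] = {l, r};
    return HashBytes(pair, sizeof(pair));
}

class MerkleTree {
public:
    // Tree stored as a flat array: root at index 1, leaves at
    // [num_blocks, 2 * num_blocks). num_blocks must be a power of two.
    MerkleTree(size_t num_blocks, size_t block_size)
        : n_(num_blocks), block_size_(block_size), nodes_(2 * num_blocks, 0) {
        if (num_blocks == 0 || (num_blocks & (num_blocks - 1)) != 0)
            throw std::invalid_argument("num_blocks must be a power of two");
    }

    // On every store: rehash the leaf, then update ancestors up to the root.
    void OnStore(size_t block_idx, const void* block) {
        size_t i = n_ + block_idx;
        nodes_[i] = HashBytes(block, block_size_);
        for (i /= 2; i >= 1; i /= 2)
            nodes_[i] = Combine(nodes_[2 * i], nodes_[2 * i + 1]);
    }

    // On every load: verify the leaf hash, then walk the path to the root.
    // In a real MIT only the root is trusted on-chip, so this walk is what
    // catches tampering with interior nodes held in untrusted memory.
    void OnLoad(size_t block_idx, const void* block) const {
        size_t i = n_ + block_idx;
        if (nodes_[i] != HashBytes(block, block_size_))
            throw std::runtime_error("memory block tampered");
        for (i /= 2; i >= 1; i /= 2)
            if (nodes_[i] != Combine(nodes_[2 * i], nodes_[2 * i + 1]))
                throw std::runtime_error("integrity path tampered");
    }

private:
    size_t n_, block_size_;
    std::vector<Hash> nodes_;  // nodes_[1] is the root.
};
```

The sketch also makes the reported trade-off concrete: every buffer access now pays O(log N) hash computations on the path between its leaf and the root, which is exactly where a cryptographic hash would introduce the significant performance overhead the report measures. A replay attack, writing back a stale but correctly hashed block, is caught because the stale leaf no longer combines into the current root.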

Bibliographic Details
Main Author: Pang, Song Chen
Other Authors: Lam Siew Kei (College of Computing and Data Science)
Format: Final Year Project (FYP), Bachelor's degree
Language: English
Published: Nanyang Technological University, 2024
Subjects: Computer and Information Science; Deep neural networks
Online Access: https://hdl.handle.net/10356/181174
Citation: Pang, S. C. (2024). Protecting deep learning algorithms from model theft. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/181174