Using model optimization as countermeasure against model recovery attacks


Bibliographic Details
Main Authors: Jap, Dirmanto, Bhasin, Shivam
Other Authors: Applied Cryptography and Network Security Workshops (ACNS 2023)
Format: Conference or Workshop Item
Language:English
Published: 2024
Subjects:
Online Access:https://hdl.handle.net/10356/173621
Institution: Nanyang Technological University
Description
Summary:Machine learning (ML) and deep learning (DL) have been widely studied and adopted for applications across many fields. There is a growing demand for ML implementations and ML accelerators on small devices for Internet-of-Things (IoT) applications. These accelerators typically enable efficient edge-based inference using pre-trained deep neural network models in IoT settings: the model is first trained on a more powerful machine and then deployed on the edge device for inference. However, several attacks have been reported that can recover and steal the pre-trained model. For example, a recently reported attack on an edge-based machine learning accelerator demonstrated recovery of the target neural network model (architecture and weights) using a cold-boot attack. With this information, the adversary can reconstruct the model, albeit with some errors due to data corruption during the recovery process. This indicates a potential vulnerability in the deployment of ML/DL models on edge devices for IoT applications. In this work, we investigate generic countermeasures against model recovery attacks based on neural network (NN) model optimization techniques such as quantization, binarization, and pruning. We first study the performance improvement these transformations offer and how they can help mitigate the model recovery process. Our experimental results show that model optimization methods, in addition to achieving better performance, amplify the accuracy degradation of the recovered model, which helps mitigate model recovery attacks.
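To make the optimization techniques named in the abstract concrete, the sketch below shows two of them, magnitude pruning and uniform weight quantization, applied to a small weight matrix with NumPy. The function names, sparsity level, and bit width are illustrative assumptions for this record, not the authors' actual implementation or parameters.

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights (illustrative)."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    # Threshold at the k-th smallest magnitude; keep everything at or above it.
    threshold = np.sort(flat)[k] if k < flat.size else np.inf
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

def quantize_uniform(weights, bits=4):
    """Symmetric uniform quantization to the given bit width (illustrative)."""
    scale = np.max(np.abs(weights)) / (2 ** (bits - 1) - 1)
    q = np.round(weights / scale)          # integer levels in [-(2^(b-1)-1), 2^(b-1)-1]
    return q * scale                       # dequantized representation

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))            # toy stand-in for a layer's weights

w_pruned = prune_by_magnitude(w, sparsity=0.5)
w_quant = quantize_uniform(w, bits=4)

print("nonzeros after pruning:", np.count_nonzero(w_pruned))
print("distinct quantized values:", np.unique(w_quant).size)
```

The intuition relevant to the attack scenario is that such transformed weights carry less redundancy per stored bit, so errors introduced during a corrupting recovery process (as in the cold-boot attack the abstract describes) translate into larger accuracy loss for the adversary's reconstructed model.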