Model extraction attack on Deep Neural Networks


Full Description

Bibliographic Details
Main Author: Lkhagvadorj, Dulguun
Other Authors: Chang Chip Hong
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online Access: https://hdl.handle.net/10356/158375
Institution: Nanyang Technological University

Physical Description
Summary: Machine learning models based on Deep Neural Networks (DNN) have gained popularity due to their promising performance and recent advancements in hardware. Developing a high-performing DNN model requires a massive amount of time and resources; therefore, information about such models is kept undisclosed in commercial settings. Hence, for an attacker, obtaining the details of such hidden models at low cost would be beneficial both financially and in time saved. In this project, we studied different methods of attacking black-box DNN models and experimented with two of them. The first method develops a substitute model with performance similar to the target model's by using the target model's outputs as training data for the substitute. The second method extracts structural information about the target through a timing side-channel attack. This report includes the theoretical basis of the methods, implementation details, experimental results, and a discussion of the advantages and shortcomings of each method.
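The first method described in the summary can be sketched as a query-then-train loop: the attacker treats the target as an oracle, collects input/output pairs from it, and fits a substitute on those pairs. The sketch below is illustrative only and not the project's actual implementation; the linear "black-box" target, the query budget, and the softmax-regression substitute are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the black-box target model: the attacker may
# query it for labels but cannot inspect its internal weights.
_W_secret = rng.normal(size=(4, 2))

def query_target(x):
    # Returns hard labels only, as a deployed black-box classifier would.
    return np.argmax(x @ _W_secret, axis=1)

# Step 1: query the target on attacker-chosen inputs to build a
# substitute training set from the target's outputs.
X = rng.normal(size=(2000, 4))
y = query_target(X)

# Step 2: train the substitute (here, softmax regression fitted by
# batch gradient descent) on the (input, target-output) pairs.
W = np.zeros((4, 2))
for _ in range(500):
    scores = X @ W
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(scores)
    p /= p.sum(axis=1, keepdims=True)
    grad = X.T @ (p - np.eye(2)[y]) / len(X)
    W -= 0.1 * grad

# Step 3: evaluate how often the substitute agrees with the target on
# fresh queries -- the usual success metric for extraction attacks.
X_test = rng.normal(size=(500, 4))
agreement = np.mean(np.argmax(X_test @ W, axis=1) == query_target(X_test))
```

In this toy setting the substitute's decision rule matches the target's closely because both are linear; for a real DNN target, the substitute architecture and the choice of query inputs matter far more.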