Model extraction attack on Deep Neural Networks

Machine learning models based on Deep Neural Networks (DNNs) have gained popularity due to their promising performance and recent advancements in hardware. Developing a high-performing DNN model requires a vast amount of time and resources; information about such models is therefore kept undisclosed in commercial settings. For an attacker, obtaining the details of such hidden models at low cost would be beneficial in terms of both money and time. In this project, we studied different methods of attacking black-box DNN models and experimented with two of them. The first method develops a substitute model with performance similar to the target model's by using the target model's outputs as training data for the substitute. The second method recovers structural information about the target through a timing side-channel attack. This report covers the theoretical basis of the methods, the details of the implementations, the experimental results, and a discussion of the advantages and shortcomings of each method.
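The first method the project describes, querying the black-box target and using its outputs as training labels for a substitute, can be sketched as follows. This is a minimal, hypothetical illustration rather than the project's actual implementation: the target here is a stand-in linear classifier with secret weights, and all shapes, seeds, and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Black-box target model (internals hidden from the attacker) ---
# Hypothetical stand-in: a linear classifier with secret parameters.
_SECRET_W = np.array([1.5, -2.0])
_SECRET_B = 0.3

def query_target(x):
    """The attacker can only observe the target's output labels."""
    return (x @ _SECRET_W + _SECRET_B > 0).astype(float)

# --- Step 1: query the target on attacker-chosen inputs ---
X = rng.normal(size=(500, 2))
y = query_target(X)          # the target's outputs become training labels

# --- Step 2: train a substitute model on the collected labels ---
w, b = np.zeros(2), 0.0
lr = 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid prediction
    grad_w = X.T @ (p - y) / len(X)          # logistic-loss gradient
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

# --- Step 3: measure substitute/target agreement on fresh inputs ---
X_test = rng.normal(size=(1000, 2))
agreement = np.mean((X_test @ w + b > 0) == query_target(X_test).astype(bool))
print(f"substitute/target agreement: {agreement:.3f}")
```

In a real attack the queries would go to a deployed model's prediction API rather than a local function, and the substitute would typically be a DNN trained on the target's soft-label outputs rather than a logistic regression.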

Bibliographic Details
Main Author: Lkhagvadorj, Dulguun
Other Authors: Chang Chip Hong
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Online Access:https://hdl.handle.net/10356/158375
Institution: Nanyang Technological University
School: School of Electrical and Electronic Engineering
Research Centre: VIRTUS, IC Design Centre of Excellence
Supervisor: Chang Chip Hong (ECHChang@ntu.edu.sg)
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Engineering::Computer science and engineering::Mathematics of computing::Probability and statistics
Degree: Bachelor of Engineering (Information Engineering and Media)
Deposited: 2022-06-03
Citation: Lkhagvadorj, D. (2022). Model extraction attack on Deep Neural Networks. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/158375
Project Code: A2034-211
File Format: application/pdf
Description: Machine learning models based on Deep Neural Networks (DNNs) have gained popularity due to their promising performance and recent advancements in hardware. Developing a high-performing DNN model requires a vast amount of time and resources; information about such models is therefore kept undisclosed in commercial settings. For an attacker, obtaining the details of such hidden models at low cost would be beneficial in terms of both money and time. In this project, we studied different methods of attacking black-box DNN models and experimented with two of them. The first method develops a substitute model with performance similar to the target model's by using the target model's outputs as training data for the substitute. The second method recovers structural information about the target through a timing side-channel attack. This report covers the theoretical basis of the methods, the details of the implementations, the experimental results, and a discussion of the advantages and shortcomings of each method.
Collection: DR-NTU (NTU Library, Nanyang Technological University, Singapore)
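The second method in the description infers structural information from query latency. The underlying idea, that inference time grows with network depth, can be sketched as below. This is an illustrative simulation, not the report's measurement setup: the layer widths, depths, batch size, and timing procedure are all assumptions.

```python
import time
import numpy as np

def make_model(depth, width=256, seed=0):
    """A stack of `depth` dense layers (weights are arbitrary placeholders)."""
    rng = np.random.default_rng(seed)
    return [rng.normal(scale=0.1, size=(width, width)) for _ in range(depth)]

def forward(layers, x):
    for W in layers:
        x = np.maximum(x @ W, 0.0)   # dense layer followed by ReLU
    return x

def time_query(layers, x, repeats=20):
    """Attacker-side measurement: median wall-clock latency of one query."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        forward(layers, x)
        samples.append(time.perf_counter() - t0)
    return sorted(samples)[len(samples) // 2]

x = np.random.default_rng(1).normal(size=(64, 256))
shallow = make_model(depth=2)
deep = make_model(depth=16)

t_shallow = time_query(shallow, x)
t_deep = time_query(deep, x)
print(f"shallow: {t_shallow * 1e3:.2f} ms, deep: {t_deep * 1e3:.2f} ms")
```

A real timing side-channel attack would repeat such measurements against the remote target and compare them with latency profiles of candidate architectures to narrow down the hidden model's depth and structure.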