Data-free generative model stealing – an experimental study

A model stealing attack duplicates the functionality of a deep learning model, which can cause social or economic harm to the model owner or enable further attacks. Generative Artificial Intelligence is becoming increasingly popular and influential, yet compared with classification models and image translation models, there is little research on the stealing and protection of image generative models. This report investigates whether the functionality of a black-box deep learning generative model can be stolen without access to its private training data, an attack referred to as "Data-Free Generative Model Stealing". Through research, experiments and quantitative comparisons, we successfully implemented stealing with a Generative Adversarial Network and a Diffusion Model in the image domain of MNIST handwritten digits, giving a deeper understanding of the effectiveness and cost factors of generative model stealing attacks. Stronger surrogate models and simpler image domains achieved better stealing results, and suitable image augmentation methods improved them further. Discussions of the impact of dataset size and manual cleaning indicated the low cost of stealing attacks. These findings are expected to inform future studies on the analysis and protection of deep learning generative models.
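
The attack the abstract describes can be illustrated with a minimal sketch: the attacker repeatedly queries the black-box victim generator for samples and trains a surrogate Generative Adversarial Network on those samples alone, never touching the victim's private training data. The snippet below is an illustrative PyTorch sketch under those assumptions, not code from the report; query_victim, the tiny network architectures, and all hyperparameters are hypothetical placeholders.

# Minimal sketch of data-free generative model stealing with a surrogate GAN.
# Hypothetical illustration: query_victim stands in for the victim's public API.
import torch
import torch.nn as nn

LATENT_DIM = 64
IMG_DIM = 28 * 28  # MNIST-sized images, flattened

def query_victim(batch_size: int) -> torch.Tensor:
    """Stand-in for the black-box victim generator: returns a batch of images.
    In a real attack this would call the victim model's public endpoint."""
    return torch.rand(batch_size, IMG_DIM)  # placeholder samples in [0, 1]

# Surrogate generator and discriminator (deliberately small for illustration).
G = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                  nn.Linear(256, IMG_DIM), nn.Sigmoid())
D = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):          # training budget = number of victim queries
    real = query_victim(128)      # "real" data is whatever the victim emits
    z = torch.randn(128, LATENT_DIM)
    fake = G(z)

    # Discriminator: victim samples labelled 1, surrogate samples labelled 0.
    loss_d = bce(D(real), torch.ones(128, 1)) + \
             bce(D(fake.detach()), torch.zeros(128, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator into labelling surrogate samples as 1.
    loss_g = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()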

Bibliographic Details
Main Author: Mao, Ruoyi
Other Authors: Lin Zhiping
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects: Computer and Information Science
Online Access: https://hdl.handle.net/10356/176957
Record Details
Record ID: sg-ntu-dr.10356-176957
Record Format: dspace
School: School of Electrical and Electronic Engineering
Research Institute: Institute for Infocomm Research (I2R)
Supervisor Contact: EZPLin@ntu.edu.sg
Subject: Computer and Information Science
Degree: Bachelor's degree
Date Issued: 2024
Date Available: 2024-05-23
Citation: Mao, R. (2024). Data-free generative model stealing – an experimental study. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/176957
File Format: application/pdf
Publisher: Nanyang Technological University
Institution: Nanyang Technological University
Building: NTU Library
Content Provider: NTU Library
Collection: DR-NTU
Country: Singapore
Language: English