Gradient inversion-based inference attack against federated learning
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Online Access: https://hdl.handle.net/10356/172760
Institution: Nanyang Technological University
Summary: Federated learning is a state-of-the-art paradigm in which server-side deep learning models can be trained without direct access to private training data. Instead of raw data, clients send gradients to the server, which uses them to update the shared model. However, the transferred gradients can leak the private training data to the server, which is a concern in many real-life applications such as medical image classification. Recovering training data from shared gradients in this way is known as a gradient inversion attack.
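To make the mechanism concrete, the following is a minimal sketch of gradient inversion in PyTorch: the attacker optimizes a dummy image so that its gradient matches the gradient received from a client. The toy CNN, input size, optimizer settings, and the assumption that the label is already known are illustrative choices for this sketch, not the project's actual configuration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# The federated model under attack (a toy CNN; an illustrative assumption).
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 32 * 32, 10),
)
criterion = nn.CrossEntropyLoss()

# Client side: the gradient that would be sent to the server.
x_true = torch.rand(1, 3, 32, 32)   # private training image
y_true = torch.tensor([3])          # private label (assumed known here)
true_grads = [g.detach() for g in torch.autograd.grad(
    criterion(model(x_true), y_true), model.parameters())]

# Server side: reconstruct the image by matching gradients.
x_dummy = torch.rand(1, 3, 32, 32, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy])

def closure():
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(
        criterion(model(x_dummy), y_true), model.parameters(), create_graph=True)
    # Squared distance between the dummy gradient and the observed gradient.
    loss = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    loss.backward()
    return loss

for _ in range(50):
    opt.step(closure)

print("mean absolute pixel error:", (x_dummy - x_true).abs().mean().item())
```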
In this project, a specific gradient inversion attack that uses a generative adversarial network as an image prior is implemented against a simulated federated learning setup. Using only the gradients shared by clients, the project demonstrates how human facial images can be reconstructed, showing that federated learning is not by itself a privacy-preserving paradigm. Analysis of the experimental results also shows that increasing the batch size or the image dimensions affects the quality of the reconstructed images. Lastly, future work on applying federated learning to language models, along with defenses against gradient inversion, is discussed.
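The GAN-prior variant described in the summary can be sketched in the same style: instead of optimizing pixels directly, the attacker optimizes a latent code z so that the generator output G(z) reproduces the observed gradient, which constrains reconstructions to look like natural images. The toy generator and linear classifier below are stand-ins assumed for illustration; the project would use a pre-trained face GAN and the actual federated model.

```python
import torch
import torch.nn as nn

latent_dim = 64

# Stand-in for a pre-trained GAN generator (an assumption for illustration).
generator = nn.Sequential(
    nn.Linear(latent_dim, 3 * 32 * 32), nn.Tanh(),
    nn.Unflatten(1, (3, 32, 32)),
)
for p in generator.parameters():
    p.requires_grad_(False)          # the prior is fixed; only z is optimized

# The federated model under attack (a toy linear classifier here).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
criterion = nn.CrossEntropyLoss()

# Gradient observed by the server for one private image (simulated here).
x_true, y_true = torch.rand(1, 3, 32, 32), torch.tensor([0])
true_grads = [g.detach() for g in torch.autograd.grad(
    criterion(model(x_true), y_true), model.parameters())]

z = torch.randn(1, latent_dim, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    x_hat = generator(z)             # candidate image from the GAN prior
    dummy_grads = torch.autograd.grad(
        criterion(model(x_hat), y_true), model.parameters(), create_graph=True)
    # Match the observed gradient within the image space induced by the prior.
    loss = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    loss.backward()
    opt.step()
```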