Gradient inversion-based inference attack against federated learning

Federated learning is a state-of-the-art paradigm in which deep learning models hosted on a server can be trained without direct access to the private training data. In federated learning, clients transfer gradients to the server, which uses them to further improve the model. However, the transferred gradients can leak the private data to the server, which is a concern in many real-life applications, such as medical image classification. Such an attack is called gradient inversion. In this project, a specific gradient inversion attack, which uses generative adversarial networks to generate an image prior, is implemented on a simulated federated learning paradigm. From the obtained gradients alone, this project demonstrates how human facial images can be reconstructed, showing that federated learning is not a privacy-preserving paradigm. Analysis of the experimental data also shows that increasing the batch size or the image dimensions can affect the quality of the reconstructed images. Lastly, suggestions for future work on applying federated learning to language models, along with gradient inversion defense techniques, are discussed.
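The gradient leakage the abstract describes can be illustrated on a toy model. The sketch below is a minimal, hypothetical illustration, not the GAN-prior attack the project implements: for a one-sample linear model with squared loss, the gradient the client shares is a scalar multiple of the private input, so the server can recover the input exactly once the unknown scale is solved from the label. All names and values here are illustrative assumptions.

```python
import numpy as np

# Toy setup: a shared linear model w (known to the server) and one private
# client sample (x_true, y). Prediction w @ x, loss 0.5 * (w @ x - y) ** 2.
w = np.array([0.5, -1.0, 2.0, 0.25])
x_true = np.array([1.0, 2.0, -0.5, 4.0])   # the "private" data
y = 1.0

# What the client transmits in one federated round: the gradient w.r.t. w.
r_true = w @ x_true - y          # residual; the server never sees this
g = r_true * x_true              # d(loss)/dw = (w @ x - y) * x

# Server-side inversion: g is a scalar multiple of x, so it leaks the
# input's direction outright. The unknown scale r satisfies
#   w @ g = r * (w @ x) = r * (r + y)  =>  r**2 + y*r - (w @ g) = 0,
# a quadratic the server can solve because it knows w, g, and the label y.
disc = y * y + 4.0 * (w @ g)     # equals (2*r_true + y)**2, so never negative
roots = [(-y + np.sqrt(disc)) / 2.0, (-y - np.sqrt(disc)) / 2.0]
candidates = [g / r for r in roots if abs(r) > 1e-12]

# One of the (at most two) candidates reproduces the private sample exactly.
leaked = any(np.allclose(c, x_true) for c in candidates)
print("private sample recovered from the gradient:", leaked)
```

A GAN-based attack such as the one implemented in this project replaces this closed-form step with iterative optimization: a dummy latent vector is pushed through the generator and updated until the gradients of the generated image match the ones the client shared, and the same leakage principle is what lets that search converge to realistic facial images.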


Bibliographic Details
Main Author: Chan, Joel Yuan Wei
Other Authors: Chang Chip Hong
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Subjects: Engineering::Electrical and electronic engineering
Online Access: https://hdl.handle.net/10356/172760
id sg-ntu-dr.10356-172760
record_format dspace
spelling sg-ntu-dr.10356-1727602023-12-22T15:43:40Z Gradient inversion-based inference attack against federated learning Chan, Joel Yuan Wei Chang Chip Hong School of Electrical and Electronic Engineering ECHChang@ntu.edu.sg Engineering::Electrical and electronic engineering Federated learning is a state-of-the-art paradigm in which deep learning models hosted on a server can be trained without direct access to the private training data. In federated learning, clients transfer gradients to the server, which uses them to further improve the model. However, the transferred gradients can leak the private data to the server, which is a concern in many real-life applications, such as medical image classification. Such an attack is called gradient inversion. In this project, a specific gradient inversion attack, which uses generative adversarial networks to generate an image prior, is implemented on a simulated federated learning paradigm. From the obtained gradients alone, this project demonstrates how human facial images can be reconstructed, showing that federated learning is not a privacy-preserving paradigm. Analysis of the experimental data also shows that increasing the batch size or the image dimensions can affect the quality of the reconstructed images. Lastly, suggestions for future work on applying federated learning to language models, along with gradient inversion defense techniques, are discussed. Bachelor of Engineering (Information Engineering and Media) 2023-12-19T23:51:40Z 2023-12-19T23:51:40Z 2023 Final Year Project (FYP) Chan, J. Y. W. (2023). Gradient inversion-based inference attack against federated learning. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/172760 https://hdl.handle.net/10356/172760 en A2308-222 application/pdf Nanyang Technological University
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Electrical and electronic engineering
spellingShingle Engineering::Electrical and electronic engineering
Chan, Joel Yuan Wei
Gradient inversion-based inference attack against federated learning
description Federated learning is a state-of-the-art paradigm in which deep learning models hosted on a server can be trained without direct access to the private training data. In federated learning, clients transfer gradients to the server, which uses them to further improve the model. However, the transferred gradients can leak the private data to the server, which is a concern in many real-life applications, such as medical image classification. Such an attack is called gradient inversion. In this project, a specific gradient inversion attack, which uses generative adversarial networks to generate an image prior, is implemented on a simulated federated learning paradigm. From the obtained gradients alone, this project demonstrates how human facial images can be reconstructed, showing that federated learning is not a privacy-preserving paradigm. Analysis of the experimental data also shows that increasing the batch size or the image dimensions can affect the quality of the reconstructed images. Lastly, suggestions for future work on applying federated learning to language models, along with gradient inversion defense techniques, are discussed.
author2 Chang Chip Hong
author_facet Chang Chip Hong
Chan, Joel Yuan Wei
format Final Year Project
author Chan, Joel Yuan Wei
author_sort Chan, Joel Yuan Wei
title Gradient inversion-based inference attack against federated learning
title_short Gradient inversion-based inference attack against federated learning
title_full Gradient inversion-based inference attack against federated learning
title_fullStr Gradient inversion-based inference attack against federated learning
title_full_unstemmed Gradient inversion-based inference attack against federated learning
title_sort gradient inversion-based inference attack against federated learning
publisher Nanyang Technological University
publishDate 2023
url https://hdl.handle.net/10356/172760
_version_ 1787136768233963520