Membership inference vulnerabilities in peer-to-peer federated learning
Main Authors:
Other Authors:
Format: Conference or Workshop Item
Language: English
Published: 2024
Subjects:
Online Access: https://hdl.handle.net/10356/173390
Institution: Nanyang Technological University
Summary: Federated learning is emerging as an efficient approach to exploit data silos that form due to regulations on data sharing and usage, leveraging distributed resources to improve the training of ML models. It is a fitting technology for cyber-physical systems in applications such as connected autonomous vehicles, smart farming, and IoT surveillance. By design, every participant in federated learning has access to the latest ML model, so it becomes all the more important to protect the model's knowledge and to keep the training data and its properties private. In this paper, we survey the literature on ML attacks to assess the risks that apply in a peer-to-peer (P2P) federated learning setup. We then perform membership inference attacks in a P2P federated learning setting with colluding adversaries to evaluate the privacy-accuracy trade-offs in a deep neural network, demonstrating the extent of data leakage possible.
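The record itself contains no code. As a rough, hypothetical illustration of the class of attack the abstract describes, the sketch below implements a simple loss-threshold membership inference test, a standard baseline in this literature: samples on which the shared model's loss is unusually low are guessed to be training-set members. The function name, the synthetic loss distributions, and the threshold heuristic are all assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def loss_threshold_membership_inference(losses, threshold):
    """Predict membership: samples whose loss falls below `threshold`
    are guessed to be training-set members (loss-threshold baseline)."""
    return losses < threshold

# --- Hypothetical usage -------------------------------------------------
# Suppose a colluding peer in the P2P federation has recorded the shared
# model's per-sample loss on a set of candidate records. We simulate those
# losses here: members tend to have lower loss than non-members.
rng = np.random.default_rng(0)
member_losses = rng.exponential(scale=0.2, size=1000)      # members: low loss
non_member_losses = rng.exponential(scale=1.0, size=1000)  # non-members: higher loss

losses = np.concatenate([member_losses, non_member_losses])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])   # 1 = member

# Calibrate the threshold as the mean training loss, a common heuristic.
threshold = member_losses.mean()
preds = loss_threshold_membership_inference(losses, threshold)

accuracy = (preds == labels).mean()
print(f"attack accuracy: {accuracy:.3f}")
```

In a P2P federation every peer receives the latest shared model, so a colluding peer could run such a test directly against it, which is why the abstract stresses keeping the training data and its properties private.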