Membership inference vulnerabilities in peer-to-peer federated learning

Federated learning is emerging as an efficient approach to exploit data silos that form due to regulations on data sharing and usage, thereby leveraging distributed resources to improve the learning of ML models. It is a fitting technology for cyber-physical systems in applications such as connected autonomous vehicles, smart farming, and IoT surveillance. By design, every participant in federated learning has access to the latest ML model. In such a scenario, it becomes all the more important to protect the model's knowledge and to keep the training data and its properties private. In this paper, we survey the literature on ML attacks to assess the risks that apply in a peer-to-peer (P2P) federated learning setup. We perform membership inference attacks specifically in a P2P federated learning setting with colluding adversaries to evaluate the privacy-accuracy trade-offs in a deep neural network, demonstrating the extent of data leakage possible.
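The membership inference attack named in the abstract can be illustrated, in spirit, by a minimal confidence-threshold sketch: a model tends to be more confident on examples it was trained on ("members") than on unseen examples, and an attacker exploits that gap. This is an illustration only, not the paper's P2P or colluding-adversary attack; the synthetic data, logistic-regression target model, and threshold `tau` are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data; members and non-members are
# drawn from the same distribution, but only members are trained on.
def make_data(n):
    X = rng.normal(size=(n, 5))
    w_true = np.array([1.5, -2.0, 0.5, 1.0, -1.0])
    y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)
    return X, y

X_mem, y_mem = make_data(200)   # "members": used for training
X_non, y_non = make_data(200)   # "non-members": never seen in training

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a small logistic-regression "target model" by gradient descent.
w = np.zeros(5)
for _ in range(500):
    p = sigmoid(X_mem @ w)
    w -= 0.1 * X_mem.T @ (p - y_mem) / len(y_mem)

# Model confidence assigned to the TRUE label of each example.
def true_label_confidence(X, y):
    p = sigmoid(X @ w)
    return np.where(y == 1, p, 1 - p)

tau = 0.5  # hypothetical threshold; in practice tuned, e.g. via shadow models
conf_mem = true_label_confidence(X_mem, y_mem)
conf_non = true_label_confidence(X_non, y_non)

# Attack accuracy: guess "member" when confidence exceeds tau; average
# the rate of correct guesses over members and non-members.
attack_acc = 0.5 * ((conf_mem > tau).mean() + (conf_non <= tau).mean())
print(f"membership inference attack accuracy: {attack_acc:.2f}")
```

An attack accuracy meaningfully above 0.5 indicates membership leakage; how large that gap gets as models overfit is exactly the kind of privacy-accuracy trade-off the paper evaluates.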

Bibliographic Details
Main Authors: Luqman, Alka; Chattopadhyay, Anupam; Lam Kwok-Yan
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language: English
Published: 2024
Subjects: Computer and Information Science; Federated Learning; Neural Networks
Online Access:https://hdl.handle.net/10356/173390
Institution: Nanyang Technological University
Published in: 2023 Secure and Trustworthy Deep Learning Systems Workshop (SecTL '23), July 2023
Research Centre: Strategic Centre for Research in Privacy-Preserving Technologies & Systems (SCRIPTS)
Citation: Luqman, A., Chattopadhyay, A. & Lam Kwok-Yan (2023). Membership inference vulnerabilities in peer-to-peer federated learning. 2023 Secure and Trustworthy Deep Learning Systems Workshop (SecTL '23), July 2023, 6-.
DOI: 10.1145/3591197.3593638
ISBN: 9798400701818
Funding: National Research Foundation (NRF). This research is supported by the National Research Foundation, Singapore under its Strategic Capability Research Centres Funding Initiative.
Rights: © 2023 Copyright held by the owner/author(s). This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.