Practical attribute reconstruction attack against federated learning
Existing federated learning (FL) designs have been shown to exhibit vulnerabilities which can be exploited by adversaries to compromise data privacy. However, most current works conduct attacks by leveraging gradients calculated on a small batch of data. This setting is not realistic as gradients are normally shared after at least 1 epoch of local training on each participant's local data in FL for communication efficiency. In this work, we conduct a unique systematic evaluation of attribute reconstruction attack (ARA) launched by the malicious server in the FL system, and empirically demonstrate that the shared local model gradients after 1 epoch of local training can still reveal sensitive attributes of local training data. To demonstrate this leakage, we develop a more effective and efficient gradient matching based method called cos-matching to reconstruct the sensitive attributes of any victim participant's training data. Based on the reconstructed training data attributes, we further show that an attacker can even reconstruct the sensitive attributes of any records that are not included in any participant's training data, thus opening a new attack surface in FL. Extensive experiments show that the proposed method achieves better attribute attack performance than existing state-of-the-art methods.
Main Authors: Chen, Chen; Lyu, Lingjuan; Yu, Han; Chen, Gang
Other Authors: College of Computing and Data Science
Format: Article
Language: English
Published: 2024
Subjects: Computer and Information Science; Artificial intelligence; Federated learning
Online Access: https://hdl.handle.net/10356/179056
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-179056
record_format: dspace
spelling:
  Record timestamp: 2024-07-18T00:18:12Z
  Title: Practical attribute reconstruction attack against federated learning
  Authors: Chen, Chen; Lyu, Lingjuan; Yu, Han; Chen, Gang
  Affiliations: College of Computing and Data Science; School of Computer Science and Engineering
  Subjects: Computer and Information Science; Artificial intelligence; Federated learning
  Funders: Agency for Science, Technology and Research (A*STAR); AI Singapore; Nanyang Technological University
  Version: Submitted/Accepted version
  Funding: This research is supported by the Key Research and Development Program of Zhejiang Province of China (No. 2020C01024); the NSF of China Grant No. 62050099; the Natural Science Foundation of Zhejiang Province of China (No. LY18F020005); the National Research Foundation, Singapore under its AI Singapore Programme (AISG2-RP-2020-019); the Joint NTU-WeBank Research Centre on Fintech (NWJ-2020-008); the Nanyang Assistant Professorships (NAP); the RIE 2020 Advanced Manufacturing and Engineering Programmatic Fund (A20G8b0102), Singapore; and the SDU-NTU Centre for AI Research (C-FAIR).
  Published: 2022
  Type: Journal Article
  Citation: Chen, C., Lyu, L., Yu, H. & Chen, G. (2022). Practical attribute reconstruction attack against federated learning. IEEE Transactions on Big Data. https://dx.doi.org/10.1109/TBDATA.2022.3159236
  ISSN: 2332-7790
  Handle: https://hdl.handle.net/10356/179056
  DOI: 10.1109/TBDATA.2022.3159236
  Language: en
  Grants: AISG2-RP-2020-019; A20G8b0102; NWJ-2020-008
  Journal: IEEE Transactions on Big Data
  Rights: © 2021 IEEE. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at http://doi.org/10.1109/TBDATA.2022.3159236.
  File format: application/pdf
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Computer and Information Science; Artificial intelligence; Federated learning
description:
Existing federated learning (FL) designs have been shown to exhibit vulnerabilities which can be exploited by adversaries to compromise data privacy. However, most current works conduct attacks by leveraging gradients calculated on a small batch of data. This setting is not realistic as gradients are normally shared after at least 1 epoch of local training on each participant's local data in FL for communication efficiency. In this work, we conduct a unique systematic evaluation of attribute reconstruction attack (ARA) launched by the malicious server in the FL system, and empirically demonstrate that the shared local model gradients after 1 epoch of local training can still reveal sensitive attributes of local training data. To demonstrate this leakage, we develop a more effective and efficient gradient matching based method called cos-matching to reconstruct the sensitive attributes of any victim participant's training data. Based on the reconstructed training data attributes, we further show that an attacker can even reconstruct the sensitive attributes of any records that are not included in any participant's training data, thus opening a new attack surface in FL. Extensive experiments show that the proposed method achieves better attribute attack performance than existing state-of-the-art methods.
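The gradient-matching idea behind cos-matching can be illustrated with a minimal sketch. This is not the authors' implementation: the toy logistic-regression model, the batch setup, and all variable names are hypothetical assumptions. The attacker (here, the malicious server) knows a victim record's non-sensitive attributes, substitutes each candidate value of the sensitive attribute, and keeps the candidate whose resulting gradient has the highest cosine similarity to the gradient the victim actually shared.

```python
import numpy as np

# Hypothetical toy setup: a logistic-regression "global model" whose averaged
# batch gradient the victim shares with the server. The last feature column
# is the binary sensitive attribute the attacker tries to reconstruct.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient(w, X, y):
    """Average logistic-loss gradient over the batch."""
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y)

def cos_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
n, d = 32, 5                       # batch size, feature count (assumed)
w = rng.normal(size=d)             # current global model weights
X = rng.normal(size=(n, d))
X[:, -1] = rng.integers(0, 2, n)   # true binary sensitive attribute
y = rng.integers(0, 2, n).astype(float)

g_observed = gradient(w, X, y)     # what the victim shares with the server

# Attacker reconstructs record 0's sensitive attribute: try each candidate
# value and keep the one whose batch gradient best matches the observed
# gradient in cosine similarity.
best, best_sim = None, -np.inf
for candidate in (0.0, 1.0):
    X_try = X.copy()
    X_try[0, -1] = candidate
    sim = cos_sim(gradient(w, X_try, y), g_observed)
    if sim > best_sim:
        best, best_sim = candidate, sim

print(best == X[0, -1])  # True: the matching candidate recovers the value
```

Cosine similarity, rather than Euclidean distance, makes the match insensitive to gradient magnitude, which is one plausible reason a direction-based criterion can stay informative even after a full epoch of local averaging.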
author2: College of Computing and Data Science
format: Article
author: Chen, Chen; Lyu, Lingjuan; Yu, Han; Chen, Gang
author_sort: Chen, Chen
title: Practical attribute reconstruction attack against federated learning
publishDate: 2024
url: https://hdl.handle.net/10356/179056
_version_: 1814047416241881088