Guaranteeing data privacy in federated unlearning with dynamic user participation
Federated Unlearning (FU) is gaining prominence for its capability to eliminate influences of specific users' data from trained global Federated Learning (FL) models. A straightforward FU method involves removing the unlearned user-specified data and subsequently obtaining a new global FL model from scratch with all remaining user data, a process that unfortunately leads to considerable overhead. To enhance unlearning efficiency, a widely adopted strategy employs clustering, dividing FL users into clusters, with each cluster maintaining its own FL model. The final inference is then determined by aggregating the majority vote from the inferences of these sub-models. This method confines unlearning processes to individual clusters for removing the training data of a particular user, thereby enhancing unlearning efficiency by eliminating the need for participation from all remaining user data. However, current clustering-based FU schemes mainly concentrate on refining clustering to boost unlearning efficiency but without addressing the issue of the potential information leakage from FL users' gradients, a privacy concern that has been extensively studied. Typically, integrating secure aggregation (SecAgg) schemes within each cluster can facilitate a privacy-preserving FU. Nevertheless, crafting a clustering methodology that seamlessly incorporates SecAgg schemes is challenging, particularly in scenarios involving adversarial users and dynamic users. In this connection, we systematically explore the integration of SecAgg protocols within the most widely used federated unlearning scheme, which is based on clustering, to establish a privacy-preserving FU framework, aimed at ensuring privacy while effectively managing dynamic user participation. Comprehensive theoretical assessments and experimental results show that our proposed scheme achieves comparable unlearning effectiveness, alongside offering improved privacy protection and resilience in the face of varying user participation.
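As a rough illustration of the clustering strategy described in the abstract, the Python sketch below shows how each cluster can keep its own sub-model, how the final inference is the majority vote over the sub-models, and why unlearning one user only touches that user's cluster. It is a minimal sketch under assumptions of our own (the helpers `train_cluster_model` and `model.predict` are hypothetical), not the implementation from the paper.

```python
import numpy as np

def majority_vote(labels):
    """Final inference: the class predicted by the most cluster sub-models."""
    return int(np.argmax(np.bincount(np.asarray(labels))))

class ClusteredFU:
    """Toy clustering-based federated unlearning (hypothetical helpers assumed)."""

    def __init__(self, clusters, train_cluster_model):
        # clusters: list of lists of user ids.
        # train_cluster_model(users) -> model with a predict(x) method
        # (both are illustrative assumptions, not taken from the paper).
        self.clusters = [list(c) for c in clusters]
        self.train = train_cluster_model
        self.models = [self.train(c) for c in self.clusters]

    def predict(self, x):
        # Every cluster's sub-model votes; the majority decides the final label.
        return majority_vote([m.predict(x) for m in self.models])

    def unlearn(self, user_id):
        # Only the cluster containing the user is retrained without that user's
        # data, instead of retraining a single global model from scratch.
        for k, cluster in enumerate(self.clusters):
            if user_id in cluster:
                cluster.remove(user_id)
                self.models[k] = self.train(cluster)
                return k
        return None
```

Because retraining is confined to a single cluster, the cost of unlearning scales with the size of that cluster rather than with the whole federation, which is the efficiency argument made in the abstract.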
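The abstract also notes that secure aggregation (SecAgg) within each cluster keeps the server from seeing individual users' gradients. The sketch below illustrates one common SecAgg building block, pairwise masking, under simplifying assumptions (pre-shared pairwise seeds, no dropout or adversary handling); it only shows why the masks cancel in the aggregate and is not the protocol used in the paper.

```python
import numpy as np

def masked_update(user_id, update, peer_seeds):
    """Hide a user's update by adding pairwise masks derived from shared seeds."""
    masked = update.astype(float)
    for peer_id, seed in peer_seeds.items():
        mask = np.random.default_rng(seed).standard_normal(update.shape)
        # The lower-id user adds the mask, the higher-id user subtracts it,
        # so each pair's masks cancel once the server sums all updates.
        masked += mask if user_id < peer_id else -mask
    return masked

def secure_aggregate(masked_updates):
    """Server-side sum: individual updates stay hidden, masks cancel in the sum."""
    return np.sum(masked_updates, axis=0)

# Tiny usage example with three users in one cluster (seeds are illustrative).
pair_seed = {(0, 1): 11, (0, 2): 22, (1, 2): 33}
updates = {u: np.full(4, float(u + 1)) for u in range(3)}
masked = [
    masked_update(
        u,
        updates[u],
        {v: pair_seed[tuple(sorted((u, v)))] for v in range(3) if v != u},
    )
    for u in range(3)
]
print(secure_aggregate(masked))  # ~[6. 6. 6. 6.], i.e. the plain sum of 1+2+3
```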
| Main Authors: | Liu, Ziyao; Jiang, Yu; Jiang, Weifeng; Guo, Jiale; Zhao, Jun; Lam, Kwok-Yan |
|---|---|
| Other Authors: | College of Computing and Data Science |
| Format: | Article |
| Language: | English |
| Published: | 2025 |
| Subjects: | Computer and Information Science; AI safety; Digital trust |
| Online Access: | https://hdl.handle.net/10356/182940 |
| Institution: | Nanyang Technological University |
| id | sg-ntu-dr.10356-182940 |
|---|---|
| record_format | dspace |
| spelling | sg-ntu-dr.10356-1829402025-03-10T06:45:46Z Guaranteeing data privacy in federated unlearning with dynamic user participation Liu, Ziyao Jiang, Yu Jiang, Weifeng Guo, Jiale Zhao, Jun Lam, Kwok-Yan College of Computing and Data Science Computer and Information Science AI safety Digital trust Federated Unlearning (FU) is gaining prominence for its capability to eliminate influences of specific users' data from trained global Federated Learning (FL) models. A straightforward FU method involves removing the unlearned user-specified data and subsequently obtaining a new global FL model from scratch with all remaining user data, a process that unfortunately leads to considerable overhead. To enhance unlearning efficiency, a widely adopted strategy employs clustering, dividing FL users into clusters, with each cluster maintaining its own FL model. The final inference is then determined by aggregating the majority vote from the inferences of these sub-models. This method confines unlearning processes to individual clusters for removing the training data of a particular user, thereby enhancing unlearning efficiency by eliminating the need for participation from all remaining user data. However, current clustering-based FU schemes mainly concentrate on refining clustering to boost unlearning efficiency but without addressing the issue of the potential information leakage from FL users' gradients, a privacy concern that has been extensively studied. Typically, integrating secure aggregation (SecAgg) schemes within each cluster can facilitate a privacy-preserving FU. Nevertheless, crafting a clustering methodology that seamlessly incorporates SecAgg schemes is challenging, particularly in scenarios involving adversarial users and dynamic users. In this connection, we systematically explore the integration of SecAgg protocols within the most widely used federated unlearning scheme, which is based on clustering, to establish a privacy-preserving FU framework, aimed at ensuring privacy while effectively managing dynamic user participation. Comprehensive theoretical assessments and experimental results show that our proposed scheme achieves comparable unlearning effectiveness, alongside offering improved privacy protection and resilience in the face of varying user participation. 2025-03-10T06:45:46Z 2025-03-10T06:45:46Z 2024 Journal Article Liu, Z., Jiang, Y., Jiang, W., Guo, J., Zhao, J. & Lam, K. (2024). Guaranteeing data privacy in federated unlearning with dynamic user participation. IEEE Transactions On Dependable and Secure Computing, 3476533-. https://dx.doi.org/10.1109/TDSC.2024.3476533 1545-5971 https://hdl.handle.net/10356/182940 10.1109/TDSC.2024.3476533 2-s2.0-85207117385 3476533 en IEEE Transactions on Dependable and Secure Computing © 2024 IEEE. All rights reserved. |
| institution | Nanyang Technological University |
| building | NTU Library |
| continent | Asia |
| country | Singapore Singapore |
| content_provider | NTU Library |
| collection | DR-NTU |
| language | English |
| topic | Computer and Information Science AI safety Digital trust |
| spellingShingle | Computer and Information Science AI safety Digital trust Liu, Ziyao Jiang, Yu Jiang, Weifeng Guo, Jiale Zhao, Jun Lam, Kwok-Yan Guaranteeing data privacy in federated unlearning with dynamic user participation |
| description | Federated Unlearning (FU) is gaining prominence for its capability to eliminate influences of specific users' data from trained global Federated Learning (FL) models. A straightforward FU method involves removing the unlearned user-specified data and subsequently obtaining a new global FL model from scratch with all remaining user data, a process that unfortunately leads to considerable overhead. To enhance unlearning efficiency, a widely adopted strategy employs clustering, dividing FL users into clusters, with each cluster maintaining its own FL model. The final inference is then determined by aggregating the majority vote from the inferences of these sub-models. This method confines unlearning processes to individual clusters for removing the training data of a particular user, thereby enhancing unlearning efficiency by eliminating the need for participation from all remaining user data. However, current clustering-based FU schemes mainly concentrate on refining clustering to boost unlearning efficiency but without addressing the issue of the potential information leakage from FL users' gradients, a privacy concern that has been extensively studied. Typically, integrating secure aggregation (SecAgg) schemes within each cluster can facilitate a privacy-preserving FU. Nevertheless, crafting a clustering methodology that seamlessly incorporates SecAgg schemes is challenging, particularly in scenarios involving adversarial users and dynamic users. In this connection, we systematically explore the integration of SecAgg protocols within the most widely used federated unlearning scheme, which is based on clustering, to establish a privacy-preserving FU framework, aimed at ensuring privacy while effectively managing dynamic user participation. Comprehensive theoretical assessments and experimental results show that our proposed scheme achieves comparable unlearning effectiveness, alongside offering improved privacy protection and resilience in the face of varying user participation. |
| author2 | College of Computing and Data Science |
| author_facet | College of Computing and Data Science Liu, Ziyao Jiang, Yu Jiang, Weifeng Guo, Jiale Zhao, Jun Lam, Kwok-Yan |
| format | Article |
| author | Liu, Ziyao Jiang, Yu Jiang, Weifeng Guo, Jiale Zhao, Jun Lam, Kwok-Yan |
| author_sort | Liu, Ziyao |
| title | Guaranteeing data privacy in federated unlearning with dynamic user participation |
| title_short | Guaranteeing data privacy in federated unlearning with dynamic user participation |
| title_full | Guaranteeing data privacy in federated unlearning with dynamic user participation |
| title_fullStr | Guaranteeing data privacy in federated unlearning with dynamic user participation |
| title_full_unstemmed | Guaranteeing data privacy in federated unlearning with dynamic user participation |
| title_sort | guaranteeing data privacy in federated unlearning with dynamic user participation |
| publishDate | 2025 |
| url | https://hdl.handle.net/10356/182940 |
| _version_ | 1826362247078739968 |