Guaranteeing data privacy in federated unlearning with dynamic user participation


Bibliographic Details
Main Authors: Liu, Ziyao, Jiang, Yu, Jiang, Weifeng, Guo, Jiale, Zhao, Jun, Lam, Kwok-Yan
Other Authors: College of Computing and Data Science
Format: Article
Language: English
Published: 2025
Subjects:
Online Access: https://hdl.handle.net/10356/182940
Institution: Nanyang Technological University
Description
Summary: Federated Unlearning (FU) is gaining prominence for its capability to eliminate the influence of specific users' data on trained global Federated Learning (FL) models. A straightforward FU method removes the data specified by the unlearned user and then retrains a new global FL model from scratch on all remaining user data, a process that unfortunately incurs considerable overhead. To improve unlearning efficiency, a widely adopted strategy employs clustering: FL users are divided into clusters, each cluster maintains its own FL model, and the final inference is determined by a majority vote over the inferences of these sub-models. This confines the unlearning process to a single cluster when removing a particular user's training data, improving efficiency by eliminating the need for all remaining users to participate. However, current clustering-based FU schemes concentrate mainly on refining clustering to boost unlearning efficiency, without addressing the potential information leakage from FL users' gradients, a privacy concern that has been extensively studied. Typically, integrating secure aggregation (SecAgg) schemes within each cluster can enable privacy-preserving FU. Nevertheless, crafting a clustering methodology that seamlessly incorporates SecAgg schemes is challenging, particularly in scenarios involving adversarial and dynamic users. To this end, we systematically explore the integration of SecAgg protocols within the most widely used clustering-based federated unlearning scheme to establish a privacy-preserving FU framework that ensures privacy while effectively managing dynamic user participation.
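The clustering strategy described above can be illustrated with a short toy example. This is a minimal sketch only: the trivial majority-label "sub-models", the static clustering, and all names here are assumptions for illustration, not the paper's actual algorithm.

```python
from collections import Counter

def train_cluster(labels):
    # Stand-in for FL training within one cluster: the "model" simply
    # predicts the majority label among its users' data.
    return Counter(labels).most_common(1)[0][0]

def predict(models):
    # Final inference: majority vote over the sub-models' predictions.
    return Counter(models).most_common(1)[0][0]

# Toy setup: each user holds a single label (assumed data).
user_labels = {0: 0, 1: 0, 2: 1, 3: 1, 4: 0, 5: 0}
clusters = [[0, 1], [2, 3], [4, 5]]   # assumed static clustering

models = [train_cluster([user_labels[u] for u in c]) for c in clusters]
assert predict(models) == 0           # sub-model votes: [0, 1, 0]

# Unlearn user 3: delete their data and retrain ONLY cluster 1;
# clusters 0 and 2 are untouched, which is where the efficiency gain comes from.
del user_labels[3]
clusters[1].remove(3)
models[1] = train_cluster([user_labels[u] for u in clusters[1]])
assert predict(models) == 0           # votes: [0, 1, 0] -> unchanged here
```

The key property the sketch shows is that unlearning touches a single cluster's model while the majority-vote inference over all clusters remains available throughout.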
Comprehensive theoretical assessments and experimental results show that our proposed scheme achieves comparable unlearning effectiveness, while offering improved privacy protection and resilience under varying user participation.
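To make the privacy concern concrete, the following is a minimal sketch of pairwise-mask secure aggregation, the mechanism underlying SecAgg-style protocols: each pair of users shares a random mask that one adds and the other subtracts, so the masks cancel in the cluster-wide sum while individual updates stay hidden from the server. The honest-but-curious server, absence of dropouts, integer updates, and fixed modulus are all simplifying assumptions, not details of the paper's protocol.

```python
import random

MOD = 2**32  # work in a finite group so masked values reveal nothing individually

def mask_updates(updates, seed=0):
    # Each pair (i, j) shares a pairwise random mask m: user i adds m,
    # user j subtracts m. In a real protocol the mask would be derived
    # from a key agreed between i and j, not a shared seed.
    rng = random.Random(seed)
    masked = list(updates)
    n = len(updates)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.randrange(MOD)
            masked[i] = (masked[i] + m) % MOD
            masked[j] = (masked[j] - m) % MOD
    return masked

updates = [5, 11, 7]                    # toy per-user "gradients" (integers)
masked = mask_updates(updates)
aggregate = sum(masked) % MOD           # the server only ever sees masked values
assert aggregate == sum(updates) % MOD  # masks cancel: the true sum (23) survives
```

The tension the paper addresses follows directly from this structure: the masks are agreed between specific pairs of users, so when clusters are re-formed or users join and drop out dynamically, the pairwise cancellation must be re-established, which is what makes combining SecAgg with clustering-based unlearning non-trivial.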