Is aggregation the only choice? Federated learning via layer-wise model recombination
Main Authors: Xiaofei Xie (Singapore Management University); Xian Wei (East China Normal University); Mingsong Chen (East China Normal University)
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Subjects: Computing methodologies → Distributed artificial intelligence
Online Access: https://ink.library.smu.edu.sg/sis_research/9507
https://ink.library.smu.edu.sg/context/sis_research/article/10507/viewcontent/Is_Aggregation_the_Only_Choice__Federated_Learning_via_Layer_wise_Model_Recombination.pdf
Institution: Singapore Management University
Summary: Although Federated Learning (FL) enables global model training across clients without compromising their raw data, existing Federated Averaging (FedAvg)-based methods suffer from low inference performance because data are unevenly distributed among clients. Specifically, different data distributions among clients lead to different optimization directions for local models, and aggregating these local models usually yields a poorly generalized global model that performs worse on most clients. To address this issue, inspired by the geometric observation that a well-generalized solution lies in a flat area rather than a sharp area of the loss landscape, we propose a novel, heuristic FL paradigm named FedMR (Federated Model Recombination). The goal of FedMR is to guide the recombined models to be trained towards a flat area. Unlike conventional FedAvg-based methods, in FedMR the cloud server recombines the collected local models by shuffling each of their layers, generating multiple recombined models for local training on clients rather than a single aggregated global model. Since a flat area is larger than a sharp area, recombined models have a higher probability of landing in a flat area when the local models are located in different areas; once all recombined models lie in the same flat area, they are optimized in the same direction. We theoretically analyze the convergence of model recombination. Experimental results show that, compared with state-of-the-art FL methods, FedMR significantly improves inference accuracy without exposing the privacy of each client.
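The abstract only sketches the server-side recombination step in prose. As a rough, non-authoritative illustration of the idea (not the authors' implementation), the Python snippet below shuffles each layer independently across the collected client models so that every recombined model is composed of layers trained on different clients; the dict-of-arrays model representation and the `recombine` helper are assumptions introduced here for the example.

```python
# Minimal sketch of layer-wise model recombination (assumed representation:
# each local model is a dict mapping layer names to NumPy weight arrays).
import random
import numpy as np

def recombine(local_models, seed=None):
    """For every layer, shuffle which client's layer goes to which output model,
    so each recombined model mixes layers trained on different clients
    (no parameter averaging is performed)."""
    rng = random.Random(seed)
    layer_names = list(local_models[0].keys())
    n = len(local_models)
    recombined = [dict() for _ in range(n)]
    for name in layer_names:
        # Gather this layer from every client model, then permute the assignment.
        order = list(range(n))
        rng.shuffle(order)
        for dst, src in enumerate(order):
            recombined[dst][name] = local_models[src][name]
    return recombined

# Toy usage: three "clients", each with a two-layer model whose weights are
# filled with the client index so the shuffled provenance is visible.
clients = [
    {"conv1": np.full((2, 2), k), "fc": np.full((3,), k)} for k in range(3)
]
for i, m in enumerate(recombine(clients, seed=0)):
    print(i, {name: int(w.flat[0]) for name, w in m.items()})
```

Because each layer is redistributed by a permutation, every client's trained layer appears exactly once per round, which matches the paper's idea of sending multiple recombined models back for further local training instead of a single aggregated global model.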