TravellingFL: communication-efficient peer-to-peer federated learning



Bibliographic Details
Main Authors: Gupta, Vansh, Luqman, Alka, Chattopadhyay, Nandish, Chattopadhyay, Anupam, Niyato, Dusit
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/173391
Summary: Peer-to-peer federated learning is a distributed machine learning paradigm whose primary goal is to learn a well-performing global model by collaboratively training a shared model at different data hubs without the need to share data. Due to its immense practical applications, there is growing attention towards various challenges of efficient federated learning, including communication efficiency, assumptions on connectivity, data heterogeneity, and enhanced privacy. In this paper, we address the problem of dynamic network topologies in federated learning. We present a technique that helps new participants in peer-to-peer federated learning reach the best possible accuracy by leveraging learning at other devices in a communication-efficient manner. We model the costs in federated learning and apply a graph-theoretical framework to show that one can draw from a range of graph-based algorithms to construct an efficient communication algorithm on a connected network, thereby matching the inference efficiency of centralized federated learning. We conduct experiments with varied graph formations and sizes to validate our claims.
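The summary describes modeling federated learning costs with a graph-theoretical framework, where graph-based algorithms over a connected network of peers yield a communication-efficient schedule. The paper's exact algorithm is not given in this record, so the sketch below is only an illustrative assumption: it represents peers as nodes, pairwise communication costs as weighted edges, and uses Prim's minimum spanning tree algorithm (one classic graph-based choice) to find a cheapest set of links that still connects every peer.

```python
import heapq

def min_communication_cost(n_peers, links):
    """Illustrative sketch (not the paper's algorithm): given n_peers nodes
    and links as (u, v, cost) triples, return the total cost of a minimum
    spanning tree connecting all peers, via Prim's algorithm."""
    # Build an undirected adjacency list of communication costs.
    adj = {u: [] for u in range(n_peers)}
    for u, v, cost in links:
        adj[u].append((cost, v))
        adj[v].append((cost, u))

    visited = set()
    heap = [(0, 0)]  # (cost to reach, peer); start from peer 0
    total = 0
    while heap and len(visited) < n_peers:
        cost, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        total += cost
        for next_cost, v in adj[u]:
            if v not in visited:
                heapq.heappush(heap, (next_cost, v))

    if len(visited) < n_peers:
        raise ValueError("network is not connected")
    return total

# Four peers; the MST keeps links (0-1), (1-2), (2-3) for total cost 4.
links = [(0, 1, 1), (1, 2, 2), (2, 3, 1), (0, 3, 5), (0, 2, 4)]
print(min_communication_cost(4, links))  # → 4
```

A spanning tree is just one possible objective; depending on how the costs are modeled, other graph algorithms (shortest paths, Steiner trees, travelling-salesman-style tours, as the title hints) could fill the same role.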