Faster rates for compressed federated learning with client-variance reduction


Bibliographic Details
Main Authors: ZHAO, Haoyu, BURLACHENKO, Konstantin, LI, Zhize, RICHTARIK, Peter
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/9607
https://ink.library.smu.edu.sg/context/sis_research/article/10607/viewcontent/SIMODS24_cofig_av.pdf
Institution: Singapore Management University
Description
Summary: Due to the communication bottleneck in distributed and federated learning applications, algorithms using communication compression have attracted significant attention and are widely used in practice. Moreover, the huge number, high heterogeneity, and limited availability of clients result in high client variance. This paper addresses these two issues together by proposing the compressed and client-variance-reduced methods COFIG and FRECON. We prove an $O\left(\frac{(1+\omega)^{3/2}\sqrt{N} + (1+\omega)N^{2/3}}{S\epsilon^2}\right)$ bound on the number of communication rounds of COFIG in the nonconvex setting, where $N$ is the total number of clients, $S$ is the number of clients participating in each round, $\epsilon$ is the convergence error, and $\omega$ is the variance parameter associated with the compression operator. For FRECON, we prove an $O\left(\frac{(1+\omega)\sqrt{N}}{S\epsilon^2}\right)$ bound on the number of communication rounds. In the convex setting, COFIG converges within $O\left(\frac{(1+\omega)\sqrt{N}}{S\epsilon}\right)$ communication rounds, which, to the best of our knowledge, is also the first convergence result for compression schemes that do not communicate with all the clients in each round. We stress that neither COFIG nor FRECON needs to communicate with all the clients, and they enjoy the first or faster convergence results for convex and nonconvex federated learning in the regimes considered. Experimental results point to an empirical superiority of COFIG and FRECON over existing baselines.
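The bounds above depend on the compression operator only through its variance parameter $\omega$, defined for an unbiased compressor $C$ by $E[C(x)] = x$ and $E\|C(x) - x\|^2 \le \omega \|x\|^2$. As a minimal illustrative sketch (not taken from the paper; the function name and parameters are chosen here for illustration), the standard rand-$k$ sparsifier is one such compressor with $\omega = d/k - 1$:

import numpy as np

def rand_k(x: np.ndarray, k: int, rng: np.random.Generator) -> np.ndarray:
    """Rand-k sparsification: keep k random coordinates, rescaled by d/k.

    A standard unbiased compression operator C satisfying
    E[C(x)] = x and E||C(x) - x||^2 <= omega * ||x||^2,
    with omega = d/k - 1 for input dimension d.
    """
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)  # k coordinates chosen uniformly
    out[idx] = (d / k) * x[idx]                 # rescaling makes the compressor unbiased
    return out

# Example: d = 1000, k = 100 gives omega = d/k - 1 = 9, so the nonconvex
# COFIG bound above scales with (1 + omega)^{3/2} = 10^{3/2}.
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
cx = rand_k(x, k=100, rng=rng)

Coarser compression (smaller $k$) shrinks each message by a factor of roughly $k/d$ but inflates $\omega$, and hence the round counts in the bounds; the bounds quantify this trade-off between per-round communication cost and the number of rounds.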