Faster rates for compressed federated learning with client-variance reduction

Due to the communication bottleneck in distributed and federated learning applications, algorithms using communication compression have attracted significant attention and are widely used in practice. Moreover, the huge number, high heterogeneity, and limited availability of clients result in high client variance. This paper addresses these two issues together by proposing compressed and client-variance-reduced methods COFIG and FRECON. We prove an O((1+\omega)^{3/2}\sqrt{N}/(S\epsilon^2) + (1+\omega)N^{2/3}/(S\epsilon^2)) bound on the number of communication rounds of COFIG in the nonconvex setting, where N is the total number of clients, S is the number of clients participating in each round, \epsilon is the convergence error, and \omega is the variance parameter associated with the compression operator. For FRECON, we prove an O((1+\omega)\sqrt{N}/(S\epsilon^2)) bound on the number of communication rounds. In the convex setting, COFIG converges within O((1+\omega)\sqrt{N}/(S\epsilon)) communication rounds, which, to the best of our knowledge, is also the first convergence result for compression schemes that do not communicate with all the clients in each round. We stress that neither COFIG nor FRECON needs to communicate with all the clients, and they enjoy the first or faster convergence results for convex and nonconvex federated learning in the regimes considered. Experimental results point to an empirical superiority of COFIG and FRECON over existing baselines.
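For readability, the communication-round bounds stated in the abstract can be restated in display math (notation as above: N clients in total, S participating per round, convergence error \epsilon, compressor variance parameter \omega):

\[
\text{COFIG (nonconvex):}\quad O\!\left(\frac{(1+\omega)^{3/2}\sqrt{N}}{S\epsilon^{2}} + \frac{(1+\omega)N^{2/3}}{S\epsilon^{2}}\right),
\qquad
\text{FRECON (nonconvex):}\quad O\!\left(\frac{(1+\omega)\sqrt{N}}{S\epsilon^{2}}\right),
\qquad
\text{COFIG (convex):}\quad O\!\left(\frac{(1+\omega)\sqrt{N}}{S\epsilon}\right).
\]

To make the variance parameter \omega concrete, below is a minimal sketch of one standard unbiased compression operator, rand-k sparsification, for which \omega = d/k - 1 in dimension d. This is an illustrative member of the compressor class the abstract refers to, not necessarily the operator used in the paper's experiments.

```python
import numpy as np

def rand_k(x: np.ndarray, k: int, rng: np.random.Generator) -> np.ndarray:
    """Unbiased rand-k sparsifier: keep k random coordinates, rescale by d/k.

    Satisfies E[C(x)] = x and E||C(x) - x||^2 = (d/k - 1) * ||x||^2,
    i.e., the variance parameter is omega = d/k - 1.
    """
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)  # the k coordinates a client would transmit
    out[idx] = (d / k) * x[idx]                 # rescaling keeps the compressor unbiased
    return out

# Example: in dimension d = 10 with k = 2, omega = 10/2 - 1 = 4.
rng = np.random.default_rng(0)
gradient = rng.standard_normal(10)
print(rand_k(gradient, k=2, rng=rng))
```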


Bibliographic Details
Main Authors: ZHAO, Haoyu, BURLACHENKO, Konstantin, LI, Zhize, RICHTARIK, Peter
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects: federated learning; distributed optimization; communication compression; variance reduction; Databases and Information Systems; Theory and Algorithms
Online Access:https://ink.library.smu.edu.sg/sis_research/9607
https://ink.library.smu.edu.sg/context/sis_research/article/10607/viewcontent/SIMODS24_cofig_av.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-10607
record_format dspace
spelling sg-smu-ink.sis_research-10607 2024-11-23T15:56:02Z Faster rates for compressed federated learning with client-variance reduction ZHAO, Haoyu BURLACHENKO, Konstantin LI, Zhize RICHTARIK, Peter Due to the communication bottleneck in distributed and federated learning applications, algorithms using communication compression have attracted significant attention and are widely used in practice. Moreover, the huge number, high heterogeneity, and limited availability of clients result in high client variance. This paper addresses these two issues together by proposing compressed and client-variance-reduced methods COFIG and FRECON. We prove an O((1+\omega)^{3/2}\sqrt{N}/(S\epsilon^2) + (1+\omega)N^{2/3}/(S\epsilon^2)) bound on the number of communication rounds of COFIG in the nonconvex setting, where N is the total number of clients, S is the number of clients participating in each round, \epsilon is the convergence error, and \omega is the variance parameter associated with the compression operator. For FRECON, we prove an O((1+\omega)\sqrt{N}/(S\epsilon^2)) bound on the number of communication rounds. In the convex setting, COFIG converges within O((1+\omega)\sqrt{N}/(S\epsilon)) communication rounds, which, to the best of our knowledge, is also the first convergence result for compression schemes that do not communicate with all the clients in each round. We stress that neither COFIG nor FRECON needs to communicate with all the clients, and they enjoy the first or faster convergence results for convex and nonconvex federated learning in the regimes considered. Experimental results point to an empirical superiority of COFIG and FRECON over existing baselines. 2024-03-01T08:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/9607 info:doi/10.1137/23M1553820 https://ink.library.smu.edu.sg/context/sis_research/article/10607/viewcontent/SIMODS24_cofig_av.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University federated learning distributed optimization communication compression variance reduction Databases and Information Systems Theory and Algorithms
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic federated learning
distributed optimization
communication compression
variance reduction
Databases and Information Systems
Theory and Algorithms
spellingShingle federated learning
distributed optimization
communication compression
variance reduction
Databases and Information Systems
Theory and Algorithms
ZHAO, Haoyu
BURLACHENKO, Konstantin
LI, Zhize
RICHTARIK, Peter
Faster rates for compressed federated learning with client-variance reduction
description Due to the communication bottleneck in distributed and federated learning applications, algorithms using communication compression have attracted significant attention and are widely used in practice. Moreover, the huge number, high heterogeneity, and limited availability of clients result in high client variance. This paper addresses these two issues together by proposing compressed and client-variance-reduced methods COFIG and FRECON. We prove an O((1+\omega)^{3/2}\sqrt{N}/(S\epsilon^2) + (1+\omega)N^{2/3}/(S\epsilon^2)) bound on the number of communication rounds of COFIG in the nonconvex setting, where N is the total number of clients, S is the number of clients participating in each round, \epsilon is the convergence error, and \omega is the variance parameter associated with the compression operator. For FRECON, we prove an O((1+\omega)\sqrt{N}/(S\epsilon^2)) bound on the number of communication rounds. In the convex setting, COFIG converges within O((1+\omega)\sqrt{N}/(S\epsilon)) communication rounds, which, to the best of our knowledge, is also the first convergence result for compression schemes that do not communicate with all the clients in each round. We stress that neither COFIG nor FRECON needs to communicate with all the clients, and they enjoy the first or faster convergence results for convex and nonconvex federated learning in the regimes considered. Experimental results point to an empirical superiority of COFIG and FRECON over existing baselines.
format text
author ZHAO, Haoyu
BURLACHENKO, Konstantin
LI, Zhize
RICHTARIK, Peter
author_facet ZHAO, Haoyu
BURLACHENKO, Konstantin
LI, Zhize
RICHTARIK, Peter
author_sort ZHAO, Haoyu
title Faster rates for compressed federated learning with client-variance reduction
title_short Faster rates for compressed federated learning with client-variance reduction
title_full Faster rates for compressed federated learning with client-variance reduction
title_fullStr Faster rates for compressed federated learning with client-variance reduction
title_full_unstemmed Faster rates for compressed federated learning with client-variance reduction
title_sort faster rates for compressed federated learning with client-variance reduction
publisher Institutional Knowledge at Singapore Management University
publishDate 2024
url https://ink.library.smu.edu.sg/sis_research/9607
https://ink.library.smu.edu.sg/context/sis_research/article/10607/viewcontent/SIMODS24_cofig_av.pdf
_version_ 1816859159221501952