Faster rates for compressed federated learning with client-variance reduction
Due to the communication bottleneck in distributed and federated learning applications, algorithms using communication compression have attracted significant attention and are widely used in practice. Moreover, the huge number, high heterogeneity, and limited availability of clients result in high c...
Saved in:
Main Authors: ZHAO, Haoyu; BURLACHENKO, Konstantin; LI, Zhize; RICHTARIK, Peter
Format: text
Language: English
Published in: Institutional Knowledge at Singapore Management University, 2024
Online Access: https://ink.library.smu.edu.sg/sis_research/9607 https://ink.library.smu.edu.sg/context/sis_research/article/10607/viewcontent/SIMODS24_cofig_av.pdf
Institution: Singapore Management University
Similar Items

- MARINA: Faster non-convex distributed learning with compression
  by: GORBUNOV, Eduard, et al.
  Published: (2021)
- Escaping saddle points in heterogeneous federated learning via distributed SGD with communication compression
  by: CHEN, Sijin, et al.
  Published: (2024)
- Stochastic gradient Hamiltonian Monte Carlo with variance reduction for Bayesian inference
  by: LI, Zhize, et al.
  Published: (2019)
- Reduction of microwave amplifier gain variance
  by: Eccleston, K.W., et al.
  Published: (2014)
- Simple and optimal stochastic gradient methods for nonsmooth nonconvex optimization
  by: LI, Zhize, et al.
  Published: (2022)