CANITA: Faster rates for distributed convex optimization with communication compression
Due to the high communication cost in distributed and federated learning, methods relying on compressed communication are becoming increasingly popular. Moreover, the best theoretically and practically performing gradient-type methods invariably rely on some form of acceleration/momentum to reduce the ...
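The abstract refers to compressed communication between workers and a server. As a minimal illustrative sketch (not the CANITA method itself), the snippet below shows one standard unbiased compressor, rand-k sparsification, applied to worker gradients before averaging; NumPy, the worker count, the dimension d = 10, and k = 3 are assumptions made purely for this example.

```python
import numpy as np

def rand_k(g, k, rng):
    # Rand-k sparsification: keep k random coordinates and rescale by d/k,
    # so the compressed message is unbiased: E[rand_k(g)] = g.
    d = g.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(g)
    out[idx] = g[idx] * (d / k)
    return out

# Hypothetical toy setup: 4 workers each compress a d = 10 gradient before
# sending it; the server averages the sparse messages it receives.
rng = np.random.default_rng(0)
worker_grads = [rng.standard_normal(10) for _ in range(4)]
server_estimate = np.mean([rand_k(g, k=3, rng=rng) for g in worker_grads], axis=0)
print(server_estimate)
```

Only k of the d coordinates are transmitted per worker per round, which is the communication saving that compression-based methods of this kind exploit.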
Main Authors: LI, Zhize; RICHTARIK, Peter
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
Online Access: https://ink.library.smu.edu.sg/sis_research/8684 https://ink.library.smu.edu.sg/context/sis_research/article/9687/viewcontent/NeurIPS21_full_canita.pdf
Institution: Singapore Management University
Similar Items
- MARINA: Faster non-convex distributed learning with compression
  by: GORBUNOV, Eduard, et al.
  Published: (2021)
- BEER: Fast O(1/T) rate for decentralized nonconvex optimization with communication compression
  by: ZHAO, Haoyu, et al.
  Published: (2022)
- Acceleration for compressed gradient descent in distributed and federated optimization
  by: LI, Zhize, et al.
  Published: (2020)
- PAGE: A simple and optimal probabilistic gradient estimator for nonconvex optimization
  by: LI, Zhize, et al.
  Published: (2021)
- A unified variance-reduced accelerated gradient method for convex optimization
  by: LAN, Guanghui, et al.
  Published: (2019)