Acceleration for compressed gradient descent in distributed and federated optimization
Due to the high communication cost in distributed and federated learning problems, methods relying on compression of communicated messages are becoming increasingly popular. While in other contexts the best performing gradient-type methods invariably rely on some form of acceleration/momentum to red...
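The abstract concerns gradient-type methods that compress communicated messages in distributed and federated learning. As a point of reference only, below is a minimal sketch of plain distributed gradient descent with an unbiased rand-k gradient compressor; this is not the paper's accelerated ADIANA method, and the quadratic objective, problem sizes, step size, and function names are assumptions chosen purely for illustration.

```python
# Illustrative sketch (NOT the paper's ADIANA method): distributed gradient descent
# on a synthetic least-squares problem where each worker compresses its gradient
# with an unbiased rand-k sparsifier before "communicating" it to the server.
# All problem sizes and the step size are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def rand_k(x, k):
    """Unbiased rand-k compressor: keep k random coordinates, scale by d/k."""
    d = x.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(x)
    out[idx] = (d / k) * x[idx]
    return out

# Synthetic distributed least-squares: worker i holds local data (A_i, b_i).
n_workers, d, k = 10, 50, 5
A = [rng.standard_normal((20, d)) for _ in range(n_workers)]
b = [rng.standard_normal(20) for _ in range(n_workers)]

def grad_i(i, x):
    # Gradient of 0.5 * ||A_i x - b_i||^2 at worker i.
    return A[i].T @ (A[i] @ x - b[i])

x = np.zeros(d)
step = 1e-3
for t in range(500):
    # Each worker sends a compressed gradient; the server averages them.
    g = np.mean([rand_k(grad_i(i, x), k) for i in range(n_workers)], axis=0)
    x -= step * g

obj = sum(0.5 * np.linalg.norm(A[i] @ x - b[i])**2 for i in range(n_workers))
print("final objective:", obj)
```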
Main Authors: LI, Zhize; KOVALEV, Dmitry; QIAN, Xun; RICHTARIK, Peter
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2020
Online Access: https://ink.library.smu.edu.sg/sis_research/8681 https://ink.library.smu.edu.sg/context/sis_research/article/9684/viewcontent/ICML20_full_adiana.pdf
Institution: Singapore Management University
Similar Items
- CANITA: Faster rates for distributed convex optimization with communication compression
  by: LI, Zhize, et al.
  Published: (2021)
- SSRGD: Simple Stochastic Recursive Gradient Descent for escaping saddle points
  by: LI, Zhize
  Published: (2019)
- PAGE: A simple and optimal probabilistic gradient estimator for nonconvex optimization
  by: LI, Zhize, et al.
  Published: (2021)
- MARINA: Faster non-convex distributed learning with compression
  by: GORBUNOV, Eduard, et al.
  Published: (2021)
- A unified variance-reduced accelerated gradient method for convex optimization
  by: LAN, Guanghui, et al.
  Published: (2019)