MARINA: Faster non-convex distributed learning with compression
We develop and analyze MARINA: a new communication-efficient method for non-convex distributed learning over heterogeneous datasets. MARINA employs a novel communication compression strategy based on the compression of gradient differences that is reminiscent of but different from the strategy emplo...
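To make the "compression of gradient differences" idea in the abstract concrete, below is a minimal single-process sketch of a MARINA-style step. It is not the authors' reference implementation: the Rand-k compressor, the synthetic quadratic local objectives, and the parameter values (`gamma`, `p`, `k`) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_k(v, k):
    """Rand-k sparsifier: keep k random coordinates, rescaled to stay unbiased."""
    d = v.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(v)
    out[idx] = v[idx] * (d / k)
    return out

# Illustrative setup: n workers, each holding a local quadratic
# f_i(x) = 0.5 * ||A_i x - b_i||^2 (a stand-in for heterogeneous data).
n, d, k = 4, 20, 4
A = [rng.standard_normal((30, d)) for _ in range(n)]
b = [rng.standard_normal(30) for _ in range(n)]
grad_i = lambda i, x: A[i].T @ (A[i] @ x - b[i])
grad = lambda x: sum(grad_i(i, x) for i in range(n)) / n

gamma, p = 1e-3, 0.1          # step size and full-sync probability (illustrative)
x = np.zeros(d)
g = grad(x)                   # g^0: one initial full-gradient round

for step in range(200):
    x_new = x - gamma * g     # x^{k+1} = x^k - gamma * g^k
    if rng.random() < p:
        g = grad(x_new)       # rare uncompressed synchronization round
    else:
        # each worker compresses only its gradient *difference*,
        # and the aggregate corrects the running estimate g
        g = g + sum(rand_k(grad_i(i, x_new) - grad_i(i, x), k)
                    for i in range(n)) / n
    x = x_new

print("final grad norm:", np.linalg.norm(grad(x)))
```

Compressing differences rather than raw gradients keeps the estimator accurate as the iterates stabilize, while the small probability `p` of a full-gradient round bounds the accumulated compression error.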
Main Authors: GORBUNOV, Eduard; BURLACHENKO, Konstantin; LI, Zhize; RICHTARIK, Peter
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
Online Access: https://ink.library.smu.edu.sg/sis_research/8682
https://ink.library.smu.edu.sg/context/sis_research/article/9685/viewcontent/ICML21_full_marina.pdf
Institution: Singapore Management University
Similar Items
- CANITA: Faster rates for distributed convex optimization with communication compression
  by: LI, Zhize, et al.
  Published: (2021)
- Faster rates for compressed federated learning with client-variance reduction
  by: ZHAO, Haoyu, et al.
  Published: (2024)
- Acceleration for compressed gradient descent in distributed and federated optimization
  by: LI, Zhize, et al.
  Published: (2020)
- 3PC: Three point compressors for communication-efficient distributed training and a better theory for lazy aggregation
  by: RICHTARIK, Peter, et al.
  Published: (2022)
- BEER: Fast O(1/T) rate for decentralized nonconvex optimization with communication compression
  by: ZHAO, Haoyu, et al.
  Published: (2022)