CANITA: Faster rates for distributed convex optimization with communication compression

Due to the high communication cost in distributed and federated learning, methods relying on compressed communication are becoming increasingly popular. Besides, the best theoretically and practically performing gradient-type methods invariably rely on some form of acceleration/momentum to reduce th...
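The abstract refers to communication compression, in which workers send compressed (e.g., sparsified or quantized) gradient estimates to reduce bandwidth. As a rough illustration only, and not the paper's actual CANITA method, the sketch below implements a standard unbiased random-k sparsification compressor; the function name and the use of NumPy are assumptions for this example.

```python
import numpy as np

def rand_k_compress(x, k, rng):
    """Unbiased random-k sparsification: keep k random coordinates
    and rescale by d/k so that E[C(x)] = x (illustrative sketch)."""
    d = x.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(x)
    out[idx] = x[idx] * (d / k)  # rescaling preserves unbiasedness
    return out

rng = np.random.default_rng(0)
x = np.arange(6, dtype=float)
# Averaging many compressed copies should approach x (unbiasedness check)
est = np.mean([rand_k_compress(x, 2, rng) for _ in range(20000)], axis=0)
```

Compressors of this unbiased type are the standard setting in the compressed distributed optimization literature the abstract situates itself in; the trade-off is that keeping only k of d coordinates cuts communication by a factor of d/k at the cost of extra variance.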

Bibliographic Details
Main Authors: LI, Zhize, RICHTARIK, Peter
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2021
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/8684
https://ink.library.smu.edu.sg/context/sis_research/article/9687/viewcontent/NeurIPS21_full_canita.pdf
Institution: Singapore Management University
