MARINA: Faster non-convex distributed learning with compression

We develop and analyze MARINA: a new communication-efficient method for non-convex distributed learning over heterogeneous datasets. MARINA employs a novel communication compression strategy based on the compression of gradient differences that is reminiscent of, but different from, the strategy employed in the DIANA method of Mishchenko et al. (2019). Unlike virtually all competing distributed first-order methods, including DIANA, ours is based on a carefully designed biased gradient estimator, which is the key to its superior theoretical and practical performance. The communication complexity bounds we prove for MARINA are evidently better than those of all previous first-order methods. Further, we develop and analyze two variants of MARINA: VR-MARINA and PP-MARINA. The first is designed for the case when the local loss functions owned by clients are either of a finite-sum or of an expectation form, and the second allows for partial participation of clients, a feature important in federated learning. All our methods are superior to previous state-of-the-art methods in terms of oracle/communication complexity. Finally, we provide a convergence analysis of all methods for problems satisfying the Polyak-Lojasiewicz condition.
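
The key mechanism described above is that, in most communication rounds, each client sends only a compressed difference of its local gradients at two consecutive iterates, and only occasionally (with a small probability) do all clients send full, uncompressed gradients. The sketch below illustrates one such step under stated assumptions: the rand-k sparsifier, the NumPy-based setup, and the names rand_k_compress and marina_step are illustrative choices, not the paper's reference implementation.

import numpy as np

def rand_k_compress(v, k, rng):
    # Illustrative unbiased rand-k sparsifier: keep k random coordinates
    # of v and rescale by d/k (a stand-in for any unbiased compressor Q).
    d = v.shape[0]
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(v)
    out[idx] = v[idx] * (d / k)
    return out

def marina_step(x, g, local_grads, gamma, p, k, rng):
    # One MARINA-style iteration (sketch).
    #   x           -- current iterate
    #   g           -- current aggregated gradient estimator
    #   local_grads -- list of callables; local_grads[i](x) returns grad of f_i at x
    #   gamma       -- step size
    #   p           -- probability of a full (uncompressed) synchronization round
    #   k           -- sparsity level of the illustrative rand-k compressor
    x_new = x - gamma * g
    if rng.random() < p:
        # Rare round: clients communicate full local gradients.
        g_new = np.mean([grad(x_new) for grad in local_grads], axis=0)
    else:
        # Typical round: clients communicate compressed gradient *differences*.
        diffs = [rand_k_compress(grad(x_new) - grad(x), k, rng)
                 for grad in local_grads]
        g_new = g + np.mean(diffs, axis=0)
    return x_new, g_new

# Illustrative usage on synthetic quadratic local losses f_i(x) = 0.5*||A_i x - b_i||^2.
rng = np.random.default_rng(0)
A_list = [rng.standard_normal((20, 10)) for _ in range(4)]
b_list = [rng.standard_normal(20) for _ in range(4)]
local_grads = [lambda x, A=A, b=b: A.T @ (A @ x - b) for A, b in zip(A_list, b_list)]
x = np.zeros(10)
g = np.mean([grad(x) for grad in local_grads], axis=0)
for _ in range(200):
    x, g = marina_step(x, g, local_grads, gamma=0.005, p=0.1, k=2, rng=rng)

Because the estimator g accumulates compressed differences rather than compressing gradients directly, it is biased, which corresponds to the carefully designed biased gradient estimator highlighted in the abstract.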


Bibliographic Details
Main Authors: GORBUNOV, Eduard, BURLACHENKO, Konstantin, LI, Zhize, RICHTARIK, Peter
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2021
Subjects: Databases and Information Systems
Online Access:https://ink.library.smu.edu.sg/sis_research/8682
https://ink.library.smu.edu.sg/context/sis_research/article/9685/viewcontent/ICML21_full_marina.pdf
Institution: Singapore Management University