A unified variance-reduced accelerated gradient method for convex optimization

We propose a novel randomized incremental gradient algorithm, namely, VAriance-Reduced Accelerated Gradient (Varag), for finite-sum optimization. Equipped with a unified step-size policy that adjusts itself to the value of the condition number, Varag exhibits unified optimal rates of convergence for solving smooth convex finite-sum problems directly, regardless of their strong convexity. Moreover, Varag is the first accelerated randomized incremental gradient method that benefits from the strong convexity of the data-fidelity term to achieve the optimal linear convergence. It also establishes an optimal linear rate of convergence for solving a wide class of problems satisfying only a certain error bound condition rather than strong convexity. Varag can also be extended to solve stochastic finite-sum problems.
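For readers outside the area, a minimal sketch of the variance-reduction idea the abstract builds on may help. The snippet below applies an SVRG-style gradient estimator to a synthetic least-squares instance of the finite-sum problem min_x f(x) = (1/n) sum_i f_i(x). It is not the authors' accelerated Varag method (which adds Nesterov-style extrapolation and the unified step-size policy); every name and constant in it (grad_i, full_grad, the step size lr) is an assumption for illustration only.

    import numpy as np

    # Synthetic least-squares instance of the finite-sum problem
    # f(x) = (1/n) * sum_i f_i(x), with f_i(x) = 0.5 * (a_i^T x - b_i)^2.
    rng = np.random.default_rng(0)
    n, d = 200, 10
    A = rng.standard_normal((n, d))
    b = rng.standard_normal(n)

    def grad_i(x, i):
        # Gradient of the i-th component f_i.
        return (A[i] @ x - b[i]) * A[i]

    def full_grad(x):
        # Exact gradient of f, computed once per epoch at the snapshot point.
        return A.T @ (A @ x - b) / n

    x = np.zeros(d)
    lr = 1e-3
    for epoch in range(30):
        x_tilde = x.copy()            # snapshot point
        g_tilde = full_grad(x_tilde)  # full gradient at the snapshot
        for _ in range(n):
            i = rng.integers(n)
            # Variance-reduced estimator: E[g] = grad f(x), and its variance
            # shrinks as x and x_tilde approach the minimizer.
            g = grad_i(x, i) - grad_i(x_tilde, i) + g_tilde
            x = x - lr * g

The point the abstract relies on is that the correction term grad_i(x_tilde, i) - g_tilde keeps the stochastic gradient unbiased while driving its variance to zero near the optimum, which is what lets methods in this family use non-decaying step sizes and attain accelerated rates.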


Bibliographic Details
Main Authors: LAN, Guanghui, LI, Zhize, ZHOU, Yi
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2019
Subjects: Databases and Information Systems
Online Access: https://ink.library.smu.edu.sg/sis_research/8678
https://ink.library.smu.edu.sg/context/sis_research/article/9681/viewcontent/NeurIPS_2019_a_unified_variance_reduced_accelerated_gradient_method_for_convex_optimization_Paper.pdf
id sg-smu-ink.sis_research-9681
record_format dspace
spelling sg-smu-ink.sis_research-9681 2024-03-28T09:06:42Z A unified variance-reduced accelerated gradient method for convex optimization LAN, Guanghui LI, Zhize ZHOU, Yi We propose a novel randomized incremental gradient algorithm, namely, VAriance-Reduced Accelerated Gradient (Varag), for finite-sum optimization. Equipped with a unified step-size policy that adjusts itself to the value of the condition number, Varag exhibits unified optimal rates of convergence for solving smooth convex finite-sum problems directly, regardless of their strong convexity. Moreover, Varag is the first accelerated randomized incremental gradient method that benefits from the strong convexity of the data-fidelity term to achieve the optimal linear convergence. It also establishes an optimal linear rate of convergence for solving a wide class of problems satisfying only a certain error bound condition rather than strong convexity. Varag can also be extended to solve stochastic finite-sum problems. 2019-12-01T08:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/8678 https://ink.library.smu.edu.sg/context/sis_research/article/9681/viewcontent/NeurIPS_2019_a_unified_variance_reduced_accelerated_gradient_method_for_convex_optimization_Paper.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Databases and Information Systems
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Databases and Information Systems
spellingShingle Databases and Information Systems
LAN, Guanghui
LI, Zhize
ZHOU, Yi
A unified variance-reduced accelerated gradient method for convex optimization
description We propose a novel randomized incremental gradient algorithm, namely, VAriance-Reduced Accelerated Gradient (Varag), for finite-sum optimization. Equipped with a unified step-size policy that adjusts itself to the value of the condition number, Varag exhibits unified optimal rates of convergence for solving smooth convex finite-sum problems directly, regardless of their strong convexity. Moreover, Varag is the first accelerated randomized incremental gradient method that benefits from the strong convexity of the data-fidelity term to achieve the optimal linear convergence. It also establishes an optimal linear rate of convergence for solving a wide class of problems satisfying only a certain error bound condition rather than strong convexity. Varag can also be extended to solve stochastic finite-sum problems.
format text
author LAN, Guanghui
LI, Zhize
ZHOU, Yi
author_facet LAN, Guanghui
LI, Zhize
ZHOU, Yi
author_sort LAN, Guanghui
title A unified variance-reduced accelerated gradient method for convex optimization
title_short A unified variance-reduced accelerated gradient method for convex optimization
title_full A unified variance-reduced accelerated gradient method for convex optimization
title_fullStr A unified variance-reduced accelerated gradient method for convex optimization
title_full_unstemmed A unified variance-reduced accelerated gradient method for convex optimization
title_sort unified variance-reduced accelerated gradient method for convex optimization
publisher Institutional Knowledge at Singapore Management University
publishDate 2019
url https://ink.library.smu.edu.sg/sis_research/8678
https://ink.library.smu.edu.sg/context/sis_research/article/9681/viewcontent/NeurIPS_2019_a_unified_variance_reduced_accelerated_gradient_method_for_convex_optimization_Paper.pdf
_version_ 1795302170043613184