Adan: Adaptive Nesterov Momentum Algorithm for faster optimizing deep models

In deep learning, different kinds of deep networks typically need different optimizers, which have to be chosen after multiple trials, making the training process inefficient. To relieve this issue and consistently improve model training speed across deep networks, we propose the ADAptive Nesterov momentum algorithm, Adan for short. Adan first reformulates the vanilla Nesterov acceleration to develop a new Nesterov momentum estimation (NME) method, which avoids the extra overhead of computing the gradient at the extrapolation point. Then Adan adopts NME to estimate the gradient's first- and second-order moments in adaptive gradient algorithms to accelerate convergence. Besides, we prove that Adan finds an ϵ-approximate first-order stationary point within O(ϵ^{-3.5}) stochastic gradient complexity on non-convex stochastic problems (e.g., deep learning problems), matching the best-known lower bound. Extensive experimental results show that Adan consistently surpasses the corresponding SoTA optimizers on vision, language, and RL tasks and sets new SoTAs for many popular networks and frameworks, e.g., ResNet, ConvNeXt, ViT, Swin, MAE, DETR, GPT-2, Transformer-XL, and BERT. More surprisingly, Adan can use half the training cost (epochs) of SoTA optimizers to achieve higher or comparable performance on ViT, GPT-2, MAE, etc., and also shows great tolerance to a large range of minibatch sizes, e.g., from 1k to 32k. Code is released at https://github.com/sail-sg/Adan and has been used in multiple popular deep learning frameworks and projects.
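As a rough illustration of the update rule sketched in the abstract, below is a minimal NumPy version of one Adan step: the NME buffers track the gradient and the gradient difference between consecutive steps, so no gradient is ever evaluated at an extrapolated point. The function name, variable names, and default hyperparameters here are assumptions based on the paper's description, not the official API; the full-featured implementation lives in the linked GitHub repository.

import numpy as np

def adan_step(theta, g, g_prev, m, v, n, lr=1e-3,
              betas=(0.02, 0.08, 0.01), eps=1e-8, wd=0.0):
    """One Adan update (illustrative sketch, not the official code).

    theta   : parameters
    g       : current stochastic gradient
    g_prev  : gradient cached from the previous step (None on step 1)
    m, v, n : first-moment, gradient-difference, and second-moment buffers
    """
    b1, b2, b3 = betas
    diff = np.zeros_like(g) if g_prev is None else g - g_prev

    # Nesterov momentum estimation (NME): combine g and g - g_prev instead
    # of computing the gradient at an extrapolation point.
    m = (1 - b1) * m + b1 * g
    v = (1 - b2) * v + b2 * diff
    n = (1 - b3) * n + b3 * (g + (1 - b2) * diff) ** 2

    eta = lr / (np.sqrt(n) + eps)             # per-coordinate step size
    update = eta * (m + (1 - b2) * v)         # Nesterov-style direction
    theta = (theta - update) / (1 + lr * wd)  # decoupled weight decay
    return theta, m, v, n

In a training loop, g_prev is simply the gradient kept from the previous iteration; on the first step the gradient difference is taken to be zero.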


Bibliographic Details
Main Authors: XIE, Xingyu, ZHOU, Pan, LI, Huan, LIN, Zhouchen, YAN, Shuicheng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024-07-01
DOI: 10.1109/TPAMI.2024.3423382
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Collection: Research Collection School Of Computing and Information Systems, InK@SMU
Subjects: Adaptive optimizer; Complexity theory; Computer architecture; Convergence; Deep learning; DNN optimizer; Fast DNN training; Stochastic processes; Task analysis; Training; OS and Networks; Theory and Algorithms
Online Access:https://ink.library.smu.edu.sg/sis_research/9037
https://ink.library.smu.edu.sg/context/sis_research/article/10040/viewcontent/ADAN_sv.pdf
Institution: Singapore Management University