Win: Weight-decay-integrated Nesterov acceleration for adaptive gradient algorithms
Training deep networks on large-scale datasets is computationally challenging. In this work, we explore the problem of how to accelerate adaptive gradient algorithms in a general manner, and aim to provide practical, efficiency-boosting insights. To this end, we propose an effective and general Wei...
Saved in:
Main Authors: ZHOU, Pan; XIE, Xingyu; YAN, Shuicheng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Online Access: https://ink.library.smu.edu.sg/sis_research/9056
https://ink.library.smu.edu.sg/context/sis_research/article/10059/viewcontent/653_win_weight_decay_integrated_ICLR.pdf
Institution: Singapore Management University
Similar Items
- Win: Weight-decay-integrated Nesterov acceleration for faster network training
  by: ZHOU, Pan, et al.
  Published: (2024)
- Adan: Adaptive Nesterov Momentum Algorithm for faster optimizing deep models
  by: XIE, Xingyu, et al.
  Published: (2024)
- Network traffic classification based on deep learning
  by: Cheng, Li
  Published: (2023)
- Audee: Automated testing for deep learning frameworks
  by: GUO, Qianyu, et al.
  Published: (2020)
- Towards concise representation learning on deep neural networks
  by: YUAN LI
  Published: (2021)