Accumulated decoupled learning with gradient staleness mitigation for convolutional neural networks
Gradient staleness is a major side effect in decoupled learning when training convolutional neural networks asynchronously. Existing methods that ignore this effect might result in reduced generalization and even divergence. In this paper, we propose accumulated decoupled learning (ADL), which includes a module-wise gradient accumulation in order to mitigate the gradient staleness...
Main Authors: | Zhuang, Huiping; Weng, Zhenyu; Luo, Fulin; Toh, Kar-Ann; Li, Haizhou; Lin, Zhiping |
---|---|
Other Authors: | School of Electrical and Electronic Engineering |
Format: | Conference or Workshop Item |
Language: | English |
Published: | 2024 |
Subjects: | Computer and Information Science; Convolutional neural networks; Delayed gradients-based methods |
Online Access: | https://hdl.handle.net/10356/174480 https://icml.cc/virtual/2021/index.html |
Institution: | Nanyang Technological University |
Language: | English |
id |
sg-ntu-dr.10356-174480 |
record_format |
dspace |
spelling |
sg-ntu-dr.10356-174480 2024-04-05T15:40:28Z
Accumulated decoupled learning with gradient staleness mitigation for convolutional neural networks
Zhuang, Huiping; Weng, Zhenyu; Luo, Fulin; Toh, Kar-Ann; Li, Haizhou; Lin, Zhiping
School of Electrical and Electronic Engineering
38th International Conference on Machine Learning (ICML 2021)
Computer and Information Science; Convolutional neural networks; Delayed gradients-based methods
Gradient staleness is a major side effect in decoupled learning when training convolutional neural networks asynchronously. Existing methods that ignore this effect might result in reduced generalization and even divergence. In this paper, we propose accumulated decoupled learning (ADL), which includes a module-wise gradient accumulation in order to mitigate the gradient staleness. Unlike prior works that ignore the gradient staleness, we quantify the staleness in such a way that its mitigation can be quantitatively visualized. As a new learning scheme, the proposed ADL is theoretically shown to converge to critical points in spite of its asynchronism. Extensive experiments on CIFAR-10 and ImageNet datasets are conducted, demonstrating that ADL gives promising generalization results while the state-of-the-art methods experience reduced generalization and divergence. In addition, our ADL is shown to have the fastest training speed among the compared methods. The code will be ready soon at https://github.com/ZHUANGHP/Accumulated-Decoupled-Learning.git.
Agency for Science, Technology and Research (A*STAR). Published version. This work was supported in part by the Science and Engineering Research Council, Agency for Science, Technology and Research, Singapore, through the National Robotics Program under Grant 1922500054.
2024-04-01T08:56:14Z 2024-04-01T08:56:14Z 2021 Conference Paper
Zhuang, H., Weng, Z., Luo, F., Toh, K., Li, H. & Lin, Z. (2021). Accumulated decoupled learning with gradient staleness mitigation for convolutional neural networks. 38th International Conference on Machine Learning (ICML 2021), PMLR 139. https://hdl.handle.net/10356/174480 https://icml.cc/virtual/2021/index.html
PMLR 139 en NRP-1922500054
© 2022 The authors and PMLR. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at https://proceedings.mlr.press/v139/. application/pdf |
institution |
Nanyang Technological University |
building |
NTU Library |
continent |
Asia |
country |
Singapore |
content_provider |
NTU Library |
collection |
DR-NTU |
language |
English |
topic |
Computer and Information Science; Convolutional neural networks; Delayed gradients-based methods |
spellingShingle |
Computer and Information Science; Convolutional neural networks; Delayed gradients-based methods; Zhuang, Huiping; Weng, Zhenyu; Luo, Fulin; Toh, Kar-Ann; Li, Haizhou; Lin, Zhiping; Accumulated decoupled learning with gradient staleness mitigation for convolutional neural networks |
description |
Gradient staleness is a major side effect in decoupled learning when training convolutional neural networks asynchronously. Existing methods that ignore this effect might result in reduced generalization and even divergence. In this paper, we propose accumulated decoupled learning (ADL), which includes a module-wise gradient accumulation in order to mitigate the gradient staleness. Unlike prior works that ignore the gradient staleness, we quantify the staleness in such a way that its mitigation can be quantitatively visualized. As a new learning scheme, the proposed ADL is theoretically shown to converge to critical points in spite of its asynchronism. Extensive experiments on CIFAR-10 and ImageNet datasets are conducted, demonstrating that ADL gives promising generalization results while the state-of-the-art methods experience reduced generalization and divergence. In addition, our ADL is shown to have the fastest training speed among the compared methods. The code will be ready soon at https://github.com/ZHUANGHP/Accumulated-Decoupled-Learning.git. |
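As an illustration of the idea described in the abstract, the following Python (PyTorch) sketch shows module-wise gradient accumulation at a decoupling boundary: each module sums gradients over several micro-batches before applying a single update, which is the mechanism ADL uses to dampen gradient staleness. This is not the authors' implementation (that is in the linked GitHub repository); the modules here run sequentially rather than asynchronously, and the names `modules`, `optimizers`, `accum_steps`, and `train_step` are illustrative assumptions.

```python
# Illustrative sketch only: module-wise gradient accumulation across a decoupling
# boundary. Not the authors' reference implementation (see the linked repository).
import torch
import torch.nn as nn

# A toy network split into two decoupled modules.
modules = nn.ModuleList([
    nn.Sequential(nn.Linear(32, 64), nn.ReLU()),
    nn.Sequential(nn.Linear(64, 10)),
])
optimizers = [torch.optim.SGD(m.parameters(), lr=0.1) for m in modules]
loss_fn = nn.CrossEntropyLoss()
accum_steps = 4  # micro-batches whose gradients are accumulated per module update


def train_step(batches):
    """Accumulate gradients over `accum_steps` micro-batches, then update once.

    In true decoupled learning each module would run asynchronously on delayed
    activations/gradients; here the modules run sequentially for clarity, and only
    the accumulation mechanism is illustrated.
    """
    for opt in optimizers:
        opt.zero_grad()
    for x, y in batches:
        h = modules[0](x)
        # Detach to mimic the decoupling boundary between the two modules.
        h_boundary = h.detach().requires_grad_(True)
        logits = modules[1](h_boundary)
        loss = loss_fn(logits, y) / accum_steps
        loss.backward()                 # gradients accumulate in module 2
        # Pass the (possibly stale) boundary gradient back to module 1.
        h.backward(h_boundary.grad)     # gradients accumulate in module 1
    for opt in optimizers:              # one update per accumulated window
        opt.step()


# Usage: one accumulated update over 4 random micro-batches.
batches = [(torch.randn(8, 32), torch.randint(0, 10, (8,))) for _ in range(accum_steps)]
train_step(batches)
```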
author2 |
School of Electrical and Electronic Engineering |
author_facet |
School of Electrical and Electronic Engineering; Zhuang, Huiping; Weng, Zhenyu; Luo, Fulin; Toh, Kar-Ann; Li, Haizhou; Lin, Zhiping |
format |
Conference or Workshop Item |
author |
Zhuang, Huiping; Weng, Zhenyu; Luo, Fulin; Toh, Kar-Ann; Li, Haizhou; Lin, Zhiping |
author_sort |
Zhuang, Huiping |
title |
Accumulated decoupled learning with gradient staleness mitigation for convolutional neural networks |
title_short |
Accumulated decoupled learning with gradient staleness mitigation for convolutional neural networks |
title_full |
Accumulated decoupled learning with gradient staleness mitigation for convolutional neural networks |
title_fullStr |
Accumulated decoupled learning with gradient staleness mitigation for convolutional neural networks |
title_full_unstemmed |
Accumulated decoupled learning with gradient staleness mitigation for convolutional neural networks |
title_sort |
accumulated decoupled learning with gradient staleness mitigation for convolutional neural networks |
publishDate |
2024 |
url |
https://hdl.handle.net/10356/174480 https://icml.cc/virtual/2021/index.html |
_version_ |
1814047300698243072 |