Accumulated decoupled learning with gradient staleness mitigation for convolutional neural networks
Gradient staleness is a major side effect of decoupled learning when convolutional neural networks are trained asynchronously. Existing methods that ignore this effect can suffer reduced generalization and even divergence. In this paper, we propose an accumulated decoupled learning (ADL), wh...
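The staleness problem the abstract describes can be illustrated with a toy sketch: in asynchronous decoupled training, each update is computed from parameters that are several steps old, and accumulating the delayed gradients before applying them is one way to smooth out the staleness. The function below is a hypothetical illustration on a 1-D quadratic loss, not the authors' ADL implementation; the delay-queue and averaging scheme are assumptions for demonstration only.

```python
# Toy illustration of delayed-gradient training (NOT the paper's ADL method).
# Loss: L(w) = 0.5 * (w - 3)^2, so grad(w) = w - 3 and the minimizer is w = 3.

def grad(w):
    return w - 3.0

def delayed_sgd(steps=200, lr=0.1, delay=4, accumulate=True):
    """SGD where each gradient is computed from parameters `delay` steps old.

    With accumulate=True, the last `delay` stale gradients are averaged and
    applied in one update (an accumulation-style mitigation); with
    accumulate=False, every stale gradient is applied immediately.
    """
    w = 0.0
    snapshots = [w] * (delay + 1)   # queue of past parameter values
    buffer = []                     # stale gradients awaiting accumulation
    for _ in range(steps):
        g = grad(snapshots[0])      # gradient from parameters `delay` steps old
        if accumulate:
            buffer.append(g)
            if len(buffer) == delay:            # apply one averaged update
                w -= lr * sum(buffer) / len(buffer)
                buffer.clear()
        else:
            w -= lr * g                         # naive stale update each step
        snapshots = snapshots[1:] + [w]         # advance the snapshot queue
    return w

print(delayed_sgd(accumulate=True), delayed_sgd(accumulate=False))
```

Both variants converge toward the minimizer for this mild setting; the point of the sketch is only that gradients are evaluated at stale parameters, which is the effect ADL is designed to mitigate.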
Main Authors: Zhuang, Huiping; Weng, Zhenyu; Luo, Fulin; Toh, Kar-Ann; Li, Haizhou; Lin, Zhiping
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/174480
https://icml.cc/virtual/2021/index.html
Institution: Nanyang Technological University
Similar Items
- Fully decoupled neural network learning using delayed gradients
  by: Zhuang, Huiping, et al.
  Published: (2024)
- Gradient boosted graph convolutional network on heterophilic graph
  by: Seah, Ming Yang
  Published: (2024)
- Attention multihop graph and multiscale convolutional fusion network for hyperspectral image classification
  by: Zhou, Hao, et al.
  Published: (2023)
- Decoupled neural network training with re-computation and weight prediction
  by: Peng, Jiawei, et al.
  Published: (2023)
- Convolutional Networks for Voting-based Anomaly Classification in Metal Surface Inspection
  by: Natarajan, Vidhya, et al.
  Published: (2017)