Accumulated decoupled learning with gradient staleness mitigation for convolutional neural networks
Gradient staleness is a major side effect in decoupled learning when training convolutional neural networks asynchronously. Existing methods that ignore this effect might result in reduced generalization and even divergence. In this paper, we propose an accumulated decoupled learning (ADL), wh...
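The staleness effect described in the abstract can be illustrated with a toy numeric sketch. This is not the paper's ADL implementation; it only contrasts plain delayed-gradient updates (each step uses a gradient computed from weights several iterations old, as happens in asynchronous decoupled training) with a simple accumulation scheme (gradients are accumulated over a block of micro-steps against frozen weights and applied once, so no stale weights enter the update). All function names and the quadratic objective are illustrative choices.

```python
def grad(w):
    """Gradient of the toy objective f(w) = (w - 3)^2."""
    return 2.0 * (w - 3.0)

def stale_sgd(steps, lr, staleness):
    """SGD where each update uses a gradient evaluated on weights
    that are `staleness` iterations old (delayed-gradient update)."""
    w = 0.0
    history = [w] * staleness      # weight snapshots seen with delay
    for _ in range(steps):
        g = grad(history[0])       # stale gradient
        history = history[1:] + [w]
        w -= lr * g
    return w

def accumulated_sgd(steps, lr, block):
    """Accumulate gradients over `block` micro-steps against frozen
    weights, then apply the averaged gradient once; within a block
    the weights used for the gradient are never stale."""
    w = 0.0
    acc, n = 0.0, 0
    for _ in range(steps):
        acc += grad(w)             # w is frozen until the block ends
        n += 1
        if n == block:
            w -= lr * (acc / n)
            acc, n = 0.0, 0
    return w
```

With a small learning rate both variants converge to the minimizer `w = 3`, but as the learning rate grows, the delayed-gradient recursion becomes unstable while the accumulated update stays a plain (if less frequent) gradient step, which is the qualitative gap the paper's staleness mitigation targets.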
Main Authors: Zhuang, Huiping; Weng, Zhenyu; Luo, Fulin; Toh, Kar-Ann; Li, Haizhou; Lin, Zhiping
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/174480 ; https://icml.cc/virtual/2021/index.html
Institution: Nanyang Technological University
Similar Items
- Fully decoupled neural network learning using delayed gradients
  by: Zhuang, Huiping, et al.
  Published: (2024)
- Gradient boosted graph convolutional network on heterophilic graph
  by: Seah, Ming Yang
  Published: (2024)
- Attention multihop graph and multiscale convolutional fusion network for hyperspectral image classification
  by: Zhou, Hao, et al.
  Published: (2023)
- Decoupled neural network training with re-computation and weight prediction
  by: Peng, Jiawei, et al.
  Published: (2023)
- Convolutional Networks for Voting-based Anomaly Classification in Metal Surface Inspection
  by: Natarajan, Vidhya, et al.
  Published: (2017)