Accumulated decoupled learning with gradient staleness mitigation for convolutional neural networks

Gradient staleness is a major side effect of decoupled learning when training convolutional neural networks asynchronously. Existing methods that ignore this effect can suffer reduced generalization and even divergence. In this paper, we propose an accumulated decoupled learning (ADL), wh...

Full description

Bibliographic Details
Main Authors: Zhuang, Huiping, Weng, Zhenyu, Luo, Fulin, Toh, Kar-Ann, Li, Haizhou, Lin, Zhiping
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/174480
https://icml.cc/virtual/2021/index.html
Institution: Nanyang Technological University