Fully decoupled neural network learning using delayed gradients
Training neural networks with back-propagation (BP) requires a sequential passing of activations and gradients. This dependency has been recognized as the locking problem (i.e., the forward, backward, and update lockings) among modules (each module containing a stack of layers), inherited from BP. In this paper,...
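The abstract is truncated, but the general idea behind training with delayed gradients can be illustrated. The sketch below is a hedged, minimal example and not the paper's actual algorithm or API: the two-module split, the one-step delay `K`, and names such as `inbox` and `grad_inbox` are all assumptions made for illustration. Each module updates as soon as data is available, so the first module never waits for the second module's backward pass.

```python
# Minimal sketch of decoupled training with delayed gradients.
# Illustrative only; the split, delay, and all names are assumptions,
# not the paper's method.
import torch
import torch.nn as nn
from collections import deque

torch.manual_seed(0)

# Split the network into two modules, each with its own optimizer.
module1 = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
module2 = nn.Sequential(nn.Linear(32, 1))
opt1 = torch.optim.SGD(module1.parameters(), lr=0.01)
opt2 = torch.optim.SGD(module2.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

K = 1                 # assumed gradient delay (in steps) between modules
inbox = deque()       # activations travelling module1 -> module2
grad_inbox = deque()  # delayed gradients travelling module2 -> module1

for step in range(100):
    x = torch.randn(8, 10)
    y = torch.randn(8, 1)

    # Module1 runs its forward pass and ships the activation onward
    # without waiting for module2's backward pass (no backward locking).
    h = module1(x)
    inbox.append((x, y, h.detach()))

    # After K steps the activation "arrives"; module2 computes its loss,
    # updates itself, and returns the input gradient for module1.
    if len(inbox) > K:
        x_old, y_old, h_old = inbox.popleft()
        h_old.requires_grad_(True)
        loss = loss_fn(module2(h_old), y_old)
        opt2.zero_grad()
        loss.backward()
        opt2.step()
        grad_inbox.append((x_old, h_old.grad))

    # Module1 updates with the stale gradient by replaying its forward
    # pass on the stored input and back-propagating the delayed gradient.
    if grad_inbox:
        x_old, g = grad_inbox.popleft()
        opt1.zero_grad()
        module1(x_old).backward(g)
        opt1.step()
```

The price of breaking the lockings is gradient staleness: module1's update at a given step uses a gradient computed against activations (and, implicitly, weights) from K steps earlier, a mismatch that any delayed-gradient scheme must keep small enough for training to remain stable.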
Main Authors:
Other Authors:
Format: Article
Language: English
Published: 2024
Subjects:
Online Access: https://hdl.handle.net/10356/174476