Robustness to Training Disturbances in SpikeProp Learning

Bibliographic Details
Main Authors: Shrestha, Sumit Bam; Song, Qing
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2020
Online Access: https://hdl.handle.net/10356/139881
Summary: Stability is a key issue when training spiking neural networks with SpikeProp. The inherent nonlinearity of spiking neurons means that the learning manifold changes abruptly; therefore, the learning step must be chosen carefully at every instance. Other sources of instability are the external disturbances that accompany the training samples, as well as the internal disturbances that arise from modeling imperfections. The unstable learning scenario can be observed indirectly in the form of surges, which are sudden increases in the learning cost and a common occurrence during SpikeProp training. Past research has shown that a proper learning step size is crucial to minimizing surges during training. To determine a proper learning step that avoids steep learning manifolds, we perform a weight convergence analysis of SpikeProp learning in the presence of disturbance signals. The weight convergence analysis is further extended to a robust stability analysis linked with the overall system error. This ensures boundedness of the total learning error under the minimal assumption of bounded disturbance signals. These analyses result in the learning rate normalization scheme, which is the key result of this paper. The performance of learning under this scheme has been compared with prevailing methods on different benchmark data sets, and the results show that this method achieves stable learning, reflected by minimal surges during training, a higher success rate across training instances, and faster learning as well.
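
The abstract describes the scheme only at a high level; the sketch below is a minimal illustration of the general idea of learning rate normalization, assuming a normalized-LMS-style rule in which the step size shrinks as the gradient grows. The names `normalized_step` and `spikeprop_update` and the constants `mu` and `c` are hypothetical and not taken from the paper.

```python
import numpy as np

def normalized_step(grad, mu=0.1, c=1e-8):
    # Hypothetical normalized-LMS-style rule: scale the raw rate by the
    # squared gradient norm so that steep regions of the learning manifold
    # receive smaller steps, damping surges in the learning cost.
    # `mu` and `c` are illustrative constants, not values from the paper.
    return mu / (c + np.dot(grad, grad))

def spikeprop_update(weights, grad):
    # One SpikeProp-style weight update with a normalized learning rate.
    # `grad` is the gradient of the learning cost with respect to
    # `weights` for the current training sample.
    eta = normalized_step(grad)
    return weights - eta * grad

# Usage: a large gradient (steep manifold) yields a proportionally
# smaller step, keeping the update bounded.
w = np.zeros(4)
g = np.array([2.0, -1.0, 0.5, 0.0])
w = spikeprop_update(w, g)
```

Under a rule of this form the effective step stays bounded no matter how large the gradient becomes, which mirrors the boundedness property the abstract attributes to the weight convergence analysis.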