Robustness to training disturbances in SpikeProp Learning

Stability is a key issue when training spiking neural networks with SpikeProp. The inherent nonlinearity of spiking neurons means that the learning manifold changes abruptly; the learning step must therefore be chosen carefully at every instance. Other sources of instability are the external disturbances that accompany the training samples and the internal disturbances that arise from modeling imperfections. Unstable learning can be observed indirectly in the form of surges: sudden increases in the learning cost that are a common occurrence during SpikeProp training. Past research has shown that a proper learning step size is crucial to minimizing surges during training. To determine a proper learning step that avoids steep learning manifolds, we perform a weight convergence analysis of SpikeProp learning in the presence of disturbance signals. The weight convergence analysis is further extended to a robust stability analysis linked with the overall system error, which ensures boundedness of the total learning error under the minimal assumption of bounded disturbance signals. These analyses yield a learning rate normalization scheme, which is the key result of this paper. The performance of learning with this scheme has been compared with prevailing methods on different benchmark data sets; the results show stable learning reflected by minimal surges during training, a higher success rate across training instances, and faster learning as well.
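To illustrate the general idea behind learning rate normalization, the sketch below scales a gradient step by the inverse of the squared gradient norm, in the spirit of normalized LMS adaptive filtering. This is a hypothetical, generic example only; the paper's actual scheme is derived from its weight convergence analysis of SpikeProp and differs in detail. The function name, the toy quadratic cost, and all parameter values are assumptions for illustration.

```python
import numpy as np

def normalized_step(w, grad, base_lr=0.5, eps=1.0):
    """One gradient step with the learning rate normalized by the
    squared gradient norm (NLMS-style sketch, not the paper's scheme).
    Steep regions of the error surface get proportionally smaller steps,
    which is how normalization suppresses surges in the learning cost."""
    lr = base_lr / (eps + np.dot(grad, grad))
    return w - lr * grad

# Toy quadratic cost E(w) = 0.5 * ||w||^2, whose gradient is w itself.
w = np.array([4.0, -2.0])
for _ in range(50):
    w = normalized_step(w, w)
print(np.linalg.norm(w))  # the weight norm shrinks monotonically toward zero
```

The `eps` term keeps the effective step bounded when the gradient is small, mirroring the bounded-disturbance assumption under which the paper proves boundedness of the total learning error.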


Bibliographic Details
Main Authors: Shrestha, Sumit Bam, Song, Qing
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2020
Subjects: Engineering::Electrical and electronic engineering; Adaptive Learning Rate; Error Analysis
Online Access:https://hdl.handle.net/10356/139881
Institution: Nanyang Technological University
Citation: Shrestha, S. B., & Song, Q. (2018). Robustness to training disturbances in SpikeProp learning. IEEE Transactions on Neural Networks and Learning Systems, 29(7), 3126-3139. doi:10.1109/TNNLS.2017.2713125
ISSN: 2162-237X
© 2017 IEEE. All rights reserved.