Evolving spiking neural networks for pattern classification problems


Bibliographic Details
Main Author: Shirin Dora
Other Authors: Sundaram Suresh
Format: Theses and Dissertations
Language: English
Published: 2017
Subjects:
Online Access: http://hdl.handle.net/10356/69608
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-69608
record_format dspace
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic DRNTU::Engineering::Computer science and engineering::Information systems
spellingShingle DRNTU::Engineering::Computer science and engineering::Information systems
Shirin Dora
Evolving spiking neural networks for pattern classification problems
description This thesis focuses on the development of new batch/online learning algorithms for evolving spiking neural networks that can be used for pattern classification problems. The input and output signals of spiking neurons consist of discrete events (spikes) in time. The inherent discontinuous nature of spikes is an issue in developing learning algorithms for spiking neurons. This has inspired many researchers to study plasticity mechanisms observed in the brain in order to develop efficient learning techniques for spiking neurons. Spike Timing Dependent Plasticity (STDP) is one of the most studied biological plasticity mechanisms. STDP relies only on locally available information to update the weights of a given synapse. The local nature of STDP may result in an unbalanced distribution of weights and lead to convergence issues. As with previous generations of neural networks, selecting an appropriate spiking neural network architecture for approximating the relationship between a set of input and output spike patterns is a challenging problem. To address this issue, the rank order learning based evolving Spiking Neural Network (eSNN) has been proposed. Rank order learning takes global information into account but ignores the precise timing of spikes. Further, eSNN uses a two-layered network to approximate the decision boundary, which may require a larger number of neurons to approximate the relationship between the input and output spike patterns. This thesis is directed towards three main problems in the development of learning algorithms for spiking neural networks, namely utilizing both local and global information, evolving the network architecture, and learning in an online framework, specifically for pattern classification problems.

The first contribution of this work is the development of a Self-Regulating Evolving Spiking Neural (SRESN) classifier with a two-layered network. The SRESN classifier operates in a batch learning framework and uses heuristic learning strategies to evolve the network architecture and simultaneously update the synaptic weights. Depending on the information present in a sample with respect to the knowledge stored in the network, it chooses to either add a neuron, update the network parameters, or skip learning the sample. The SRESN classifier uses rank order learning in a feature-wise manner to initialize the weights of a newly added neuron and to update the weights of existing neurons. This helps the SRESN classifier achieve better generalization performance and faster convergence.

The second contribution of this work is a Two-stage Margin Maximization Spiking Neural Network (TMM-SNN) that employs a three-layered SNN. The learning algorithm of TMM-SNN has two stages, namely a structure learning stage and an output weights learning stage. In the first stage (structure learning stage), the learning algorithm evolves the hidden layer completely in the first epoch and then updates the weights of the hidden neurons using a margin maximization based update rule over multiple epochs. In the first epoch, each new neuron is added such that it spikes at a specific time. For this purpose, an activation based coding scheme is developed that uses locally available information to initialize the weights of a new neuron. At the end of the first stage, the learning algorithm fixes the synaptic weights and thresholds of the hidden neurons. In the second stage (output weights learning stage), the learning algorithm updates the output neuron weights such that the temporal separation between the spike times of interclass and intraclass neurons is maximized. The performance of TMM-SNN is statistically compared with other existing learning algorithms for SNNs on benchmark problems, using training/testing accuracy, the number of epochs and the number of network parameters. The results of this evaluation clearly indicate that TMM-SNN achieves better performance using fewer epochs. However, the local update strategies in TMM-SNN do not take into account the global information stored in the network while updating the weights of a given synapse. As a result, TMM-SNN requires multiple presentations of the training spike patterns to closely approximate the relationship between input spike patterns and the corresponding class labels.

The third contribution of this work is the development of a new meta-neuron concept for a two-layered SNN that learns from a single presentation (online learning) of the input spike patterns. The concept of the meta-neuron is inspired by the role of astrocytes in modulating synaptic plasticity in the brain. Astrocytes can connect to multiple synapses simultaneously, intercept the activity on the connected synapses and modulate their plasticity. This form of heterosynaptic plasticity allows both the global information stored in the network and the local information present in the input spike patterns to be considered when updating the weights of a given synapse. The meta-neuron can intercept the activities of the presynaptic neurons and can access the weights of existing synapses in the network. This allows it to utilize the locally available information in the activity of a presynaptic neuron together with the globally available information in the form of synaptic weights. A meta-neuron based learning rule is developed that utilizes both local and global information to produce precise shifts in the spike times of the postsynaptic neurons, which renders it suitable for use in an online learning framework. To demonstrate this capability, an Online Meta-neuron based Learning Algorithm (OMLA) is developed that evolves the network architecture and updates the synaptic weights of the neurons in the network. The performance of OMLA is statistically compared with other existing online as well as batch learning algorithms for spiking neural networks. The comparison results clearly indicate that OMLA performs better than the other existing learning algorithms for spiking neural networks.

To study the suitability of OMLA for real applications, this thesis also presents a possible neuromorphic implementation of OMLA using a Field Programmable Gate Array (FPGA). The purpose of this study is to examine the implementation of the newly developed OMLA from a feasibility perspective. The digital implementation of OMLA employs spiking neurons modeled in hardware using the spike response function, and the spike response function is evaluated in hardware using a COordinate Rotation DIgital Computer (CORDIC) circuit. The performance of the neuromorphic implementation of OMLA has been evaluated on several benchmark data sets. The results of this evaluation clearly indicate that the neuromorphic implementation closely emulates the software-based simulations. A more rigorous study to develop a neuromorphic device optimized for hardware is a topic for future work.
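To make the locality property of STDP discussed in the abstract concrete, the sketch below implements the standard pair-based STDP window. It is an illustration only: the amplitudes (A_PLUS, A_MINUS) and time constants (TAU_PLUS, TAU_MINUS) are placeholder values, not parameters taken from the thesis.

```python
import math

# Standard pair-based STDP window (illustrative parameters only).
A_PLUS, A_MINUS = 0.010, 0.012    # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants (ms)

def stdp_delta_w(t_pre: float, t_post: float) -> float:
    """Weight change produced by a single pre/post spike pair.

    Only the local quantity dt = t_post - t_pre enters the update,
    which is what makes the rule purely local to the synapse.
    """
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post -> potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    if dt < 0:    # post fires before pre -> depression
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

# A pre-spike at 5 ms followed by a post-spike at 12 ms strengthens the synapse.
print(stdp_delta_w(5.0, 12.0))
```

Because the update uses nothing beyond the timing of one synapse's own pre- and postsynaptic spikes, no global quantity is available to rebalance the weight distribution, which is the limitation the thesis addresses by combining local and global information.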
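The spiking neurons referred to throughout the abstract, including those in the FPGA implementation, are modeled with the spike response function. The following is a minimal software sketch under assumed choices: an alpha-shaped kernel, a unit firing threshold theta, and a coarse time grid; the kernel form, threshold and discretisation used in the thesis may differ.

```python
import math

def srm_kernel(s: float, tau: float = 3.0) -> float:
    """Alpha-shaped spike response kernel; peaks at s = tau (illustrative choice)."""
    return (s / tau) * math.exp(1.0 - s / tau) if s > 0 else 0.0

def membrane_potential(t: float, input_spikes, weights, tau: float = 3.0) -> float:
    """Potential of one postsynaptic neuron at time t.

    input_spikes[i] lists the firing times of presynaptic neuron i and
    weights[i] is the corresponding synaptic weight.
    """
    return sum(
        w * srm_kernel(t - t_f, tau)
        for spikes, w in zip(input_spikes, weights)
        for t_f in spikes
    )

def first_spike_time(input_spikes, weights, theta=1.0, t_max=100.0, dt=0.1):
    """Earliest time (on a discrete grid) at which the potential crosses theta."""
    t = 0.0
    while t <= t_max:
        if membrane_potential(t, input_spikes, weights) >= theta:
            return t
        t += dt
    return None  # the neuron stays silent for this input pattern

# Two presynaptic neurons firing at {2, 7} ms and {4} ms drive one output neuron.
print(first_spike_time([[2.0, 7.0], [4.0]], [0.8, 0.6]))
```

Learning rules such as the meta-neuron based rule described in the abstract act by adjusting the weights so that this first spike time shifts precisely towards a desired value.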
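The neuromorphic study evaluates the spike response function in hardware with a CORDIC circuit. As a software illustration of the underlying idea, the hyperbolic-mode CORDIC routine below (a hypothetical helper named cordic_exp, not code from the thesis) approximates the exponential that appears in kernels like the one above using only shift-and-add style updates and a small table of atanh constants; the fixed-point word lengths and the exact function realised in the thesis design are not reproduced here.

```python
import math

def cordic_exp(t: float, iterations: int = 16) -> float:
    """Approximate exp(t) for |t| <= ~1.1 using hyperbolic-mode CORDIC."""
    # Hyperbolic CORDIC must repeat iterations 4, 13, 40, ... to converge.
    ks, k, repeat = [], 1, 4
    while len(ks) < iterations:
        ks.append(k)
        if k == repeat:
            ks.append(k)             # repeated iteration
            repeat = 3 * repeat + 1
        k += 1
    ks = ks[:iterations]

    # Aggregate gain of the hyperbolic micro-rotations.
    gain = 1.0
    for k in ks:
        gain *= math.sqrt(1.0 - 2.0 ** (-2 * k))

    x, y, z = 1.0 / gain, 0.0, t
    for k in ks:
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x + d * y * 2.0 ** (-k), y + d * x * 2.0 ** (-k)
        z -= d * math.atanh(2.0 ** (-k))
    return x + y  # cosh(t) + sinh(t) = exp(t)

# The CORDIC approximation should closely match math.exp for small arguments.
print(cordic_exp(0.5), math.exp(0.5))
```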
author2 Sundaram Suresh
author_facet Sundaram Suresh
Shirin Dora
format Theses and Dissertations
author Shirin Dora
author_sort Shirin Dora
title Evolving spiking neural networks for pattern classification problems
title_short Evolving spiking neural networks for pattern classification problems
title_full Evolving spiking neural networks for pattern classification problems
title_fullStr Evolving spiking neural networks for pattern classification problems
title_full_unstemmed Evolving spiking neural networks for pattern classification problems
title_sort evolving spiking neural networks for pattern classification problems
publishDate 2017
url http://hdl.handle.net/10356/69608
_version_ 1759857296141189120
spelling sg-ntu-dr.10356-69608 2023-03-04T00:53:05Z Evolving spiking neural networks for pattern classification problems Shirin Dora Sundaram Suresh School of Computer Science and Engineering Centre for Computational Intelligence DRNTU::Engineering::Computer science and engineering::Information systems Doctor of Philosophy (SCE) 2017-03-02T03:35:17Z 2017-03-02T03:35:17Z 2017 Thesis Shirin Dora. (2017). Evolving spiking neural networks for pattern classification problems. Doctoral thesis, Nanyang Technological University, Singapore. http://hdl.handle.net/10356/69608 10.32657/10356/69608 en 173 p. application/pdf