Hardware efficient, neuromorphic dendritically enhanced readout for liquid state machines
In this article, we describe a new neuro-inspired, hardware-friendly readout stage for the liquid state machine (LSM) that is suitable for on-sensor computing in resource-constrained applications. Compared to the state-of-the-art parallel perceptron readout (PPR), our readout architecture and learning algorithm can attain better performance with significantly fewer synaptic resources, making it attractive for VLSI implementation. Inspired by the nonlinear properties of dendrites in biological neurons, our readout stage incorporates neurons having multiple dendrites with a lumped nonlinearity (two-compartment model). The number of synaptic connections on each branch is significantly lower than the total number of connections from the liquid neurons, and the learning algorithm tries to find the best 'combination' of input connections on each branch to reduce the error. Hence, the learning involves network rewiring (NRW) of the readout network, similar to the structural plasticity observed in its biological counterparts. We show that even with binary synapses, our method achieves 2.4-3.3 times lower error than PPR using the same number of high-resolution synapses. Conversely, PPR requires 40-60 times more synapses to attain error levels comparable to our method.
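The abstract describes the readout only at a high level. The sketch below is a toy illustration of that general idea, not the authors' implementation: each readout neuron carries a few dendritic branches, each branch sums a small set of binary synapses from the liquid neurons and passes the sum through a lumped nonlinearity (assumed quadratic here), and learning rewires which liquid neurons feed each branch, keeping only swaps that reduce the classification error (network rewiring). All names, sizes, the choice of nonlinearity, and the greedy swap rule are illustrative assumptions.

```python
# Toy sketch (illustrative assumptions only) of a dendritically enhanced readout:
# readout = sum over branches of a lumped nonlinearity applied to the branch's
# synaptic sum; learning rewires which liquid neurons connect to each branch.
import numpy as np

rng = np.random.default_rng(0)

N_LIQUID = 200        # liquid (reservoir) neurons feeding the readout
N_BRANCHES = 10       # dendritic branches per readout neuron
SYN_PER_BRANCH = 5    # binary synapses per branch (much smaller than N_LIQUID)

def branch_nonlinearity(x):
    """Lumped dendritic nonlinearity; a quadratic is assumed here for illustration."""
    return x ** 2

def readout(conn, liquid_rates):
    """Sum of nonlinear branch outputs; conn[b] lists the liquid neurons
    wired onto branch b with binary weight 1."""
    return sum(branch_nonlinearity(liquid_rates[idx].sum()) for idx in conn)

def classification_error(conn, X, y, threshold):
    """Fraction of patterns misclassified by thresholding the readout."""
    pred = np.array([readout(conn, x) > threshold for x in X])
    return np.mean(pred != y)

def rewire_step(conn, X, y, threshold):
    """One greedy NRW step: replace a random synapse on a random branch with a
    different liquid neuron and keep the swap only if the error drops."""
    base_err = classification_error(conn, X, y, threshold)
    b = rng.integers(N_BRANCHES)
    s = rng.integers(SYN_PER_BRANCH)
    candidate = [idx.copy() for idx in conn]
    candidate[b][s] = rng.integers(N_LIQUID)
    if classification_error(candidate, X, y, threshold) < base_err:
        return candidate
    return conn

# Toy usage: random "liquid" activity, random binary labels, random initial wiring.
X = rng.random((50, N_LIQUID))
y = rng.integers(0, 2, size=50).astype(bool)
conn = [rng.integers(N_LIQUID, size=SYN_PER_BRANCH) for _ in range(N_BRANCHES)]
threshold = np.median([readout(conn, x) for x in X])  # fixed decision threshold
for _ in range(200):
    conn = rewire_step(conn, X, y, threshold)
print("training error:", classification_error(conn, X, y, threshold))
```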
Main Authors: | Roy, Subhrajit; Basu, Arindam; Hussain, Shaista |
---|---|
Other Authors: | School of Electrical and Electronic Engineering |
Format: | Conference or Workshop Item |
Language: | English |
Published: | 2014 |
Subjects: | DRNTU::Engineering::Electrical and electronic engineering |
Online Access: | https://hdl.handle.net/10356/99863 http://hdl.handle.net/10220/19535 |
Institution: | Nanyang Technological University |
id | sg-ntu-dr.10356-99863 |
---|---|
record_format | dspace |
spelling | sg-ntu-dr.10356-99863 2020-03-07T13:24:49Z Hardware efficient, neuromorphic dendritically enhanced readout for liquid state machines Roy, Subhrajit Basu, Arindam Hussain, Shaista School of Electrical and Electronic Engineering IEEE Biomedical Circuits and Systems Conference (BioCAS) (2013 : Rotterdam, the Netherlands) DRNTU::Engineering::Electrical and electronic engineering In this article, we describe a new neuro-inspired, hardware-friendly readout stage for the liquid state machine (LSM) that is suitable for on-sensor computing in resource-constrained applications. Compared to the state-of-the-art parallel perceptron readout (PPR), our readout architecture and learning algorithm can attain better performance with significantly fewer synaptic resources, making it attractive for VLSI implementation. Inspired by the nonlinear properties of dendrites in biological neurons, our readout stage incorporates neurons having multiple dendrites with a lumped nonlinearity (two-compartment model). The number of synaptic connections on each branch is significantly lower than the total number of connections from the liquid neurons, and the learning algorithm tries to find the best 'combination' of input connections on each branch to reduce the error. Hence, the learning involves network rewiring (NRW) of the readout network, similar to the structural plasticity observed in its biological counterparts. We show that even with binary synapses, our method achieves 2.4-3.3 times lower error than PPR using the same number of high-resolution synapses. Conversely, PPR requires 40-60 times more synapses to attain error levels comparable to our method. Accepted version 2014-06-04T01:45:54Z 2019-12-06T20:12:32Z 2014-06-04T01:45:54Z 2019-12-06T20:12:32Z 2013 2013 Conference Paper Roy, S., Basu, A. & Hussain, S. 2013. Hardware efficient, neuromorphic dendritically enhanced readout for liquid state machines. IEEE Biomedical Circuits and Systems Conference (BioCAS) 2013, 302 - 305. https://hdl.handle.net/10356/99863 http://hdl.handle.net/10220/19535 10.1109/BioCAS.2013.6679699 175591 en © 2013 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: [http://dx.doi.org/10.1109/BioCAS.2013.6679699]. application/pdf |
institution | Nanyang Technological University |
building | NTU Library |
country | Singapore |
collection | DR-NTU |
language | English |
topic | DRNTU::Engineering::Electrical and electronic engineering |
spellingShingle | DRNTU::Engineering::Electrical and electronic engineering Roy, Subhrajit Basu, Arindam Hussain, Shaista Hardware efficient, neuromorphic dendritically enhanced readout for liquid state machines |
description | In this article, we describe a new neuro-inspired, hardware-friendly readout stage for the liquid state machine (LSM) that is suitable for on-sensor computing in resource-constrained applications. Compared to the state-of-the-art parallel perceptron readout (PPR), our readout architecture and learning algorithm can attain better performance with significantly fewer synaptic resources, making it attractive for VLSI implementation. Inspired by the nonlinear properties of dendrites in biological neurons, our readout stage incorporates neurons having multiple dendrites with a lumped nonlinearity (two-compartment model). The number of synaptic connections on each branch is significantly lower than the total number of connections from the liquid neurons, and the learning algorithm tries to find the best 'combination' of input connections on each branch to reduce the error. Hence, the learning involves network rewiring (NRW) of the readout network, similar to the structural plasticity observed in its biological counterparts. We show that even with binary synapses, our method achieves 2.4-3.3 times lower error than PPR using the same number of high-resolution synapses. Conversely, PPR requires 40-60 times more synapses to attain error levels comparable to our method. |
author2 | School of Electrical and Electronic Engineering |
author_facet | School of Electrical and Electronic Engineering Roy, Subhrajit Basu, Arindam Hussain, Shaista |
format | Conference or Workshop Item |
author | Roy, Subhrajit Basu, Arindam Hussain, Shaista |
author_sort | Roy, Subhrajit |
title | Hardware efficient, neuromorphic dendritically enhanced readout for liquid state machines |
title_short | Hardware efficient, neuromorphic dendritically enhanced readout for liquid state machines |
title_full | Hardware efficient, neuromorphic dendritically enhanced readout for liquid state machines |
title_fullStr | Hardware efficient, neuromorphic dendritically enhanced readout for liquid state machines |
title_full_unstemmed | Hardware efficient, neuromorphic dendritically enhanced readout for liquid state machines |
title_sort | hardware efficient, neuromorphic dendritically enhanced readout for liquid state machines |
publishDate | 2014 |
url | https://hdl.handle.net/10356/99863 http://hdl.handle.net/10220/19535 |
_version_ | 1681035174261817344 |