Hardware efficient, neuromorphic dendritically enhanced readout for liquid state machines

Bibliographic Details
Main Authors: Roy, Subhrajit, Basu, Arindam, Hussain, Shaista
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2014
Subjects:
Online Access:https://hdl.handle.net/10356/99863
http://hdl.handle.net/10220/19535
Institution: Nanyang Technological University
Description
Summary: In this article, we describe a new neuro-inspired, hardware-friendly readout stage for the liquid state machine (LSM) that is suitable for on-sensor computing in resource-constrained applications. Compared to the state-of-the-art parallel perceptron readout (PPR), our readout architecture and learning algorithm can attain better performance with significantly fewer synaptic resources, making it attractive for VLSI implementation. Inspired by the nonlinear properties of dendrites in biological neurons, our readout stage incorporates neurons having multiple dendrites with a lumped nonlinearity (a two-compartment model). The number of synaptic connections on each branch is significantly lower than the total number of connections from the liquid neurons, and the learning algorithm tries to find the best 'combination' of input connections on each branch to reduce the error. Hence, learning involves network rewiring (NRW) of the readout network, similar to the structural plasticity observed in its biological counterparts. We show that, even while using binary synapses, our method can achieve 2.4-3.3 times lower error than PPR using the same number of high-resolution synapses. Conversely, PPR requires 40-60 times more synapses to attain error levels comparable to our method.
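
The structure described in the abstract can be illustrated with a short sketch. The quadratic branch nonlinearity, the fan-in of five binary synapses per branch, the fixed threshold decision, and the greedy swap-based rewiring step below are illustrative assumptions, not the paper's exact choices; the sketch only shows the shape of a two-compartment dendritic readout and an NRW-style connection swap.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

N_LIQUID = 100   # liquid (reservoir) neurons feeding the readout
N_BRANCH = 10    # dendritic branches per readout neuron
FANIN = 5        # binary synapses per branch, far fewer than N_LIQUID

# conn[j] lists which liquid neurons are wired onto branch j. Synapses are
# binary: a connection either exists or it does not, no weight value is learned.
conn = rng.choice(N_LIQUID, size=(N_BRANCH, FANIN), replace=False)


def readout(x, conn):
    """Two-compartment model: each branch linearly sums its inputs, applies a
    lumped nonlinearity (assumed quadratic here), and the soma adds the branches."""
    branch_drive = x[conn].sum(axis=1)      # linear sum within each branch
    return np.square(branch_drive).sum()    # nonlinear branch outputs summed at the soma


def error(conn, X, y, theta):
    """Fraction of misclassified patterns under a simple threshold decision (assumed)."""
    scores = np.array([readout(x, conn) for x in X])
    return np.mean((scores > theta).astype(int) != y)


def rewire_step(conn, X, y, theta, n_candidates=10):
    """One NRW step: pick a random synapse and greedily swap its source liquid
    neuron for the candidate that most reduces the error (structural plasticity)."""
    j, s = rng.integers(N_BRANCH), rng.integers(FANIN)
    best_err, best_src = error(conn, X, y, theta), conn[j, s]
    for cand in rng.choice(N_LIQUID, size=n_candidates, replace=False):
        trial = conn.copy()
        trial[j, s] = cand
        e = error(trial, X, y, theta)
        if e < best_err:
            best_err, best_src = e, cand
    conn[j, s] = best_src
    return conn, best_err


# Toy usage: random binary "liquid states" and labels stand in for real LSM output.
X = rng.integers(0, 2, size=(200, N_LIQUID)).astype(float)
y = rng.integers(0, 2, size=200)
theta = np.median([readout(x, conn) for x in X])
for _ in range(50):
    conn, err = rewire_step(conn, X, y, theta)
print("training error after rewiring:", err)
```

The hardware appeal suggested by the abstract is visible in the sketch: each synapse is a single binary connection, and learning only updates the wiring table `conn`, so no high-resolution weight storage or multipliers are required, in contrast with the high-resolution synapses assumed for PPR.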