Spin-based neuromorphic computing (simulation)
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2020
Online Access: https://hdl.handle.net/10356/139185
Institution: Nanyang Technological University
Summary: With recent advances in artificial intelligence and spintronic memory device technology, there is potential to create a high-performance, low-power neuromorphic network, a hardware-based implementation of a neural network. A spintronic memory device serves as the synapse in a neuromorphic network. In this project, we designed four versions of a neuromorphic network, trained on the MNIST dataset off-chip on the TensorFlow platform, post-processed the trained weights into an 8-level discretised form corresponding to the weight range representable by SOT/SHE MRAM, and then simulated the same dataset on the neuromorphic network in Cadence Virtuoso. The intermediate output of TensorFlow was used to simulate the 2nd layer (10 by 20 synapses), which achieved an accuracy of 81.02% versus the TensorFlow model accuracy of 80.24%. We also attempted to simulate a full multi-layer network but encountered scaling challenges. Furthermore, we studied in detail the challenges posed by a practical, manufacturable, non-ideal neuromorphic network. Future work may include addressing the shortcomings of the current implementation, extending to very-large-scale simulation, simulating a behavioural model of MRAM read/write cycles in Cadence Virtuoso, converting to a spike-based (SNN) architecture, and ultimately on-chip training of the SNN.
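
As an illustration of the post-processing step described in the summary, the following is a minimal sketch of mapping trained weights onto 8 evenly spaced levels. The function name quantise_weights, the symmetric weight range, and the even level spacing are assumptions made for illustration; they are not taken from the project's actual code.

    # Hypothetical sketch of the 8-level weight discretisation step.
    # Level count and symmetric even spacing are assumptions, standing in
    # for the discrete weight range representable by an SOT/SHE MRAM cell.
    import numpy as np

    def quantise_weights(w, n_levels=8):
        """Snap each continuous trained weight to the nearest of
        n_levels evenly spaced values in [-max|w|, +max|w|]."""
        w_max = np.max(np.abs(w))
        levels = np.linspace(-w_max, w_max, n_levels)
        # Distance from every weight to every level; pick the closest.
        idx = np.abs(w[..., None] - levels).argmin(axis=-1)
        return levels[idx]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        w = rng.normal(scale=0.5, size=(10, 20))  # e.g. a 10-by-20 synapse array
        wq = quantise_weights(w)
        print("unique levels:", np.unique(wq))

The discretised array would then replace the continuous weights before the Cadence Virtuoso simulation, so the simulated synapses only ever take values the MRAM device can actually represent.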