Distributed In-Memory Computing on Binary Memristor-Crossbar for Machine Learning
The recently emerging memristor provides not only non-volatile memory storage but also intrinsic computing of matrix-vector multiplication, which makes it ideal for a low-power, high-throughput in-memory data-analytics accelerator. However, existing memristor-crossbar computing mostly assumes multi-level analog operation, whose results are sensitive to process non-uniformity and which incurs additional overhead from A/D conversion and I/O. In this chapter, we explore a matrix-vector multiplication accelerator on a binary memristor-crossbar with adaptive 1-bit-comparator-based parallel conversion. Moreover, a distributed in-memory computing architecture is developed together with the corresponding control protocol. Both the memory array and the logic accelerator are implemented on the binary memristor-crossbar, and logic-memory pairs can be distributed via the control-bus protocol. Experimental results show that, compared to the analog memristor-crossbar, the proposed binary memristor-crossbar achieves significant area savings with better calculation accuracy. Moreover, significant speedup is achieved for matrix-vector multiplication in neural-network-based machine learning, reducing both the overall training and testing time. In addition, large energy savings are achieved compared to a traditional CMOS-based out-of-memory computing architecture.
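To make the scheme concrete, the following Python sketch models the computation the chapter targets: a binary (0/1) crossbar stores bit-planes of the weight matrix, inputs are streamed bit-serially, each column sum is an ideal popcount "current", and a per-column 1-bit comparator with an adaptively swept reference digitizes the sums instead of a multi-bit ADC. The encoding, the binary-search readout, and all names below are illustrative assumptions over an idealized device model, not the chapter's exact circuit or protocol.

```python
# Idealized functional model of matrix-vector multiplication on a binary
# memristor-crossbar with per-column 1-bit comparators (no multi-bit ADC).
# Assumptions: ideal 0/1 conductances, no device noise, unsigned fixed-point
# operands; helper names are illustrative, not from the chapter.
import numpy as np


def comparator_readout(column_sums, max_sum):
    """Digitize each column's accumulated current using one 1-bit comparator
    per column and a shared reference swept adaptively (binary search),
    so all columns are converted in parallel."""
    lo = np.zeros_like(column_sums)
    hi = np.full_like(column_sums, max_sum)
    cycles = int(np.ceil(np.log2(max_sum + 1))) if max_sum > 0 else 0
    for _ in range(cycles):
        ref = (lo + hi + 1) // 2        # adaptive reference level
        above = column_sums >= ref      # single comparator decision per column
        lo = np.where(above, ref, lo)
        hi = np.where(above, hi, ref - 1)
    return lo                           # exact digitized column sums


def binary_crossbar_mvm(W, x, w_bits=4, x_bits=4):
    """Compute y = x @ W with W stored as w_bits binary bit-plane crossbars
    (rows = word lines driven by x, columns = bit lines) and x applied
    bit-serially over x_bits cycles."""
    rows = W.shape[0]
    W_planes = [(W >> k) & 1 for k in range(w_bits)]   # W = sum_k 2^k * W_k
    x_planes = [(x >> m) & 1 for m in range(x_bits)]   # x = sum_m 2^m * x_m
    y = np.zeros(W.shape[1], dtype=np.int64)
    for k, Wk in enumerate(W_planes):
        for m, xm in enumerate(x_planes):
            sums = xm @ Wk              # ideal column currents: binary dot products
            y += comparator_readout(sums, rows) << (k + m)
    return y


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.integers(0, 16, size=(8, 5))   # 4-bit unsigned weight matrix
    x = rng.integers(0, 16, size=8)        # 4-bit unsigned input vector
    assert np.array_equal(binary_crossbar_mvm(W, x), x @ W)
    print("binary-crossbar MVM matches x @ W")
```

The sketch is only meant to illustrate why replacing per-column ADCs with single comparators and a shared adaptive reference can reduce conversion overhead: each bit-plane needs roughly log2(rows) comparison cycles, resolved for all columns in parallel.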
Main Authors: | Yu, Hao; Ni, Leibin; Huang, Hantao |
Other Authors: | Vaidyanathan, Sundarapandian; Volos, Christos |
Format: | Book |
Language: | English |
Published: | Springer, 2017 |
Subjects: | Memristor-crossbar; Machine learning |
Online Access: | https://hdl.handle.net/10356/86062 http://hdl.handle.net/10220/43929 |
Institution: | Nanyang Technological University |
id |
sg-ntu-dr.10356-86062 |
record_format |
dspace |
spelling |
Record: sg-ntu-dr.10356-86062 (2020-03-07T14:05:46Z)
Affiliation: School of Electrical and Electronic Engineering
Subjects: Memristor-crossbar; Machine learning
Record dates: 2017-10-19; 2019-12-06; published 2017
Type: Book
Citation: Yu, H., Ni, L., & Huang, H. (2017). Distributed In-Memory Computing on Binary Memristor-Crossbar for Machine Learning. In S. Vaidyanathan & C. Volos (Eds.), Advances in Memristors, Memristive Devices and Systems (pp. 275-304). Cham, Switzerland: Springer International Publishing.
ISBN: 978-3-319-51723-0
Online access: https://hdl.handle.net/10356/86062 ; http://hdl.handle.net/10220/43929
DOI: 10.1007/978-3-319-51724-7_12
Language: en
Rights: © 2017 Springer International Publishing. This is the author-created version of a work that has been peer reviewed and accepted for publication in Advances in Memristors, Memristive Devices and Systems, Springer International Publishing. It incorporates the referee's comments, but changes resulting from the publishing process, such as copyediting and structural formatting, may not be reflected in this document. The published version is available at: http://dx.doi.org/10.1007/978-3-319-51724-7_12.
Extent: 29 p.
File format: application/pdf
Publisher: Springer |
institution |
Nanyang Technological University |
building |
NTU Library |
country |
Singapore |
collection |
DR-NTU |
language |
English |
topic |
Memristor-crossbar; Machine learning |
description |
The recently emerging memristor provides not only non-volatile memory storage but also intrinsic computing of matrix-vector multiplication, which makes it ideal for a low-power, high-throughput in-memory data-analytics accelerator. However, existing memristor-crossbar computing mostly assumes multi-level analog operation, whose results are sensitive to process non-uniformity and which incurs additional overhead from A/D conversion and I/O. In this chapter, we explore a matrix-vector multiplication accelerator on a binary memristor-crossbar with adaptive 1-bit-comparator-based parallel conversion. Moreover, a distributed in-memory computing architecture is developed together with the corresponding control protocol. Both the memory array and the logic accelerator are implemented on the binary memristor-crossbar, and logic-memory pairs can be distributed via the control-bus protocol. Experimental results show that, compared to the analog memristor-crossbar, the proposed binary memristor-crossbar achieves significant area savings with better calculation accuracy. Moreover, significant speedup is achieved for matrix-vector multiplication in neural-network-based machine learning, reducing both the overall training and testing time. In addition, large energy savings are achieved compared to a traditional CMOS-based out-of-memory computing architecture. |
author2 |
Vaidyanathan, Sundarapandian |
format |
Book |
author |
Yu, Hao Ni, Leibin Huang, Hantao |
title |
Distributed In-Memory Computing on Binary Memristor-Crossbar for Machine Learning |
publisher |
Springer |
publishDate |
2017 |
url |
https://hdl.handle.net/10356/86062 http://hdl.handle.net/10220/43929 |