A highly-parallel and energy-efficient 3D multi-layer CMOS-RRAM accelerator for tensorized neural network
It is a grand challenge to develop highly parallel yet energy-efficient machine learning hardware accelerators. This paper introduces a three-dimensional (3-D) multilayer CMOS-RRAM accelerator for a tensorized neural network. Highly parallel matrix-vector multiplication can be performed with low power...
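The "tensorized" neural network the abstract refers to stores large weight matrices in a factored tensor form so that matrix-vector products can be computed from small cores instead of a dense matrix. A minimal sketch of this idea, assuming a two-core tensor-train factorization (illustrative sizes and rank; not the paper's exact mapping onto the RRAM crossbar):

```python
import numpy as np

# Sketch (assumption): a weight matrix W of shape (m1*m2, n1*n2) is held as
# two small tensor-train cores G1, G2 with rank r, so storage drops from
# O(M*N) dense entries to O(r * (m1*n1 + m2*n2)) core entries.
rng = np.random.default_rng(0)
m1, m2, n1, n2, r = 4, 4, 4, 4, 2          # illustrative sizes, TT-rank r

G1 = rng.standard_normal((m1, n1, r))      # first TT-core
G2 = rng.standard_normal((r, m2, n2))      # second TT-core
x = rng.standard_normal(n1 * n2)           # input vector

# Dense reference: W[(i1,i2),(j1,j2)] = sum_r G1[i1,j1,r] * G2[r,i2,j2]
W = np.einsum('iar,rjb->ijab', G1, G2).reshape(m1 * m2, n1 * n2)

# Matrix-vector product computed core-by-core, without forming W:
X = x.reshape(n1, n2)                      # fold the input into a 2-D tensor
T = np.einsum('iar,ab->irb', G1, X)        # contract first core with input
y_tt = np.einsum('irb,rjb->ij', T, G2).reshape(m1 * m2)

assert np.allclose(y_tt, W @ x)            # both paths give the same result
```

The two small contractions are independent per output slice, which is the kind of structure a highly parallel in-memory crossbar can exploit.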
Main Authors: Huang, Hantao; Ni, Leibin; Wang, Kanwen; Wang, Yuangang; Yu, Hao
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2018
Online Access: https://hdl.handle.net/10356/87049 http://hdl.handle.net/10220/45222
Institution: Nanyang Technological University
Similar Items
- Distributed In-Memory Computing on Binary RRAM Crossbar
  by: Ni, Leibin, et al.
  Published: (2017)
- Ultra-high-speed accelerator architecture for convolutional neural network based on processing-in-memory using resistive random access memory
  by: Wang, Hongzhe, et al.
  Published: (2023)
- Simulation of 1T1R (one-transistor one-RRAM) memory cell
  by: Shi, Quan
  Published: (2024)
- Taylor's theorem: a new perspective for neural tensor networks
  by: Li, Wei, et al.
  Published: (2022)
- Tensor factorization for low-rank tensor completion
  by: ZHOU, Pan, et al.
  Published: (2017)