A highly-parallel and energy-efficient 3D multi-layer CMOS-RRAM accelerator for tensorized neural network
It is a grand challenge to develop highly parallel yet energy-efficient machine learning hardware accelerators. This paper introduces a three-dimensional (3-D) multilayer CMOS-RRAM accelerator for a tensorized neural network. Highly parallel matrix-vector multiplication can be performed with low power...
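The abstract describes storing layer weights in a tensorized form so that matrix-vector multiplication can be mapped onto RRAM crossbars. The sketch below is not the authors' implementation; assuming the tensorization is a tensor-train (TT) style factorization, and using made-up mode sizes and TT ranks, it only illustrates how the factorization shrinks a dense weight matrix while still supporting the ordinary matrix-vector product that a crossbar array evaluates.

```python
import numpy as np

def tt_to_full(cores):
    """Rebuild a dense weight matrix from TT-matrix cores.

    cores[k] has shape (r_k, m_k, n_k, r_{k+1}) with r_0 = r_d = 1,
    so the full matrix has shape (prod(m_k), prod(n_k)).
    """
    full = cores[0][0]                                    # (m_1, n_1, r_1)
    for core in cores[1:]:
        # Contract the running result with the next core over the shared TT rank.
        full = np.tensordot(full, core, axes=([2], [0]))  # (M, N, m, n, r)
        M, N, m, n, r = full.shape
        full = full.transpose(0, 2, 1, 3, 4).reshape(M * m, N * n, r)
    return full[:, :, 0]                                  # trailing rank is 1

# Illustrative (assumed) factorization of a 64 x 64 layer: 4*4*4 modes, TT ranks (1, 3, 3, 1).
rng = np.random.default_rng(0)
out_modes, in_modes, ranks = (4, 4, 4), (4, 4, 4), (1, 3, 3, 1)
cores = [rng.standard_normal((ranks[k], out_modes[k], in_modes[k], ranks[k + 1]))
         for k in range(3)]

W = tt_to_full(cores)                # dense 64 x 64 weight matrix
x = rng.standard_normal(W.shape[1])  # input activation vector
y = W @ x                            # the matrix-vector product a crossbar would perform in analog

print("dense parameters:", W.size)                        # 4096
print("TT parameters   :", sum(c.size for c in cores))    # 240
```

With these assumed sizes the TT cores hold 240 parameters versus 4096 for the dense matrix, which is the kind of storage reduction a tensorized network trades on when its weights are mapped into limited on-chip RRAM.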
| Main Authors: | Huang, Hantao; Ni, Leibin; Wang, Kanwen; Wang, Yuangang; Yu, Hao |
|---|---|
| Other Authors: | School of Electrical and Electronic Engineering |
| Format: | Article |
| Language: | English |
| Published: | 2018 |
| Subjects: | |
| Online Access: | https://hdl.handle.net/10356/87049 http://hdl.handle.net/10220/45222 |
Similar Items

- Distributed In-Memory Computing on Binary RRAM Crossbar
  by: Ni, Leibin, et al.
  Published: (2017)
- Ultra-high-speed accelerator architecture for convolutional neural network based on processing-in-memory using resistive random access memory
  by: Wang, Hongzhe, et al.
  Published: (2023)
- Simulation of 1T1R (one-transistor one-RRAM) memory cell
  by: Shi, Quan
  Published: (2024)
- Taylor's theorem: a new perspective for neural tensor networks
  by: Li, Wei, et al.
  Published: (2022)
- Tensor factorization for low-rank tensor completion
  by: ZHOU, Pan, et al.
  Published: (2017)