Recovering accuracy of RRAM-based CIM for binarized neural network via Chip-in-the-loop training
Main Authors:
Other Authors:
Format: Conference or Workshop Item
Language: English
Published: 2022
Subjects:
Online Access: https://hdl.handle.net/10356/159308
Institution: Nanyang Technological University
Summary: Resistive random access memory (RRAM) based computing-in-memory (CIM) is attractive for edge artificial intelligence (AI) applications, thanks to its excellent energy efficiency, compactness and high parallelism in matrix-vector multiplication (MatVec) operations. However, existing RRAM-based CIM designs often require complex programming schemes to precisely control the RRAM cells so that they reach the desired resistance states and the neural network classification accuracy is maintained. This leads to large area and energy overhead as well as low RRAM area utilization. A compact RRAM-based CIM with a simple pulse-based programming scheme is therefore more desirable. To achieve this, we propose a chip-in-the-loop training approach that compensates for the network performance drop caused by the stochastic behavior of the RRAM cells. Note that, although the target RRAM cell here is a two-state RRAM (i.e., binary, having only high and low resistance states), its inherent analog resistance values are used in the CIM operation. Our experiment using a 4-layer fully-connected binary neural network (BNN) shows that, after retraining, the RRAM-based network accuracy can be recovered regardless of the RRAM resistance distribution and the R_HRS/R_LRS resistance ratio.
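The chip-in-the-loop idea described in the summary can be illustrated with a small sketch. The snippet below is a minimal, self-contained toy in Python/NumPy, not the authors' implementation: it assumes a simulated crossbar (program_crossbar, R_LRS, R_HRS and SIGMA are illustrative names and values), a single binarized layer instead of the paper's 4-layer BNN, and a differential two-cell-per-weight mapping. It only demonstrates the loop structure: program the binary weights onto stochastic devices, run the forward pass with the actual analog conductances, and retrain the latent software weights so that accuracy recovers despite device variation.

```python
# Toy chip-in-the-loop retraining loop for one binarized layer, assuming a
# *simulated* RRAM crossbar: each +1/-1 weight is stored as a differential
# pair of binary cells (LRS/HRS), but every cell's actual analog resistance
# is stochastic (log-normal spread around the nominal value). The forward
# pass uses the read-back conductances; gradients update latent
# floating-point weights, whose signs are then re-programmed to the array.
# All constants and function names here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
R_LRS, R_HRS, SIGMA = 1e4, 1e6, 0.3   # nominal LRS/HRS resistance, device spread

def program_crossbar(w_bin):
    """Program +1/-1 weights as differential cell pairs and return the
    read-back differential conductance (stochastic per programming event)."""
    r_pos = np.where(w_bin > 0, R_LRS, R_HRS) * rng.lognormal(0.0, SIGMA, w_bin.shape)
    r_neg = np.where(w_bin > 0, R_HRS, R_LRS) * rng.lognormal(0.0, SIGMA, w_bin.shape)
    return 1.0 / r_pos - 1.0 / r_neg

def forward(x, g_diff):
    """MatVec with the measured conductances, scaled so +/-1 maps to roughly +/-1."""
    return np.tanh(x @ (g_diff * R_LRS))

# Latent full-precision weights live in software; only their signs go on chip.
w = rng.normal(0.0, 0.1, size=(16, 4))
x = rng.normal(size=(32, 16))                        # toy inputs
y = rng.integers(0, 2, size=(32, 4)).astype(float)   # toy targets

for epoch in range(100):
    g = program_crossbar(np.sign(w))     # "chip" step: program + read back devices
    out = forward(x, g)                  # forward pass with actual analog values
    err = out - y
    # Straight-through estimator: the gradient w.r.t. the binarized weights is
    # applied directly to the latent full-precision weights.
    grad = x.T @ (err * (1.0 - out ** 2)) / len(x)
    w -= 0.1 * grad
```

In a real chip-in-the-loop setup, the "program + read back" step would be replaced by programming and measuring the physical RRAM array; exposing retraining to those measured values is what lets the network absorb the device-to-device resistance spread.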