CRIMP: compact & reliable DNN inference on in-memory processing via crossbar-aligned compression and non-ideality adaptation
Crossbar-based In-Memory Processing (IMP) accelerators have been widely adopted to achieve high-speed and low-power computing, especially for deep neural network (DNN) models with numerous weights and high computational complexity. However, floating-point (FP) arithmetic is not compatible with c...
Main Authors: Huai, Shuo; Kong, Hao; Luo, Xiangzhong; Li, Shiqing; Subramaniam, Ravi; Makaya, Christian; Lin, Qian; Liu, Weichen
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Online Access: https://hdl.handle.net/10356/171633
Institution: Nanyang Technological University
Similar Items
- Crossbar-aligned & integer-only neural network compression for efficient in-memory acceleration
  by: Huai, Shuo, et al.
  Published: (2023)
- EdgeCompress: coupling multi-dimensional model compression and dynamic inference for EdgeAI
  by: Kong, Hao, et al.
  Published: (2023)
- Latency-constrained DNN architecture learning for edge systems using zerorized batch normalization
  by: Huai, Shuo, et al.
  Published: (2023)
- Self crimped and aligned fibers
  by: Senthilram, T., et al.
  Published: (2014)
- EvoLP: self-evolving latency predictor for model compression in real-time edge systems
  by: Huai, Shuo, et al.
  Published: (2023)