Neural network accelerator design using computing in memory
Image processing has become an extremely popular field of application for neural networks. Convolution is a basic yet critical operation for pattern recognition, breaking images down into feature maps. Similarly, “deconvolution” is a critical step for constructing new data points out of given inp...
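To make the two operations mentioned in the abstract concrete, the sketch below is a minimal NumPy illustration (not code from the project itself): `conv2d` slides a filter over an image to produce a feature map, and `deconv2d` (a transposed convolution) scatters feature-map values back through the filter to build a larger output. The function names, shapes, and toy data are assumptions chosen for illustration only.

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Valid 2D convolution: slide the kernel over the image and
    accumulate dot products into a (smaller) feature map."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh = (ih - kh) // stride + 1
    ow = (iw - kw) // stride + 1
    out = np.zeros((oh, ow))
    for r in range(oh):
        for c in range(ow):
            patch = image[r * stride:r * stride + kh,
                          c * stride:c * stride + kw]
            out[r, c] = np.sum(patch * kernel)
    return out

def deconv2d(feature_map, kernel, stride=1):
    """Transposed ("de-")convolution: scatter each feature-map value
    back through the kernel to construct a larger output."""
    kh, kw = kernel.shape
    fh, fw = feature_map.shape
    oh = (fh - 1) * stride + kh
    ow = (fw - 1) * stride + kw
    out = np.zeros((oh, ow))
    for r in range(fh):
        for c in range(fw):
            out[r * stride:r * stride + kh,
                c * stride:c * stride + kw] += feature_map[r, c] * kernel
    return out

if __name__ == "__main__":
    image = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "image"
    kernel = np.array([[1.0, 0.0], [0.0, -1.0]])       # toy 2x2 filter
    fmap = conv2d(image, kernel)                        # 3x3 feature map
    recon = deconv2d(fmap, kernel)                      # 4x4 reconstruction
    print(fmap)
    print(recon)
```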
| Main Author: | Seah, Leon Shin Yang |
| --- | --- |
| Other Authors: | Weichen Liu |
| Format: | Final Year Project |
| Language: | English |
| Published: | Nanyang Technological University, 2021 |
| Subjects: | |
| Online Access: | https://hdl.handle.net/10356/148415 |
| Institution: | Nanyang Technological University |
Similar Items
- FAT: an in-memory accelerator with fast addition for ternary weight neural networks
  by: Zhu, Shien, et al.
  Published: (2022)
- A tool for visualization of memory access patterns
  by: Chan, Eugene Yew Koon.
  Published: (2013)
- Crossbar-aligned & integer-only neural network compression for efficient in-memory acceleration
  by: Huai, Shuo, et al.
  Published: (2023)
- iMAT: energy-efficient in-memory acceleration for ternary neural networks with sparse dot product
  by: Zhu, Shien, et al.
  Published: (2023)
- Deep learning acceleration: from quantization to in-memory computing
  by: Zhu, Shien
  Published: (2022)