In-memory computing
Saved in:
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online Access: https://hdl.handle.net/10356/158182
Institution: Nanyang Technological University
Summary: Neural networks are a subset of machine learning that is being rapidly deployed for a variety of purposes, propelling the growth of science and technology. Current hardware limitations prove to be a challenge for efficient neural network implementations due to several bottlenecks. Neural hardware accelerators are currently being researched and developed to aid in the training and deployment of neural networks.
The objective of this final year project is to create an in-memory computing reconfigurable bitcell with variable bit-weight precision that performs multiply-accumulate (MAC) operations for use in convolutional neural networks (CNNs). It is to be tested and implemented with the appropriate input/output circuitry in the form of a macro. The design and implementation are to be carried out in TSMC 65 nm technology using Cadence.
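As a functional illustration only, the multiply-accumulate operation such a bitcell array performs can be sketched in software. This is a minimal sketch under stated assumptions: the fixed-point quantization scheme, the 4-bit default, and all names here are illustrative, not the project's actual circuit behaviour.

```python
# Software model of a multiply-accumulate (MAC) over one bitcell column.
# Weights are quantized to a configurable bit precision, loosely mirroring
# the "variable bit-weight precision" described above (the quantization
# scheme here is an assumption for illustration).

def quantize(w, bits):
    """Round a weight in [-1, 1) to signed fixed point with `bits` bits."""
    levels = 1 << (bits - 1)
    return max(-levels, min(levels - 1, round(w * levels))) / levels

def mac(inputs, weights, bits=4):
    """Accumulate input * quantized-weight products, as a column would."""
    return sum(x * quantize(w, bits) for x, w in zip(inputs, weights))

acc = mac([1, 0, 1, 1], [0.5, -0.25, 0.75, 0.1], bits=4)
```

Lowering `bits` trades accuracy for the smaller storage and energy cost per cell, which is the motivation for making the precision reconfigurable.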
This report presents the working principle of the bitcell, along with the design and simulation of each of its components, before integrating them into an array. Process-voltage-temperature (PVT) variation simulations were also carried out to verify the functionality of the bitcell array. The peripheral circuits, such as the input and output buffer registers, were also designed and tested.
A 128 x 128 bitcell array was designed and used to simulate an edge-detection CNN model. The project also highlights the future of neural network hardware accelerators and their potential to achieve greater processing power and efficiency.
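To make the connection between edge detection and the MAC operation concrete, the sketch below computes a 2-D convolution in plain Python: each output pixel is one multiply-accumulate over a 3x3 neighbourhood, which is exactly the operation a bitcell array parallelises in hardware. The Sobel kernel and the test image are illustrative assumptions, not taken from the project.

```python
# Horizontal-gradient Sobel kernel (an illustrative edge-detection choice).
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def conv2d(image, kernel):
    """Valid-mode 2-D convolution; one MAC per output pixel."""
    h, w, k = len(image), len(image[0]), len(kernel)
    out = [[0] * (w - k + 1) for _ in range(h - k + 1)]
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            # Multiply-accumulate over the k x k neighbourhood.
            out[i][j] = sum(image[i + a][j + b] * kernel[a][b]
                            for a in range(k) for b in range(k))
    return out

# A dark-to-bright vertical step produces a strong, uniform response.
img = [[0, 0, 9, 9] for _ in range(4)]
edges = conv2d(img, SOBEL_X)  # every entry is 36
```

In a 128 x 128 in-memory array, the kernel weights would be stored in the bitcells and many such neighbourhood MACs evaluated in parallel, rather than sequentially as in this software loop.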