In-memory computing

Neural networks are a subset of machine learning that is being rapidly deployed for a wide range of purposes, propelling the growth of science and technology. Current hardware proves to be a challenge for efficient neural network implementations owing to several bottlenecks. Neural hardware accelerators are being researched and developed to aid in the training and deployment of neural networks. The objective of this final year project is to create an in-memory computing reconfigurable bitcell with variable weight bit-precision to perform multiply-accumulate operations for use in convolutional neural networks. It is to be tested and implemented with the appropriate input/output circuitry in the form of a macro. The design and implementation are performed using TSMC 65 nm technology in Cadence. This report presents the working principle of the bitcell, with the design and simulation of each of its components, before integrating it into an array. Process-voltage-temperature (PVT) variation simulations were also carried out to verify the functionality of the bitcell array. The peripheral circuits, such as the input and output buffer registers, were also designed and tested. A 128 x 128 bitcell array was designed and used in the simulation of an edge-detection CNN model. The project also highlights the future of neural network hardware accelerators and their potential to achieve greater processing power and efficiency.
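For context only, the sketch below (plain NumPy, not taken from the project) illustrates the sliding-window multiply-accumulate (MAC) operation that an in-memory computing macro of this kind accelerates for an edge-detection CNN layer; the kernel and image here are hypothetical examples, and in the actual macro the accumulation happens inside the memory array rather than in software.

```python
# Illustrative sketch of the multiply-accumulate step behind an
# edge-detection convolution; every call to np.sum below corresponds to
# the MAC that an in-memory computing bitcell array would perform in place.
import numpy as np

def mac_filter(image, kernel):
    """Valid-mode sliding-window MAC (cross-correlation) over a 2D image."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Multiply each input in the window by its weight and accumulate.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical example: a Sobel-style kernel that highlights vertical edges.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
image = np.random.rand(8, 8)
edges = mac_filter(image, sobel_x)
print(edges.shape)  # (6, 6)
```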

Bibliographic Details
Main Author: Swaminathan, Aravind Raj
Other Authors: Kim Tae Hyoung
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Subjects: Engineering::Electrical and electronic engineering
Online Access: https://hdl.handle.net/10356/158182
Institution: Nanyang Technological University
Record Details
Record ID: sg-ntu-dr.10356-158182
School: School of Electrical and Electronic Engineering
Supervisor: Kim Tae Hyoung (THKIM@ntu.edu.sg)
Degree: Bachelor of Engineering (Electrical and Electronic Engineering)
Project code: A2105-211
Deposited: 2022-05-27
Publisher: Nanyang Technological University
Collection: DR-NTU, NTU Library, Singapore
Citation: Swaminathan, A. R. (2022). In-memory computing. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/158182