SRAM-based computing-in-memory for tiny machine learning
Main Author:
Other Authors:
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/175498
Institution: Nanyang Technological University
Summary: This dissertation investigates the potential of Computing-In-Memory (CIM) using Static Random-Access Memory (SRAM) to address the limitations of the Von Neumann architecture and to increase miniaturisation. The research aims to overcome the Von Neumann bottleneck by enabling in-memory computation for tiny machine learning applications such as binary neural networks (BNNs). Two SRAM cell architectures, 6-transistor (6T) and 8-transistor (8T), are investigated. Simulations performed in Cadence Virtuoso with the TSMC 65 nm library demonstrate that both cells produce correct results for write, hold, and read operations, but for multiply-and-accumulate (MAC) operations the 8T cell exhibits superior stability compared to the 6T design. Furthermore, the implementation of a 64-bit memory array capable of performing 8-row MAC operations was investigated. This paves the way for efficient in-memory computing suitable for tiny machine learning applications.
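The record itself contains no code, but the operation such an SRAM-CIM macro accelerates can be stated precisely: a binarized MAC in XNOR-popcount form. The Python sketch below is a hypothetical digital reference, not taken from the thesis; the function name `bnn_mac` and its inputs are illustrative, assuming ±1 weights and activations and the identity dot(w, a) = 2·popcount(XNOR(w, a)) − N.

```python
# Hypothetical sketch: the XNOR-popcount MAC that a BNN SRAM-CIM macro
# evaluates in the memory array. Not taken from the thesis; names and
# parameters are illustrative only.

def bnn_mac(weights, activations):
    """Binary MAC over ±1 weights and activations.

    Mapping +1 -> bit 1 and -1 -> bit 0 (as the SRAM cells would store
    them), the dot product equals 2 * popcount(XNOR(w, a)) - N.
    """
    assert len(weights) == len(activations)
    n = len(weights)
    # Map +1/-1 values to 1/0 bits.
    w_bits = [(w + 1) // 2 for w in weights]
    a_bits = [(a + 1) // 2 for a in activations]
    # XNOR of each bit pair is 1 exactly when the signs agree.
    matches = sum(1 for wb, ab in zip(w_bits, a_bits) if wb == ab)
    return 2 * matches - n

# Example: an 8-row MAC, matching the 8-row operation the abstract mentions.
w = [+1, -1, +1, +1, -1, -1, +1, -1]
a = [+1, +1, -1, +1, -1, +1, +1, -1]
print(bnn_mac(w, a))  # 2, i.e. sum(w[i] * a[i])
```

In an analog SRAM-CIM array, the popcount would appear as a summed bitline current or voltage rather than a digital count; the sketch only fixes the arithmetic the array must reproduce.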