Design of SRAM-based in-memory computing for machine learning applications


Bibliographic Details
Main Author: Sun, Shaofan
Other Authors: Kim Tae Hyoung
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University 2021
Subjects:
Online Access:https://hdl.handle.net/10356/154268
Institution: Nanyang Technological University
id sg-ntu-dr.10356-154268
record_format dspace
spelling sg-ntu-dr.10356-1542682023-07-04T15:19:59Z Design of SRAM-based in-memory computing for machine learning applications Sun, Shaofan Kim Tae Hyoung School of Electrical and Electronic Engineering THKIM@ntu.edu.sg Engineering::Electrical and electronic engineering::Integrated circuits Moore's Law is approaching its end now that CMOS devices built with 7 nm process technology are in widespread use, and it is becoming increasingly difficult to improve chip performance and reduce chip cost by shrinking device feature sizes. At the same time, with the rise of technologies such as artificial intelligence, the Internet of Things, and big data, the volume of data has grown almost exponentially. The von Neumann architecture, which has dominated computing for decades, suffers from the memory-wall and power-wall problems, collectively known as the von Neumann bottleneck. The memory-wall problem refers to the speed mismatch between the computing unit and the storage unit: the former is much faster than the latter, so memory speed limits overall computation speed. The power-wall problem refers to the frequent transfer of data between physically separated computing and storage units, which consumes a large amount of power. To overcome the von Neumann bottleneck, compute-in-memory, also known as storage-computation integration, has been proposed. A compute-in-memory circuit fuses data storage and computation, performing calculations directly inside the memory and thereby avoiding the memory-wall and power-wall problems caused by separating the computing and storage units. This elegantly resolves the bottleneck of the von Neumann architecture.
This dissertation discusses a single-bit weighted multiplication and dot-product circuit structure based on a standard 6T SRAM cell array, which achieves good linearity and dynamic range. Building on this structure, a compute-in-memory circuit that supports 2-bit weight computation is proposed. Compared with the single-bit compute-in-memory circuit, this circuit sacrifices some linearity, but it still performs multiplication and dot-product operations well, and the structure can potentially be extended to a compute-in-memory structure for arbitrary-bit weight operations. Master of Science (Electronics) 2021-12-20T02:56:52Z 2021-12-20T02:56:52Z 2021 Thesis-Master by Coursework Sun, S. (2021). Design of SRAM-based in-memory computing for machine learning applications. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/154268 https://hdl.handle.net/10356/154268 en application/pdf Nanyang Technological University
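For intuition, the arithmetic that the abstract describes can be modeled in software. This is only an illustrative sketch, not the thesis's design: the actual work implements these operations in analog SRAM circuitry, and the composition of a 2-bit weight from two 1-bit cells as w = 2·MSB + LSB is an assumption based on the standard way multi-bit weights are split across single-bit columns.

```python
# Hypothetical software model of the weighted dot-product a compute-in-memory
# SRAM array performs. Each 6T cell stores one weight bit; a 2-bit weight is
# assumed to be split across two 1-bit columns and recombined as 2*MSB + LSB.

def dot_1bit(inputs, weights):
    """Dot product with 1-bit weights in {0, 1} (one weight bit per cell)."""
    return sum(x * w for x, w in zip(inputs, weights))

def dot_2bit(inputs, msb_bits, lsb_bits):
    """Dot product with 2-bit weights composed from two 1-bit columns:
    w = 2*msb + lsb, so the result is 2*(x . msb) + (x . lsb)."""
    return 2 * dot_1bit(inputs, msb_bits) + dot_1bit(inputs, lsb_bits)

# Example with hypothetical stored bits: weights decode to [2, 1, 3, 1].
x = [3, 1, 4, 1]
msb = [1, 0, 1, 0]
lsb = [0, 1, 1, 1]
print(dot_2bit(x, msb, lsb))  # 3*2 + 1*1 + 4*3 + 1*1 = 20
```

Extending to arbitrary bit widths follows the same pattern: each additional weight bit adds one more 1-bit column, scaled by the next power of two.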
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Electrical and electronic engineering::Integrated circuits
spellingShingle Engineering::Electrical and electronic engineering::Integrated circuits
Sun, Shaofan
Design of SRAM-based in-memory computing for machine learning applications
description Moore's Law is approaching its end now that CMOS devices built with 7 nm process technology are in widespread use, and it is becoming increasingly difficult to improve chip performance and reduce chip cost by shrinking device feature sizes. At the same time, with the rise of technologies such as artificial intelligence, the Internet of Things, and big data, the volume of data has grown almost exponentially. The von Neumann architecture, which has dominated computing for decades, suffers from the memory-wall and power-wall problems, collectively known as the von Neumann bottleneck. The memory-wall problem refers to the speed mismatch between the computing unit and the storage unit: the former is much faster than the latter, so memory speed limits overall computation speed. The power-wall problem refers to the frequent transfer of data between physically separated computing and storage units, which consumes a large amount of power. To overcome the von Neumann bottleneck, compute-in-memory, also known as storage-computation integration, has been proposed. A compute-in-memory circuit fuses data storage and computation, performing calculations directly inside the memory and thereby avoiding the memory-wall and power-wall problems caused by separating the computing and storage units. This elegantly resolves the bottleneck of the von Neumann architecture. This dissertation discusses a single-bit weighted multiplication and dot-product circuit structure based on a standard 6T SRAM cell array, which achieves good linearity and dynamic range. Building on this structure, a compute-in-memory circuit that supports 2-bit weight computation is proposed.
Compared with the single-bit compute-in-memory circuit, this circuit sacrifices some linearity, but it still performs multiplication and dot-product operations well, and the structure can potentially be extended to a compute-in-memory structure for arbitrary-bit weight operations.
author2 Kim Tae Hyoung
author_facet Kim Tae Hyoung
Sun, Shaofan
format Thesis-Master by Coursework
author Sun, Shaofan
author_sort Sun, Shaofan
title Design of SRAM-based in-memory computing for machine learning applications
title_short Design of SRAM-based in-memory computing for machine learning applications
title_full Design of SRAM-based in-memory computing for machine learning applications
title_fullStr Design of SRAM-based in-memory computing for machine learning applications
title_full_unstemmed Design of SRAM-based in-memory computing for machine learning applications
title_sort design of sram-based in-memory computing for machine learning applications
publisher Nanyang Technological University
publishDate 2021
url https://hdl.handle.net/10356/154268
_version_ 1772829162854154240