Neural network accelerator design using computing in memory

Image processing has become an extremely popular field of application for neural networks. Convolution is a basic and critical operation for pattern recognition, breaking images down into feature maps. Similarly, “deconvolution” is a critical step for constructing new data points out of given inputs, for tasks such as image recovery in generative adversarial networks. Among emerging technologies, Resistive Random-Access Memory (ReRAM) based architectures have been widely tested and deliver good performance in accelerating the convolution operation through a process of “Computing-in-Memory”. The deconvolution operation, however, still suffers from high overheads due to significant redundancy (>80% zero-valued multiplications) and/or incompatibility with existing ReRAM-based accelerator designs. In this project, we explore various state-of-the-art Computing-in-Memory accelerator designs that have been proposed, and focus our efforts on evaluating and implementing a design for a deconvolution accelerator called RED.
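
For context on the “Computing-in-Memory” process the abstract refers to: a ReRAM crossbar stores a weight matrix as cell conductances, applies inputs as word-line voltages, and reads each bit-line current as an analog dot product, so a convolution lowered to matrix-vector products executes inside the memory array itself. The sketch below is a minimal, idealized NumPy model of that idea; the array sizes, value ranges, and variable names are illustrative assumptions, not taken from the report.

```python
import numpy as np

# Idealized ReRAM crossbar: weights live in the array as conductances G,
# an input patch is applied as word-line voltages v, and each bit-line
# current is an analog dot product (Ohm's law per cell, Kirchhoff's
# current law per column): i = G^T @ v.
# Sizes are illustrative: a 3x3 kernel unrolls to 9 rows; 4 output
# channels occupy 4 columns, all evaluated in one parallel read.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(9, 4))   # programmed conductances (S)
v = rng.uniform(0.0, 0.3, size=9)          # one unrolled input patch (V)

bitline_currents = G.T @ v                 # 4 dot products in one step (A)
print(bitline_currents)
```

Sliding the convolution window then amounts to streaming successive input patches as voltage vectors, which is why convolution maps so naturally onto these arrays.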

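The >80% redundancy figure can be made concrete: a stride-s deconvolution is equivalent to inserting s-1 zeros between input pixels (plus edge padding) and running an ordinary convolution, so most multiply operands are guaranteed zeros. The following sketch counts them for assumed DCGAN-like layer dimensions (4x4 input, 5x5 kernel, stride 2); the specific sizes are my assumptions, not taken from the report.

```python
import numpy as np

# Zero-insertion view of deconvolution (transposed convolution):
# a stride-s layer equals inserting s-1 zeros between input pixels,
# padding the border, and running a plain convolution over the result.
H = W = 4        # input feature map size (assumed, DCGAN-like)
K = 5            # kernel size (assumed)
s = 2            # stride / upsampling factor
pad = K - 1      # full padding around the zero-inserted map

side = s * H - (s - 1) + 2 * pad
up = np.zeros((side, side))
up[pad:-pad:s, pad:-pad:s] = 1.0    # real input pixels; the rest stay 0

# Slide the K x K window and count multiply operands that are zero.
zero_mults = total_mults = 0
for i in range(side - K + 1):
    for j in range(side - K + 1):
        zero_mults += np.count_nonzero(up[i:i + K, j:j + K] == 0)
        total_mults += K * K
print(f"{100 * zero_mults / total_mults:.1f}% of multiplications are zero")
```

With these dimensions the count comes out near 87%, in line with the abstract's >80% claim; larger strides push it higher still.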

Bibliographic Details
Main Author: Seah, Leon Shin Yang
Other Authors: Weichen Liu
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2021
Subjects: Engineering::Computer science and engineering::Hardware::Memory structures
Online Access:https://hdl.handle.net/10356/148415
Institution: Nanyang Technological University
School: School of Computer Science and Engineering
Degree: Bachelor of Engineering (Computer Engineering)
Project Code: SCSE20-0520
Citation: Seah, L. S. Y. (2021). Neural network accelerator design using computing in memory. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/148415