Deep learning acceleration: from quantization to in-memory computing
Deep learning has demonstrated high accuracy and efficiency in various applications. For example, Convolutional Neural Networks (CNNs), widely adopted in Computer Vision (CV), and Transformers, broadly applied in Natural Language Processing (NLP), are representative deep learning models. Deep learning m...
Main Author: Zhu, Shien
Other Authors: Liu, Weichen
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2022
Online Access: https://hdl.handle.net/10356/163448
Institution: Nanyang Technological University
Similar Items
- FAT: an in-memory accelerator with fast addition for ternary weight neural networks
  by: Zhu, Shien, et al. Published: (2022)
- iMAD: an in-memory accelerator for AdderNet with efficient 8-bit addition and subtraction operations
  by: Zhu, Shien, et al. Published: (2022)
- iMAT: energy-efficient in-memory acceleration for ternary neural networks with sparse dot product
  by: Zhu, Shien, et al. Published: (2023)
- Deep neuromorphic controller with dynamic topology for aerial robots
  by: Dhanetwal, Manish. Published: (2021)
- Embedded accelerators
  by: Thambipillai Srikanthan. Published: (2009)