Framework for mapping computing-in-memory to basic neural networks

Bibliographic Details
Main Author: Shang, Hongyang
Other Authors: Kim Tae Hyoung
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University 2022
Subjects:
Online Access: https://hdl.handle.net/10356/159014
機構: Nanyang Technological University
Item Description
Summary: With the advent of the big-data era, the application of neural networks on edge devices has received extensive attention. However, the traditional Von Neumann architecture suffers from high latency, low throughput, and poor energy efficiency on data-intensive algorithms, so developing new computing architectures is of great significance. The computing-in-memory (CIM) architecture has been proposed as a practical neural network accelerator, with a natural advantage for multiply-accumulate (MAC) operations owing to its parallel computing structure. At present, most research on CIM chips focuses on the development of memory elements and the design of computing circuits, while less work has been done on automated tools that support CIM chip design. Therefore, this paper proposes a software framework for mapping CIM to basic neural networks. The mapping framework is a semi-automatic data-mapping workflow composed of two sub-tasks: neural network quantization and neural network mapping. It can achieve quantization with arbitrary bit-width precision and apply a flexible data-mapping scheme according to the design of the CIM macros. This work can help CIM chip developers verify the computation results of their chips and promote the development of CIM chip automation tools.
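
The sketch below is only a rough illustration of the two sub-tasks named in the summary (quantization to an arbitrary bit width, then tiling of the quantized weights onto macro-sized sub-arrays), not the thesis' actual workflow. It assumes a symmetric uniform quantizer and a hypothetical CIM macro of MACRO_ROWS x MACRO_COLS cells; the real framework's quantization and mapping schemes may differ.

import numpy as np

# Assumed macro array size (hypothetical; not taken from the thesis).
MACRO_ROWS, MACRO_COLS = 64, 64

def quantize(weights, bits):
    """Uniform symmetric quantization to an arbitrary bit width."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(weights).max() / qmax
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

def map_to_macros(q_weights):
    """Tile a quantized weight matrix onto macro-sized sub-arrays.

    Each tile corresponds to one CIM macro; a MAC over the full matrix is
    the sum of the partial MACs produced by the individual macros.
    """
    rows, cols = q_weights.shape
    tiles = {}
    for r in range(0, rows, MACRO_ROWS):
        for c in range(0, cols, MACRO_COLS):
            tiles[(r // MACRO_ROWS, c // MACRO_COLS)] = \
                q_weights[r:r + MACRO_ROWS, c:c + MACRO_COLS]
    return tiles

# Usage: quantize a random layer to 4 bits, map it onto macros, and check
# that the tiled (per-macro) MAC result matches a direct matrix product --
# the kind of result verification the framework is meant to support.
w = np.random.randn(128, 128).astype(np.float32)
x = np.random.randn(128).astype(np.float32)
qw, scale = quantize(w, bits=4)
tiles = map_to_macros(qw)

ref = qw @ x
acc = np.zeros(128)
for (tr, tc), tile in tiles.items():
    r0, c0 = tr * MACRO_ROWS, tc * MACRO_COLS
    acc[r0:r0 + tile.shape[0]] += tile @ x[c0:c0 + tile.shape[1]]
assert np.allclose(acc, ref)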