An energy-efficient convolution unit for depthwise separable convolutional neural networks

Bibliographic Details
Main Authors: Chong, Yi Sheng, Goh, Wang Ling, Ong, Yew-Soon, Nambiar, Vishnu P., Do, Anh Tuan
Other Authors: Interdisciplinary Graduate School (IGS)
Format: Conference or Workshop Item
Language: English
Published: 2021
Subjects:
Online Access: https://hdl.handle.net/10356/152096
Institution: Nanyang Technological University
Description
Summary: High-performance but computationally expensive Convolutional Neural Networks (CNNs) require both algorithmic and custom hardware improvements to reduce model size and improve energy efficiency for edge computing applications. Recent CNN architectures employ depthwise separable convolution to reduce the total number of weights and MAC operations. However, depthwise separable convolution workloads do not run efficiently on existing CNN accelerators. This paper proposes an energy-efficient CONV unit for pointwise and depthwise operations. The CONV unit uses a weight-stationary dataflow to achieve high efficiency. Row partial sum reduction is employed to increase parallelism in pointwise convolution, thereby reducing the memory requirements for output partial sums. The design achieves a maximum efficiency of 3.17 TOPS/W at 0.85 V in 40 nm CMOS, making it well suited for energy-constrained edge computing applications.
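
For context, the sketch below illustrates the MAC-count reduction that depthwise separable convolution offers over a standard convolution layer, which is the saving the abstract refers to. The layer dimensions and function names are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch (not from the paper): MAC counts for a standard convolution
# versus a depthwise separable convolution of the same shape.

def standard_conv_macs(h, w, c_in, c_out, k):
    """MACs for a standard k x k convolution over an h x w x c_in input."""
    return h * w * c_in * c_out * k * k

def depthwise_separable_macs(h, w, c_in, c_out, k):
    """MACs for a k x k depthwise convolution followed by a 1x1 pointwise convolution."""
    depthwise = h * w * c_in * k * k   # one k x k filter per input channel
    pointwise = h * w * c_in * c_out   # 1x1 convolution mixing channels
    return depthwise + pointwise

if __name__ == "__main__":
    # Hypothetical MobileNet-like layer dimensions.
    h, w, c_in, c_out, k = 56, 56, 128, 128, 3
    std = standard_conv_macs(h, w, c_in, c_out, k)
    dws = depthwise_separable_macs(h, w, c_in, c_out, k)
    print(f"standard: {std:,} MACs, depthwise separable: {dws:,} MACs, "
          f"reduction: {std / dws:.1f}x")
```

For these assumed dimensions the reduction factor is roughly 1/c_out + 1/k^2, i.e. close to 9x for 3x3 kernels, which is why accelerators that handle depthwise and pointwise layers efficiently matter for edge deployment.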