An energy-efficient convolution unit for depthwise separable convolutional neural networks

Bibliographic Details
Main Authors: Chong, Yi Sheng, Goh, Wang Ling, Ong, Yew-Soon, Nambiar, Vishnu P., Do, Anh Tuan
Other Authors: Interdisciplinary Graduate School (IGS)
Format: Conference or Workshop Item
Language: English
Published: 2021
Subjects:
Online Access: https://hdl.handle.net/10356/152096
Physical Description
Summary: High-performance but computationally expensive Convolutional Neural Networks (CNNs) require both algorithmic and custom hardware improvements to reduce model size and to improve energy efficiency for edge computing applications. Recent CNN architectures employ depthwise separable convolution to reduce the total number of weights and MAC operations. However, depthwise separable convolution workloads do not run efficiently on existing CNN accelerators. This paper proposes an energy-efficient CONV unit for pointwise and depthwise operations. The CONV unit employs a weight-stationary dataflow to achieve high efficiency. Row partial sum reduction is applied to increase parallelism in pointwise convolution, thereby reducing the memory requirements for output partial sums. Our design achieves a maximum efficiency of 3.17 TOPS/W at 0.85 V in 40 nm CMOS, making it well-suited for energy-constrained edge computing applications.
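
The summary states that depthwise separable convolution reduces the total number of weights and MAC operations relative to standard convolution. The sketch below is not taken from the paper; it is a minimal illustration of that general reduction, counting MACs for a standard KxK convolution versus its depthwise-plus-pointwise replacement. The layer dimensions and function names are hypothetical placeholders.

# Illustrative sketch (not the paper's design): MAC-count comparison between a
# standard convolution layer and a depthwise separable equivalent.

def standard_conv_macs(h, w, c_in, c_out, k):
    # Standard KxK convolution over an HxW feature map: every output channel
    # convolves all input channels.
    return h * w * c_in * c_out * k * k

def depthwise_separable_macs(h, w, c_in, c_out, k):
    # KxK depthwise convolution (one filter per input channel) followed by a
    # 1x1 pointwise convolution that mixes channels.
    depthwise = h * w * c_in * k * k
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

if __name__ == "__main__":
    # Hypothetical layer shape chosen only for illustration.
    h, w, c_in, c_out, k = 56, 56, 128, 128, 3
    std = standard_conv_macs(h, w, c_in, c_out, k)
    dws = depthwise_separable_macs(h, w, c_in, c_out, k)
    print(f"standard convolution:        {std:,} MACs")
    print(f"depthwise separable:         {dws:,} MACs ({std / dws:.1f}x fewer)")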