XOR-Net: an efficient computation pipeline for binary neural network inference on edge devices
Accelerating the inference of Convolutional Neural Networks (CNNs) on edge devices is essential because of the small memory size and limited computation capability of these devices. Network quantization methods such as XNOR-Net, Bi-Real-Net, and XNOR-Net++ reduce the memory usage of CNNs by binarizing the CN...
Main Authors: Zhu, Shien; Duong, Luan H. K.; Liu, Weichen
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language: English
Published: 2020
Online Access: https://hdl.handle.net/10356/145503 ; https://doi.org/10.21979/N9/XEH3D1
Institution: Nanyang Technological University
Similar Items
- TAB: unified and optimized ternary, binary and mixed-precision neural network inference on the edge
  by: Zhu, Shien, et al.
  Published: (2022)
- EdgeNAS: discovering efficient neural architectures for edge systems
  by: Luo, Xiangzhong, et al.
  Published: (2023)
- Edge detection using a neural network
  by: Srinivasan, V., et al.
  Published: (2014)
- Non-volatile in-memory computing with skyrmions and phase change memories
  by: Miriyala Venkata Pavan Kumar
  Published: (2021)
- Parameterized DNN design for identifying the resource limitations of edge deep learning hardware
  by: Aung, Shin Thant
  Published: (2024)