Adaptive loss-aware quantization for multi-bit networks
We investigate the compression of deep neural networks by quantizing their weights and activations into multiple binary bases, known as multi-bit networks (MBNs), which accelerate inference and reduce storage for deployment on low-resource mobile and embedded platforms. We propose Adaptive Loss-aware Quantization (ALQ), a new MBN quantization pipeline that is able to achieve an average bitwidth below one bit without notable loss in inference accuracy. Unlike previous MBN quantization solutions that train a quantizer by minimizing the error in reconstructing the full-precision weights, ALQ directly minimizes the quantization-induced error on the loss function, involving neither gradient approximation nor full-precision maintenance. ALQ also exploits strategies including adaptive bitwidth, smooth bitwidth reduction, and iterative trained quantization to allow a smaller network size without loss in accuracy. Experimental results on popular image datasets show that ALQ outperforms state-of-the-art compressed networks in terms of both storage and accuracy.
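To make the quantization scheme described in the abstract more concrete, below is a minimal sketch in Python/NumPy of the conventional reconstruction-based multi-bit decomposition that ALQ is contrasted against: a weight vector is greedily approximated as a sum of scaled binary bases, w ≈ sum_i alpha_i * b_i with b_i in {-1, +1}^n. The function names (`multibit_quantize`, `dequantize`) and the greedy residual-fitting heuristic are illustrative assumptions, not the authors' ALQ pipeline, which instead chooses bases and per-group bitwidths by minimizing the training loss directly.

```python
import numpy as np

def multibit_quantize(w, num_bits=2):
    """Greedily fit `num_bits` binary bases to a weight vector.

    Approximates w ~= sum_i alpha_i * b_i with b_i in {-1, +1}^n by
    minimizing the reconstruction error of each residual in turn.
    Illustration only: ALQ itself selects bases and bitwidths by
    minimizing the quantization-induced error on the loss function.
    """
    residual = np.asarray(w, dtype=np.float64).copy()
    alphas, bases = [], []
    for _ in range(num_bits):
        b = np.where(residual >= 0, 1.0, -1.0)   # binary basis for this step
        alpha = np.abs(residual).mean()          # least-squares scale for b
        alphas.append(alpha)
        bases.append(b)
        residual -= alpha * b                    # next basis fits what is left
    return np.array(alphas), np.stack(bases)

def dequantize(alphas, bases):
    # Reconstruct the approximate weights from the scaled binary bases.
    return (alphas[:, None] * bases).sum(axis=0)

# Example: a 2-bit (two binary bases) approximation of a random weight vector.
w = np.random.randn(8)
alphas, bases = multibit_quantize(w, num_bits=2)
print("original:     ", np.round(w, 3))
print("reconstructed:", np.round(dequantize(alphas, bases), 3))
```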
Main Authors: QU, Zhongnan; ZHOU, Zimu; CHENG, Yun; THIELE, Lothar
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2020
Subjects: Quantization (signal); Optimization; Neural networks; Adaptive systems; Microprocessors; Training; Tensile stress; Databases and Information Systems; Numerical Analysis and Scientific Computing
Online Access: https://ink.library.smu.edu.sg/sis_research/5251 https://ink.library.smu.edu.sg/context/sis_research/article/6254/viewcontent/cvpr20_qu.pdf
Institution: Singapore Management University
Language: English
id: sg-smu-ink.sis_research-6254
record_format: dspace
spelling: sg-smu-ink.sis_research-6254 2021-01-28T07:38:58Z Adaptive loss-aware quantization for multi-bit networks QU, Zhongnan; ZHOU, Zimu; CHENG, Yun; THIELE, Lothar. We investigate the compression of deep neural networks by quantizing their weights and activations into multiple binary bases, known as multi-bit networks (MBNs), which accelerate inference and reduce storage for deployment on low-resource mobile and embedded platforms. We propose Adaptive Loss-aware Quantization (ALQ), a new MBN quantization pipeline that is able to achieve an average bitwidth below one bit without notable loss in inference accuracy. Unlike previous MBN quantization solutions that train a quantizer by minimizing the error in reconstructing the full-precision weights, ALQ directly minimizes the quantization-induced error on the loss function, involving neither gradient approximation nor full-precision maintenance. ALQ also exploits strategies including adaptive bitwidth, smooth bitwidth reduction, and iterative trained quantization to allow a smaller network size without loss in accuracy. Experimental results on popular image datasets show that ALQ outperforms state-of-the-art compressed networks in terms of both storage and accuracy. 2020-06-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/5251 info:doi/10.1109/CVPR42600.2020.00801 https://ink.library.smu.edu.sg/context/sis_research/article/6254/viewcontent/cvpr20_qu.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Quantization (signal); Optimization; Neural networks; Adaptive systems; Microprocessors; Training; Tensile stress; Databases and Information Systems; Numerical Analysis and Scientific Computing
institution: Singapore Management University
building: SMU Libraries
continent: Asia
country: Singapore
content_provider: SMU Libraries
collection: InK@SMU
language: English
topic: Quantization (signal); Optimization; Neural networks; Adaptive systems; Microprocessors; Training; Tensile stress; Databases and Information Systems; Numerical Analysis and Scientific Computing
spellingShingle: Quantization (signal); Optimization; Neural networks; Adaptive systems; Microprocessors; Training; Tensile stress; Databases and Information Systems; Numerical Analysis and Scientific Computing; QU, Zhongnan; ZHOU, Zimu; CHENG, Yun; THIELE, Lothar; Adaptive loss-aware quantization for multi-bit networks
description: We investigate the compression of deep neural networks by quantizing their weights and activations into multiple binary bases, known as multi-bit networks (MBNs), which accelerate inference and reduce storage for deployment on low-resource mobile and embedded platforms. We propose Adaptive Loss-aware Quantization (ALQ), a new MBN quantization pipeline that is able to achieve an average bitwidth below one bit without notable loss in inference accuracy. Unlike previous MBN quantization solutions that train a quantizer by minimizing the error in reconstructing the full-precision weights, ALQ directly minimizes the quantization-induced error on the loss function, involving neither gradient approximation nor full-precision maintenance. ALQ also exploits strategies including adaptive bitwidth, smooth bitwidth reduction, and iterative trained quantization to allow a smaller network size without loss in accuracy. Experimental results on popular image datasets show that ALQ outperforms state-of-the-art compressed networks in terms of both storage and accuracy.
format: text
author: QU, Zhongnan; ZHOU, Zimu; CHENG, Yun; THIELE, Lothar
author_facet: QU, Zhongnan; ZHOU, Zimu; CHENG, Yun; THIELE, Lothar
author_sort: QU, Zhongnan
title: Adaptive loss-aware quantization for multi-bit networks
title_short: Adaptive loss-aware quantization for multi-bit networks
title_full: Adaptive loss-aware quantization for multi-bit networks
title_fullStr: Adaptive loss-aware quantization for multi-bit networks
title_full_unstemmed: Adaptive loss-aware quantization for multi-bit networks
title_sort: adaptive loss-aware quantization for multi-bit networks
publisher: Institutional Knowledge at Singapore Management University
publishDate: 2020
url: https://ink.library.smu.edu.sg/sis_research/5251 https://ink.library.smu.edu.sg/context/sis_research/article/6254/viewcontent/cvpr20_qu.pdf
_version_: 1770575349671788544