Deep neural network compression: from sufficient to scarce data
The success of overparameterized deep neural networks (DNNs) makes it challenging to deploy these computationally expensive models on edge devices. Numerous model compression methods (pruning, quantization) have been proposed to overcome this challenge: pruning eliminates unimportant parameters, while...
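The two compression families named in the abstract can be illustrated with a minimal sketch. This is not the thesis's method, only a generic example of magnitude-based pruning and uniform quantization; the function names `magnitude_prune` and `uniform_quantize` are hypothetical.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (illustrative only)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def uniform_quantize(weights, num_bits):
    """Map weights onto 2**num_bits evenly spaced levels (illustrative only)."""
    levels = 2 ** num_bits
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / (levels - 1)
    return np.round((weights - w_min) / scale) * scale + w_min

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w, 0.5)      # half the entries set to zero
quantized = uniform_quantize(w, 4)    # at most 16 distinct values
```

Pruning yields a sparse tensor (fewer parameters to store or multiply), whereas quantization keeps every parameter but at lower precision; real compression pipelines typically fine-tune the network after either step to recover accuracy.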
Main Author: Chen, Shangyu
Other Authors: Sinno Jialin Pan
Format: Thesis (Doctor of Philosophy)
Language: English
Published: Nanyang Technological University, 2021
Online Access: https://hdl.handle.net/10356/146245
Institution: Nanyang Technological University
Similar Items
- Deep neural network compression for pixel-level vision tasks
  by: He, Wei; Published: (2021)
- Deep neural networks for identifying causal relations in texts
  by: Chen, Siyuan; Published: (2023)
- Hybrid deep neural network and deep reinforcement learning for algorithmic finance
  by: Ooi, Min Hui; Published: (2022)
- Deep neural networks for time series classification
  by: Cheng, Wen Xin; Published: (2023)
- Using deep neural networks for chess position evaluation
  by: Phang, Benito Yan Feng; Published: (2023)