Learning to prune deep neural networks via layer-wise optimal brain surgeon

How to develop slim and accurate deep neural networks has become crucial for real-world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second-order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. By controlling layer-wise errors properly, one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods. Codes of our work are released at: https://github.com/csyhhu/L-OBS.
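The abstract compresses the pruning rule into one sentence, so a sketch may help. Below is a minimal NumPy illustration of the Optimal Brain Surgeon criterion applied layer-wise, as the abstract describes: for a fully connected layer with collected inputs X, the layer-wise Hessian of a least-squares reconstruction error is H = (1/n) X^T X, each weight's saliency is w_q^2 / (2 [H^{-1}]_{qq}), and removing w_q is compensated by the update delta_w = -(w_q / [H^{-1}]_{qq}) H^{-1} e_q. This is an illustrative sketch under those assumptions, not the authors' released implementation (see https://github.com/csyhhu/L-OBS); all function and variable names here are hypothetical.

```python
import numpy as np

def lobs_prune_layer(W, X, num_prune, damp=1e-6):
    """Prune one fully connected layer with the OBS criterion (sketch).

    W: (d_in, d_out) weight matrix of the layer.
    X: (n, d_in) input activations collected for this layer.
    num_prune: number of weights to remove per output unit.
    damp: small ridge term so the Hessian inverse is well conditioned.
    """
    n, d_in = X.shape
    # Layer-wise Hessian of the least-squares reconstruction error,
    # shared by every output unit of the layer: H = (1/n) X^T X.
    H = X.T @ X / n + damp * np.eye(d_in)
    H_inv = np.linalg.inv(H)

    W = W.copy()
    mask = np.ones_like(W)
    for j in range(W.shape[1]):          # each output unit is pruned independently
        for _ in range(num_prune):
            w = W[:, j]
            # OBS saliency L_q = w_q^2 / (2 [H^-1]_qq); prune the least salient
            # weight, skipping positions that are already masked out.
            saliency = np.where(mask[:, j] == 1,
                                w ** 2 / (2 * np.diag(H_inv)), np.inf)
            q = int(np.argmin(saliency))
            # Compensating update on the remaining weights:
            # delta_w = -(w_q / [H^-1]_qq) * H^-1 e_q.
            # For brevity this sketch reuses the same H_inv after each removal;
            # a full implementation would update the inverse Hessian instead.
            W[:, j] -= (w[q] / H_inv[q, q]) * H_inv[:, q]
            W[q, j] = 0.0
            mask[q, j] = 0.0
    return W * mask, mask
```

In practice X would be gathered by running a batch of training data through the network and recording this layer's inputs; the paper's bound (final accuracy drop controlled by a linear combination of the layer-wise reconstruction errors) is what then justifies only a light retraining pass after pruning.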


Bibliographic Details
Main Authors: Dong, Xin, Chen, Shangyu, Pan, Sinno Jialin
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language: English
Published: 2020
Subjects: Engineering::Computer science and engineering; Deep Neural Networks; Layer-wise Optimal Brain Surgeon
Online Access: https://hdl.handle.net/10356/137659
Institution: Nanyang Technological University
id sg-ntu-dr.10356-137659
record_format dspace
conference 31st Conference on Neural Information Processing Systems (NIPS 2017)
funding MOE (Min. of Education, S’pore)
version Published version
date_available 2020-04-08T01:57:01Z
date_issued 2017
type Conference Paper
citation Dong, X., Chen, S., & Pan, S. J. (2017). Learning to prune deep neural networks via layer-wise optimal brain surgeon. Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017). https://hdl.handle.net/10356/137659
arxiv 1705.07565
rights © 2017 Neural Information Processing Systems. All rights reserved. This paper was published in the Proceedings of the 31st Conference on Neural Information Processing Systems and is made available with permission of Neural Information Processing Systems.
mimetype application/pdf
institution Nanyang Technological University
building NTU Library
country Singapore
collection DR-NTU
language English
topic Engineering::Computer science and engineering
Deep Neural Networks
Layer-wise Optimal Brain Surgeon
description How to develop slim and accurate deep neural networks has become crucial for real-world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second-order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. By controlling layer-wise errors properly, one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods. Codes of our work are released at: https://github.com/csyhhu/L-OBS.
author2 School of Computer Science and Engineering
format Conference or Workshop Item
author Dong, Xin
Chen, Shangyu
Pan, Sinno Jialin
title Learning to prune deep neural networks via layer-wise optimal brain surgeon
publishDate 2020
url https://hdl.handle.net/10356/137659
_version_ 1681058384512548864