Building highly efficient neural networks through weight pruning
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online Access: https://hdl.handle.net/10356/157768
Institution: Nanyang Technological University
Summary: Neural network pruning, the task of reducing the size of a neural network architecture by removing neurons and connections (links) from the network, has been the focus of a great deal of work in recent years. Pruning shrinks a network by removing links and neurons according to a chosen set of criteria. This helps cut the cost of building and deploying large corporate neural networks without compromising much in prediction accuracy or generalisation ability. This project reports an experimental study of pruning on benchmark datasets using various neural networks.
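The record does not state which pruning criteria the project applies. Purely as a minimal sketch of criterion-based weight removal, the example below prunes the smallest-magnitude weights of a single layer (magnitude pruning, a common baseline); the function name, sparsity value, and layer shape are illustrative assumptions, not the project's actual method.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the given fraction of smallest-magnitude weights.

    Illustrative sketch of one common pruning criterion (weight magnitude);
    the catalog record does not specify the criteria used in the project.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold            # keep only larger weights
    return weights * mask

# Example: prune 90% of a randomly initialised layer's weights.
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 128))
w_pruned = magnitude_prune(w, sparsity=0.9)
print(f"Remaining nonzero weights: {np.count_nonzero(w_pruned) / w.size:.1%}")
```

In this sketch the pruning threshold is chosen per layer from the distribution of weight magnitudes; other criteria (e.g. gradient- or activation-based scores) would replace the magnitude computation while keeping the same masking step.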