Building highly efficient neural networks through weight pruning

Abstract: Neural network pruning, the task of reducing the size of a neural network architecture by removing neurons and connections (links) from the network, has been a major focus of research in recent years. Pruning removes links and neurons from a network according to a chosen set of criteria, which helps save on the cost of creating and deploying large neural networks without compromising too much of their accuracy and generalisation ability. This project reports an experimental study of weight pruning on benchmark datasets using various neural networks.
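The abstract describes pruning as removing links and neurons from a network according to a chosen set of criteria. As a minimal sketch of one common criterion, magnitude-based pruning, the Python snippet below zeroes out the smallest-magnitude entries of a single weight matrix; the layer shape and the 80% sparsity target are illustrative assumptions, not details taken from the project.

    # Minimal sketch of magnitude-based weight pruning.
    # The layer shape and sparsity level below are illustrative assumptions.
    import numpy as np

    def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
        """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
        threshold = np.quantile(np.abs(weights), sparsity)
        mask = np.abs(weights) >= threshold   # keep only the larger-magnitude weights
        return weights * mask

    # Example: prune a hypothetical 256x128 dense layer to 80% sparsity.
    rng = np.random.default_rng(0)
    layer_weights = rng.normal(size=(256, 128))
    pruned = prune_by_magnitude(layer_weights, sparsity=0.8)
    print(f"Fraction of weights kept: {np.count_nonzero(pruned) / pruned.size:.0%}")

In practice the same idea is applied per layer or across the whole network, and the pruned model is typically fine-tuned afterwards to recover accuracy.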


Bibliographic Details
Main Author: Low, Xuan Hui
Other Authors: Lihui Chen
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Online Access:https://hdl.handle.net/10356/157768
Institution: Nanyang Technological University
id sg-ntu-dr.10356-157768
record_format dspace
spelling sg-ntu-dr.10356-157768 2023-07-07T19:06:09Z
Building highly efficient neural networks through weight pruning
Low, Xuan Hui; Lihui Chen; Manas Gupta; ELHCHEN@ntu.edu.sg
School of Electrical and Electronic Engineering; A*STAR Institute for Infocomm Research (I2R)
Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Bachelor of Engineering (Electrical and Electronic Engineering)
2022-05-23T04:49:06Z 2022 Final Year Project (FYP)
Low, X. H. (2022). Building highly efficient neural networks through weight pruning. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/157768
en application/pdf Nanyang Technological University
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
description Abstract: Neural network pruning, the task of reducing the size of a neural network architecture by removing neurons and connections (links) from the network, has been a major focus of research in recent years. Pruning removes links and neurons from a network according to a chosen set of criteria, which helps save on the cost of creating and deploying large neural networks without compromising too much of their accuracy and generalisation ability. This project reports an experimental study of weight pruning on benchmark datasets using various neural networks.
author2 Lihui Chen
format Final Year Project
author Low, Xuan Hui
title Building highly efficient neural networks through weight pruning
publisher Nanyang Technological University
publishDate 2022
url https://hdl.handle.net/10356/157768
_version_ 1772827691802689536