Layer-wise learning framework for deep networks

As deep learning is applied ever more widely, empirically driven research has made significant progress in machine learning over the past few years. However, deep learning remains difficult to understand because deep neural networks are trained and used as black boxes. This lack of understanding hinders their development in applications where taking large risks is necessary and restricts their use where robust, dependable artificial intelligence is required. The aim of this dissertation is to train deep residual neural networks hierarchically, layer by layer, with stochastic gradient descent so that their behaviour can be better understood. A mathematical model of deep residual neural networks is first constructed in matrix form; the networks are then trained and tested layer-wise on several common image datasets using stochastic gradient descent. The case studies demonstrate a reasonable trade-off between layer-wise trainability and accuracy, and validate the applicability of the proposed layer-wise learning method for determining a suitable number of layers in real-world scenarios.
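
For readers unfamiliar with layer-wise training, the following Python/PyTorch sketch illustrates one common greedy layer-wise scheme consistent with the abstract: residual blocks are added and trained one at a time with stochastic gradient descent, with earlier blocks frozen and a temporary classifier head attached at each stage. The block structure, helper names (ResidualBlock, train_layerwise), toy data, and hyper-parameters are illustrative assumptions only; they are not the thesis' matrix-form formulation or its exact training procedure.

```python
# Minimal sketch of greedy layer-wise training of a small residual network with SGD.
# Illustrative only: not the thesis' exact method; data and hyper-parameters are toy values.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    """A plain fully-connected residual block: y = x + f(x)."""
    def __init__(self, dim: int):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)

    def forward(self, x):
        return x + self.fc2(F.relu(self.fc1(x)))


def train_layerwise(x, y, dim=64, num_classes=10, num_blocks=4, epochs=5, lr=0.1):
    """Greedily add and train one residual block at a time with SGD.

    x: (N, dim) float features, y: (N,) integer class labels.
    Returns the list of trained blocks and the classifier head.
    """
    blocks = nn.ModuleList()
    head = nn.Linear(dim, num_classes)           # shared head, re-trained at each stage
    for k in range(num_blocks):
        new_block = ResidualBlock(dim)
        blocks.append(new_block)
        # Only the newest block and the head receive gradient updates at this stage.
        params = list(new_block.parameters()) + list(head.parameters())
        opt = torch.optim.SGD(params, lr=lr, momentum=0.9)
        for _ in range(epochs):
            h = x
            with torch.no_grad():                # previously trained blocks stay frozen
                for b in blocks[:-1]:
                    h = b(h)
            h = new_block(h)
            loss = F.cross_entropy(head(h), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        print(f"stage {k + 1}: training loss {loss.item():.4f}")
    return blocks, head


if __name__ == "__main__":
    # Toy random data standing in for flattened image features.
    x = torch.randn(256, 64)
    y = torch.randint(0, 10, (256,))
    train_layerwise(x, y)
```

Monitoring the per-stage loss in a loop like this is one way to observe the trade-off the abstract mentions: if adding a further block no longer improves accuracy, that stage count can be taken as a reasonable number of layers for the task.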

Bibliographic Details
Main Author: Yu, Haoyao
Other Authors: Cheah Chien Chern
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2024
Subjects: Engineering; Layer-wise learning
Online Access: https://hdl.handle.net/10356/179107
Institution: Nanyang Technological University
Record Details
Record ID: sg-ntu-dr.10356-179107
School: School of Electrical and Electronic Engineering
Contact: ECCCheah@ntu.edu.sg
Degree: Master's degree
Citation: Yu, H. (2024). Layer-wise learning framework for deep networks. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/179107
Date Deposited: 2024-07-18
Last Updated: 2024-07-19
File Format: application/pdf
Collection: DR-NTU (NTU Library, Nanyang Technological University, Singapore)