Comparing scalability of RVFL network
The random vector functional link (RVFL) network, a randomization-based neural network, has been gaining significant traction because it overcomes shortcomings of conventional models. It has been successfully applied to a diverse range of tasks such as classification, regression, visual tracking, and forecasting...
Saved in:
Main Author: | Yeo, Chester Jie Sheng |
---|---|
Other Authors: | Jiang Xudong |
Format: | Final Year Project |
Language: | English |
Published: | Nanyang Technological University, 2022 |
Subjects: | Engineering::Electrical and electronic engineering |
Online Access: | https://hdl.handle.net/10356/163582 |
Institution: | Nanyang Technological University |
Language: | English |
id |
sg-ntu-dr.10356-163582 |
---|---|
record_format |
dspace |
spelling |
sg-ntu-dr.10356-163582 2023-07-07T18:57:17Z Comparing scalability of RVFL network Yeo, Chester Jie Sheng Jiang Xudong School of Electrical and Electronic Engineering EXDJiang@ntu.edu.sg Engineering::Electrical and electronic engineering The random vector functional link (RVFL) network, a randomization-based neural network, has been gaining significant traction because it overcomes shortcomings of conventional models. It has been successfully applied to a diverse range of tasks such as classification, regression, visual tracking, and forecasting. Randomization-based neural networks employ a closed-form solution to optimize their parameters, so they are trained quickly in a single pass by feeding all samples to the model at once, unlike back-propagation-trained neural networks that require multiple iterations. RVFL is a typical representative, with a single hidden layer and universal approximation ability. Its weights and biases are randomly generated, and its distinguishing feature is the direct link that carries information from the input layer to the output layer. This approach breaks down when the training dataset is very large. This project evaluates three approaches to manage the problem: iterative learning, online learning, and vector quantization. Through these methods, we aim to address the issue of scalability in RVFL. The experimental results show that the conventional least-squares classifier handles this problem best, highlighting that scalability is not a strong suit of RVFL; vector quantization is the closest performer and an area for further research on RVFL. Bachelor of Engineering (Electrical and Electronic Engineering) 2022-12-12T03:11:25Z 2022-12-12T03:11:25Z 2022 Final Year Project (FYP) Yeo, C. J. S. (2022). Comparing scalability of RVFL network. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/163582 https://hdl.handle.net/10356/163582 en A1225-212 application/pdf Nanyang Technological University |
institution |
Nanyang Technological University |
building |
NTU Library |
continent |
Asia |
country |
Singapore |
content_provider |
NTU Library |
collection |
DR-NTU |
language |
English |
topic |
Engineering::Electrical and electronic engineering |
spellingShingle |
Engineering::Electrical and electronic engineering Yeo, Chester Jie Sheng Comparing scalability of RVFL network |
description |
The random vector functional link (RVFL) network, a randomization-based neural network, has been gaining significant traction because it overcomes shortcomings of conventional models. It has been successfully applied to a diverse range of tasks such as classification, regression, visual tracking, and forecasting. Randomization-based neural networks employ a closed-form solution to optimize their parameters, so they are trained quickly in a single pass by feeding all samples to the model at once, unlike back-propagation-trained neural networks that require multiple iterations. RVFL is a typical representative, with a single hidden layer and universal approximation ability. Its weights and biases are randomly generated, and its distinguishing feature is the direct link that carries information from the input layer to the output layer.
This approach breaks down when the training dataset is very large. This project evaluates three approaches to manage the problem: iterative learning, online learning, and vector quantization. Through these methods, we aim to address the issue of scalability in RVFL. The experimental results show that the conventional least-squares classifier handles this problem best, highlighting that scalability is not a strong suit of RVFL; vector quantization is the closest performer and an area for further research on RVFL. |
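As an illustrative aside (not part of the catalogued record or the thesis itself): the closed-form, single-pass training the abstract describes can be sketched as below. The function names, the tanh activation, and the ridge regularization term are assumptions made for the example only.

```python
import numpy as np

def train_rvfl(X, Y, n_hidden=100, reg=1e-3, seed=None):
    """Minimal RVFL sketch: random hidden layer plus a direct input-to-output
    link, with output weights solved once by regularized least squares."""
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape

    # Hidden-layer weights and biases are randomly generated and then fixed.
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = np.tanh(X @ W + b)          # random hidden-layer features

    # Direct link: concatenate raw inputs with the hidden features.
    D = np.hstack([X, H])

    # Single closed-form solve for the output weights (no iterative training).
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ Y)
    return W, b, beta

def predict_rvfl(X, W, b, beta):
    D = np.hstack([X, np.tanh(X @ W + b)])
    return D @ beta

if __name__ == "__main__":
    # Tiny synthetic check: 200 samples, 5 features, one-hot targets for 3 classes.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    labels = rng.integers(0, 3, size=200)
    Y = np.eye(3)[labels]
    W, b, beta = train_rvfl(X, Y, n_hidden=50, reg=1e-2, seed=1)
    print(predict_rvfl(X, W, b, beta).argmax(axis=1)[:10])
```

The single solve requires the full design matrix in memory at once, which is the scalability limitation that the iterative, online, and vector-quantization approaches examined in this project are meant to relieve.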
author2 |
Jiang Xudong |
author_facet |
Jiang Xudong Yeo, Chester Jie Sheng |
format |
Final Year Project |
author |
Yeo, Chester Jie Sheng |
author_sort |
Yeo, Chester Jie Sheng |
title |
Comparing scalability of RVFL network |
title_short |
Comparing scalability of RVFL network |
title_full |
Comparing scalability of RVFL network |
title_fullStr |
Comparing scalability of RVFL network |
title_full_unstemmed |
Comparing scalability of RVFL network |
title_sort |
comparing scalability of rvfl network |
publisher |
Nanyang Technological University |
publishDate |
2022 |
url |
https://hdl.handle.net/10356/163582 |
_version_ |
1772828195734683648 |