Comparing scalability of RVFL network


Bibliographic Details
Main Author: Yeo, Chester Jie Sheng
Other Authors: Jiang Xudong
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Subjects:
Online Access:https://hdl.handle.net/10356/163582
Institution: Nanyang Technological University
Description
Summary: The random vector functional link (RVFL) network, a randomization-based neural network, has been gaining significant traction because it overcomes several shortcomings of conventional models. It has been successfully applied to a diverse range of tasks such as classification, regression, visual tracking, and forecasting. Randomization-based neural networks employ a closed-form solution to optimize parameters, which means they can be trained quickly in a single pass by feeding all samples to the model at once, unlike back-propagation-trained neural networks that require multiple iterations. The RVFL network is a typical representative: a single-hidden-layer network with universal approximation capability. Its hidden weights and biases are randomly generated, and its uniqueness lies in the direct link that carries information from the input layer to the output layer. However, this approach does not scale when the training dataset is very large. This project evaluates three approaches to manage this problem: iterative learning, online learning, and vector quantization. Through the proposed methods, we hope to address the issue of scalability in RVFL. The experimental results show that the conventional least squares classifier is the best way to solve this problem and highlight that scalability is not a strong suit of RVFL, with vector quantization being the closest performer and the most promising direction for further work with RVFL.
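The mechanism the summary describes (fixed random hidden weights, a direct input-to-output link, and output weights solved in closed form by regularized least squares) can be sketched in a few lines. This is a minimal illustrative sketch, not the thesis code; all function names, shapes, and parameter values below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rvfl_fit(X, Y, n_hidden=50, reg=1e-3):
    """Fit an RVFL network: random fixed hidden layer + direct link,
    output weights solved once in closed form (ridge least squares)."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random hidden weights (fixed)
    b = rng.standard_normal(n_hidden)                # random biases (fixed)
    H = np.tanh(X @ W + b)                           # hidden-layer features
    D = np.hstack([H, X])                            # direct link: append raw inputs
    # Closed-form solve for output weights beta: (D^T D + reg I) beta = D^T Y
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ Y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    H = np.tanh(X @ W + b)
    return np.hstack([H, X]) @ beta

# Toy regression problem: one pass over all samples trains the model.
X = rng.standard_normal((200, 5))
Y = np.sin(X.sum(axis=1, keepdims=True))
W, b, beta = rvfl_fit(X, Y)
pred = rvfl_predict(X, W, b, beta)
```

Note that the single `np.linalg.solve` over a `(n_hidden + n_features)`-sized Gram matrix is exactly why the method is fast for moderate data, and why memory and compute become the bottleneck the project studies when the sample count grows very large.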