On the origins of randomization-based feedforward neural networks

This letter identifies original independent works in the domain of randomization-based feedforward neural networks. In the most common approach, only the output layer weights require training, while the hidden layer weights and biases are randomly assigned and kept fixed. The output layer weights are obtained using either iterative techniques or non-iterative closed-form solutions. The first such work (abbreviated as RWNN) was published in 1992 by Schmidt et al. for a single hidden layer neural network with sigmoidal activation. In 1994, a closed-form solution was offered for the random vector functional link (RVFL) neural network, which has direct links from the input to the output. For radial basis function neural networks, randomized selection of the basis functions' centers was used as early as 1988. Several works were published thereafter that employed similar techniques under different names while failing to cite the original or relevant sources. In this letter, we attempt to identify and trace the origins of such randomization-based feedforward neural networks, giving credit to the original works where due, in the hope that future research publications in this field will provide fair literature reviews and appropriate experimental comparisons. We also briefly review the limited performance comparisons in the literature, two recently proposed new names, and randomization-based multi-layer (deep) neural networks, and suggest promising future directions.
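The training scheme the abstract describes can be illustrated with a minimal sketch: hidden weights and biases are drawn at random and kept fixed, and only the output weights are solved in closed form by regularized least squares, with optional direct input-to-output links in the RVFL style. All function names, the ridge parameter, and the toy data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_random_network(X, y, n_hidden=50, direct_links=True, ridge=1e-6):
    """Random fixed hidden layer; output weights via a closed-form solve."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random hidden weights, never trained
    b = rng.standard_normal(n_hidden)                # random hidden biases, never trained
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoidal hidden activations
    if direct_links:                                 # RVFL-style direct input-output links
        H = np.hstack([H, X])
    # Closed-form ridge solution: beta = (H^T H + ridge * I)^{-1} H^T y
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(H.shape[1]), H.T @ y)
    return W, b, beta

def predict(X, W, b, beta, direct_links=True):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    if direct_links:
        H = np.hstack([H, X])
    return H @ beta

# Toy regression target to exercise the sketch.
X = rng.standard_normal((200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
W, b, beta = fit_random_network(X, y)
mse = np.mean((predict(X, W, b, beta) - y) ** 2)
```

Because the only trained parameters enter linearly, a single linear solve replaces iterative backpropagation, which is the defining property of this family of networks.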


Bibliographic Details
Main Authors: Suganthan, Ponnuthurai Nagaratnam; Katuwal, Rakesh
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2022
Subjects: Engineering::Electrical and electronic engineering; Non-Iterative Training; Closed-Form Solution
Online Access:https://hdl.handle.net/10356/160252
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-160252
Citation: Suganthan, P. N. & Katuwal, R. (2021). On the origins of randomization-based feedforward neural networks. Applied Soft Computing, 105, 107239. https://dx.doi.org/10.1016/j.asoc.2021.107239
ISSN: 1568-4946
DOI: 10.1016/j.asoc.2021.107239
Rights: © 2021 Elsevier B.V. All rights reserved.
Content Provider: NTU Library
Collection: DR-NTU