On the origins of randomization-based feedforward neural networks

Bibliographic Details
Main Authors: Suganthan, Ponnuthurai Nagaratnam, Katuwal, Rakesh
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2022
Online Access:https://hdl.handle.net/10356/160252
Institution: Nanyang Technological University
Description
Summary: This letter identifies original independent works in the domain of randomization-based feedforward neural networks. In the most common approach, only the output layer weights require training, while the hidden layer weights and biases are randomly assigned and kept fixed. The output layer weights are obtained using either iterative techniques or non-iterative closed-form solutions. The first such work (abbreviated as RWNN) was published in 1992 by Schmidt et al. for a single-hidden-layer neural network with sigmoidal activation. In 1994, a closed-form solution was offered for random vector functional link (RVFL) neural networks, which have direct links from the input to the output. For radial basis function neural networks, randomized selection of the basis functions' centers had already been used in 1988. Several works were published thereafter, employing similar techniques under different names while failing to cite the original or relevant sources. In this letter, we attempt to identify and trace the origins of such randomization-based feedforward neural networks, give credit to the original works where it is due, and hope that future research publications in this field will provide fair literature reviews and appropriate experimental comparisons. We also briefly review the limited performance comparisons in the literature, two recently proposed new names, and randomization-based multi-layer or deep neural networks, and point out promising future directions.
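
To make the approach described in the summary concrete, the following is a minimal sketch (not the authors' code) of an RVFL-style network in Python/NumPy: the hidden-layer weights W and biases b are drawn randomly and kept fixed, direct input-to-output links are appended to the hidden features, and the output weights are computed by a non-iterative closed-form least-squares solution. The toy sin(x) regression target, the hidden-layer size, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (hypothetical): learn y = sin(x) from noisy samples.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.05 * rng.normal(size=200)

# Randomly assign hidden-layer weights and biases; they stay fixed (never trained).
n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))
b = rng.normal(size=n_hidden)

# Sigmoidal activation, as in the single-hidden-layer RWNN of Schmidt et al. (1992).
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))

# RVFL variant (1994): direct links from the input are concatenated
# with the hidden-layer features.
D = np.hstack([H, X])

# Non-iterative closed-form solution for the output weights:
# least squares via the Moore-Penrose pseudo-inverse.
beta = np.linalg.pinv(D) @ y

# Prediction on new inputs reuses the same fixed random weights.
X_test = np.linspace(-3, 3, 100).reshape(-1, 1)
H_test = 1.0 / (1.0 + np.exp(-(X_test @ W + b)))
y_pred = np.hstack([H_test, X_test]) @ beta

rmse = np.sqrt(np.mean((y_pred - np.sin(X_test).ravel()) ** 2))
print(f"test RMSE vs. sin(x): {rmse:.4f}")
```

Dropping the direct links (using H alone instead of D) gives the plain RWNN setup; in practice a ridge-regularized solve is often substituted for the pseudo-inverse for numerical stability, but the pseudo-inverse keeps the sketch closest to the closed-form solution the letter describes.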