Extreme learning machine with affine transformation inputs in an activation function
The extreme learning machine (ELM) has attracted much attention over the past decade due to its fast learning speed and convincing generalization performance. However, a practical issue remains when applying the ELM: the randomly generated hidden node parameters, left untuned, can lead to nonuniformly distributed hidden node outputs and hence poor generalization performance. To address this deficiency, a novel activation function with an affine transformation (AT) on its input is introduced into the ELM, yielding an improved algorithm referred to as the AT-ELM. The scaling and translation parameters of the AT activation function are computed from the maximum entropy principle so that the hidden layer outputs approximately obey a uniform distribution. Applied to nonlinear function regression, the AT-ELM is robust to range scaling of the network inputs. Experiments on nonlinear function regression, real-world data set classification, and benchmark image recognition demonstrate better performance for the AT-ELM compared with the original ELM, the regularized ELM, and the kernel ELM; on benchmark image data sets the AT-ELM also outperforms several other state-of-the-art algorithms in general.
Main Authors: Cao, Jiuwen; Zhang, Kai; Yong, Hongwei; Lai, Xiaoping; Chen, Badong; Lin, Zhiping
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2020
Subjects: Engineering::Electrical and electronic engineering; Engineering::Computer science and engineering; Extreme Learning Machine; Affine Transformation; Activation Function
Online Access: https://hdl.handle.net/10356/136684
Institution: Nanyang Technological University
Citation: Cao, J., Zhang, K., Yong, H., Lai, X., Chen, B., & Lin, Z. (2019). Extreme learning machine with affine transformation inputs in an activation function. IEEE Transactions on Neural Networks and Learning Systems, 30(7), 2093-2107. doi:10.1109/TNNLS.2018.2877468
ISSN: 2162-237X
DOI: 10.1109/TNNLS.2018.2877468
Version: Accepted version
Rights: © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/TNNLS.2018.2877468
Building: NTU Library
Country: Singapore
Collection: DR-NTU