Incremental extreme learning machine

Bibliographic Details
Main author: Chen, Lei
Other authors: Huang Guangbin
Format: Theses and Dissertations
Published: 2008
Subjects:
Online access: https://hdl.handle.net/10356/3804
Institution: Nanyang Technological University
Physical Description
Summary: This new theory shows that, for SLFNs to work as universal approximators, one may simply choose the input-to-hidden nodes at random; only the output weights linking the hidden layer and the output layer then need to be adjusted. In such SLFN implementations, the activation functions for additive nodes can be any bounded nonconstant piecewise continuous functions, and the activation functions for RBF nodes can be any integrable piecewise continuous functions. We propose two incremental algorithms: 1) the incremental extreme learning machine (I-ELM) and 2) the convex I-ELM (CI-ELM).
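
As a rough illustration of the incremental scheme described above, the following is a minimal Python sketch of I-ELM for scalar regression with additive sigmoid hidden nodes. The function names (i_elm, predict), the uniform sampling range for the random parameters, and the fixed node budget are illustrative assumptions, not details taken from the thesis.

    import numpy as np

    def i_elm(X, y, max_nodes=50, seed=None):
        # Assumed sketch of I-ELM: grow the network one random hidden
        # node at a time. Each node's input weights and bias are drawn
        # at random and never adjusted; only its output weight beta is
        # fit analytically against the current residual error.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        e = y.astype(float)                      # residual error so far
        nodes = []                               # (w, b, beta) per node
        for _ in range(max_nodes):
            w = rng.uniform(-1.0, 1.0, size=d)   # random input weights
            b = rng.uniform(-1.0, 1.0)           # random bias
            h = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # node output on data
            beta = (e @ h) / (h @ h)             # least-squares output weight
            e = e - beta * h                     # shrink the residual
            nodes.append((w, b, beta))
        return nodes

    def predict(nodes, X):
        # Sum the contributions of all hidden nodes added so far.
        out = np.zeros(X.shape[0])
        for w, b, beta in nodes:
            out += beta / (1.0 + np.exp(-(X @ w + b)))
        return out

CI-ELM differs mainly in how existing output weights are handled: when a new node is added, the network output is formed as a convex combination of the previous output and the new node's contribution, so earlier output weights are rescaled rather than frozen; that refinement is omitted from this sketch.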