Optimization based extreme learning machine: applications and data-driven extensions

Bibliographic Details
Main Author: Zong, Weiwei
Other Authors: Huang Guangbin
Format: Theses and Dissertations
Language: English
Published: 2013
Subjects:
Online Access:https://hdl.handle.net/10356/54858
Institution: Nanyang Technological University
Description
Summary: The artificial neural network, commonly referred to simply as the "neural network", is a successful example of how human nature has inspired technology. However, traditional learning algorithms for neural networks require iterative parameter tuning and often suffer from problems such as local minima and slow convergence. Extreme learning machine (ELM) overcomes these problems. Originally proposed as a learning algorithm for single-hidden-layer feedforward neural networks (SLFNs), ELM was later extended to "generalized" SLFNs, in which the hidden nodes may take a wide variety of forms and are not limited to neuron-like nodes. The defining feature of ELM is its randomly generated hidden nodes, and the universal approximation theorem of ELM guarantees good performance as long as the hidden-layer mapping is a bounded piecewise continuous function. Researchers have sought further ways to improve ELM's generalization performance, and standard optimization methods were therefore brought into its realization. Not only was better classification performance achieved, but it was also revealed that ELM and the support vector machine (SVM) are consistent from the optimization point of view. The resulting optimization-based ELM classifier performs comparably to SVM, and its implementation is much easier because its performance is insensitive to the user-specified parameters. ELM was subsequently analyzed further from the optimization point of view, and the solution of its kernel version was derived. A unified ELM framework has thus been formed that encompasses traditional neural networks, support vector networks, and regularization networks. Since ELM theory has been developed only in recent years, there remain many areas to which it can be applied. This thesis presents work in which ELM is successfully applied to real-world problems such as face recognition and relevance ranking in information retrieval. Real-world data come with different characteristics: for example, the data may not all be available at once, or may be of very large scale. In such cases, an online sequential learning model is generally regarded as the typical solution. This thesis therefore provides an online sequential model built on the ELM framework, so that all the advantages of ELM over other machine learning techniques are retained while the aforementioned problems are solved. Another common situation is that the training data are not well balanced; any learning technique that assumes a balanced data distribution will tend to produce biased performance. For this case, a weighted version of ELM is proposed as a straightforward and efficient way to tackle the problem.
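
To make the summary concrete, the following is a minimal NumPy sketch of a basic, optimization-based ELM. It is an illustration rather than code from the thesis: the function names (elm_train, elm_predict), the sigmoid hidden nodes, and the regularization parameter C are assumptions chosen for the example. The only trained quantities are the output weights, obtained in closed form as beta = (I/C + HᵀH)⁻¹HᵀT, the regularized least-squares solution alluded to above.

```python
import numpy as np

def elm_train(X, T, n_hidden=100, C=1.0, seed=0):
    """Train a single-hidden-layer feedforward network the ELM way.

    X: (n_samples, n_features) inputs; T: (n_samples, n_outputs) targets
    (one-hot encoded for classification). The hidden-node parameters are
    generated randomly and never tuned, which is the defining ELM step.
    """
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))  # random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                   # hidden-layer output matrix
    # Regularized least-squares (optimization-based) solution for the output weights:
    # beta = (I/C + H^T H)^{-1} H^T T
    beta = np.linalg.solve(np.eye(n_hidden) / C + H.T @ H, H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta          # take argmax over columns to obtain class labels
```

Because only beta is solved for, training amounts to a single linear solve, which is why the abstract describes ELM as easy to implement and insensitive to its parameters.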
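
For the online sequential extension mentioned above, the usual approach in OS-ELM-style algorithms is a recursive least-squares update of the output weights as data arrive chunk by chunk, so earlier data never need to be stored or revisited. The sketch below assumes the hidden-layer matrix H of each chunk is computed as in the previous example; the function names and the regularized initialization are illustrative assumptions, not necessarily the thesis's exact formulation.

```python
import numpy as np

def os_elm_init(H0, T0, C=1.0):
    """Initialize from a first data chunk: P = (I/C + H0^T H0)^{-1}, beta = P H0^T T0."""
    P = np.linalg.inv(np.eye(H0.shape[1]) / C + H0.T @ H0)
    beta = P @ H0.T @ T0
    return P, beta

def os_elm_update(P, beta, H, T):
    """Fold in a new chunk (H, T) with a recursive least-squares step.

    Only the small matrices P (L x L) and beta (L x m) are carried forward;
    the previously seen training data are never retrained on.
    """
    K = np.linalg.inv(np.eye(H.shape[0]) + H @ P @ H.T)
    P = P - P @ H.T @ K @ H @ P
    beta = beta + P @ H.T @ (T - H @ beta)
    return P, beta
```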
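
For the weighted ELM aimed at imbalanced data, one common weighting choice (assumed here for illustration) is to weight each training sample inversely to the size of its class, which changes the output-weight solution to beta = (I/C + HᵀWH)⁻¹HᵀWT with W diagonal. Function and variable names are again hypothetical.

```python
import numpy as np

def weighted_elm_solve(H, T, y, C=1.0):
    """Solve for ELM output weights with per-sample weights for class imbalance.

    H: hidden-layer output matrix, T: one-hot targets, y: class labels.
    Each sample of class k receives weight 1/n_k, so minority classes count
    as much as majority ones in the regularized least-squares objective.
    """
    classes, counts = np.unique(y, return_counts=True)
    class_weight = {c: 1.0 / n for c, n in zip(classes, counts)}
    w = np.array([class_weight[label] for label in y])   # diagonal of W
    HtW = H.T * w                                         # H^T @ diag(w) without forming W
    beta = np.linalg.solve(np.eye(H.shape[1]) / C + HtW @ H, HtW @ T)
    return beta
```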