Human learning principles inspired particle swarm optimization algorithms
Format: Theses and Dissertations
Language: English
Published: 2017
Online Access: http://hdl.handle.net/10356/72198
Institution: Nanyang Technological University
Summary: Global optimization problems, especially those arising in engineering systems, have become extremely complex. For such problems, nature-inspired, search-based algorithms often provide better solutions than classical optimization methods. Among them, the Particle Swarm Optimization (PSO) algorithm is widely preferred for its simplicity and its ability to provide good solutions. The PSO algorithm simulates the social behaviour of a bird swarm searching for food, with the birds modelled as particles. The limitations associated with PSO have been extensively studied, and different modifications, variations and refinements to PSO have been proposed in the literature for enhancing its performance. The idea of utilizing intelligent swarms motivated the exploration of human cognitive learning principles for PSO. As discussed in learning psychology, human beings are known to be intelligent and to have good social cognizance; therefore, an optimization technique employing human-like learning strategies should prove more effective. This thesis addresses the use of strategies inspired by human learning principles in the PSO algorithm. The major contributions of the thesis are:
• Self-Regulating Particle Swarm Optimization (SRPSO) algorithm.
• Dynamic Mentoring and Self-Regulation based Particle Swarm Optimization (DMeSR-PSO) algorithm.
• Directionally Driven Self-Regulating Particle Swarm Optimization (DD-SRPSO) algorithm.
• Incorporation of a constraint handling mechanism into the structure of the DD-SRPSO algorithm.
The Self-Regulating Particle Swarm Optimization (SRPSO) algorithm is inspired by human self-learning principles. SRPSO utilizes self-regulation and self-perception based learning strategies to achieve enhanced exploration and better exploitation. The self-regulated inertia weight is employed only by the best particle, whereas all other particles search using self-perception of the global best search direction; this perception is dynamically changed in every iteration for intelligent exploitation. The effect of these human learning strategies on the particles has been studied using the CEC2005 benchmark problems, and the performance has been compared with state-of-the-art PSO variants. The results clearly indicate that SRPSO converges faster and closer to the global optimum, with a 95% confidence level.
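To make the update scheme above concrete, the following Python sketch shows one plausible form of the SRPSO velocity update. It is a minimal sketch, not the thesis's exact formulation: the acceleration coefficients, the perception threshold lam, and the per-dimension binary perception are assumptions introduced for illustration.

```python
import numpy as np

def srpso_velocity(v, x, pbest, gbest, w, is_best,
                   c1=1.49445, c2=1.49445, lam=0.5):
    """One SRPSO-style velocity update (illustrative sketch only).

    is_best: True for the current best particle, which relies on its
             self-regulated inertia weight alone; the other particles
             add cognitive and (perceived) social components.
    lam:     assumed perception threshold controlling how often each
             dimension of the global-best direction is followed.
    """
    dim = x.shape[0]
    if is_best:
        # Best particle: self-regulated momentum only, no social pull.
        return w * v
    # Self-perception of the global-best direction, redrawn every
    # iteration so the perception changes dynamically.
    perception = (np.random.rand(dim) > lam).astype(float)
    r1, r2 = np.random.rand(dim), np.random.rand(dim)
    return (w * v
            + c1 * r1 * (pbest - x)
            + c2 * r2 * perception * (gbest - x))
```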
Further, human beings utilize multiple information-processing strategies during learning and collaborate with one another for better decision making, so integrating socially shared information processing should further enhance performance. Therefore, a new algorithm, referred to as the Dynamic Mentoring and Self-Regulation based Particle Swarm Optimization (DMeSR-PSO) algorithm, has been proposed, incorporating the concept of mentoring together with self-regulation. Here, the particles are divided into three groups: mentors, mentees and independent learners. The elite particles are grouped as mentors to guide the poorly performing particles of the mentee group, while the independent learners search using the self-perception based learning strategy of the SRPSO algorithm. Tested on both the unimodal and multimodal CEC2005 benchmark problems, DMeSR-PSO has shown improved convergence over the SRPSO algorithm. Further, the robustness of the algorithm has been tested on the CEC2013 problems and on eight real-world optimization problems from CEC2011. The results indicate that DMeSR-PSO is significantly better than other PSO variants and other population-based optimization algorithms, with a 95% confidence level, yielding an effective optimization algorithm for real-world applications.
Both SRPSO and DMeSR-PSO are rotationally variant algorithms, and their performance on rotated problems has therefore not been significant. To overcome this, a directionally updated and rotationally invariant SRPSO algorithm has also been developed, named the Directionally Driven SRPSO (DD-SRPSO) algorithm. Here, the poorly performing particles are equipped with complete social-perception guidance, while the other particles are randomly selected to search either using the self-perception based learning strategy of SRPSO or by applying a rotation-invariant strategy. The performance of DD-SRPSO on the rotated problems from CEC2013 shows that it is significantly better than SRPSO, and comparison with other algorithms on the CEC2013 benchmark problems clearly indicates that DD-SRPSO is significantly better than the selected algorithms on a wide range of problems.
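A per-particle strategy selection in the spirit of this description could look like the sketch below. The worst-fraction cut-off and the uniform coin flip between the two remaining strategies are assumptions, not the thesis's exact selection rule.

```python
import numpy as np

def pick_strategy(rank, n, worst_frac=0.2, rng=None):
    """Choose an update strategy for one particle (illustrative).

    rank: 0 for the best particle, n - 1 for the worst, by fitness.
    """
    rng = rng or np.random.default_rng()
    if rank >= n - int(worst_frac * n):
        # Poorly performing particles: complete social-perception guidance.
        return "directional_update"
    # Remaining particles randomly use SRPSO's self-perception update
    # or a rotation-invariant update.
    return rng.choice(["srpso_self_perception", "rotation_invariant"])
```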
Further, a new constraint handling mechanism has been incorporated into the DD-SRPSO structure, and the resulting algorithm is referred to as DD-SRPSO with a constraint handling mechanism (DD-SRPSO-CHM). Next, the application of DD-SRPSO-CHM to optimizing a multi-stage launch vehicle configuration has been studied. In the multi-stage launch vehicle configuration problem, the multiple objectives are converted into a single objective with constraints, which are efficiently handled by DD-SRPSO-CHM. Comparative analysis on this problem suggests that DD-SRPSO-CHM converges faster towards the solution.
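The abstract does not detail the constraint handling mechanism itself. One common scheme that the single-objective-with-constraints formulation could be paired with is a static penalty, sketched below under that assumption; the penalty weight and the g(x) <= 0 feasibility convention are illustrative.

```python
def penalized_objective(objective, constraints, x, penalty=1e6):
    """Fold constraint violations into a single objective via a static
    penalty (a generic scheme, not necessarily the thesis's CHM).

    constraints: iterable of functions g with g(x) <= 0 when feasible.
    """
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) + penalty * violation
```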
By incorporating human-like behaviour into the PSO algorithm, the developed variants have shown faster convergence closer to the optima over a diverse set of problems, indicating that these algorithms are potential choices for complex real-world applications. In the future, the algorithms will be extended for solving multi-objective optimization problems. The equality constraint handling mechanism already implemented in the DD-SRPSO algorithm can be further extended to inequality constraints. Furthermore, more human learning strategies can be explored for performance enhancement.