Implementation of Symmetric Rank-One Methods for Unconstrained Optimization
Main Author: | |
---|---|
Format: | Thesis |
Language: | English |
Published: | 2010 |
Online Access: | http://psasir.upm.edu.my/id/eprint/19590/1/FS_2010_41_F.pdf http://psasir.upm.edu.my/id/eprint/19590/ |
Institution: | Universiti Putra Malaysia |
Summary: | The focus of this thesis is on analyzing the theoretical and computational aspects of some quasi-Newton (QN) methods for locating a minimum of a real-valued function f over all vectors x ∈ ℝⁿ. In many practical applications, the Hessian of the objective function may be too expensive to calculate or may even be unavailable in explicit form. QN methods endeavor to circumvent the deficiencies of Newton's method (while retaining its basic structure and thus preserving, as far as possible, its advantages) by constructing approximations to the Hessian iteratively. Among QN updates, the symmetric rank-one (SR1) update has been shown to be an effective and reliable method. However, SR1 can be awkward in practice, even though its performance is in general better than that of other well-known QN updates: the SR1 update may not retain positive definiteness and may become undefined because its denominator becomes zero. In recent years considerable attention has been directed towards preserving and ensuring the positive definiteness of the SR1 update, but improving the quality of the estimates has rarely been studied in depth.

Our purpose in this thesis is to improve the Hessian approximation updates and to study the computational performance and convergence properties of these updates.

First, we briefly give some mathematical background, together with a review of different minimization methods that can be used to solve unconstrained optimization problems. We then consider a modification of the secant equation for the SR1 update. In this method, the Hessian approximation is updated based on a modified secant equation, which uses both gradient and function-value information in order to achieve higher-order accuracy in approximating the curvature of the objective function. We then examine a new scaled memoryless SR1 method, based on the modified secant equation, for solving large-scale unconstrained optimization problems. We prove that the new method possesses global convergence; the rate of convergence of such algorithms is also discussed.

Due to the deficiencies of SR1, we introduce a restarting procedure using the eigenvalues of the SR1 update. We also introduce a variety of techniques to improve the Hessian approximations of the SR1 method for small- to large-sized problems, including multi-step and extra-updating methods, along with a structured method which uses partial information on the Hessian.

Variants of the SR1 update are tested numerically and compared to several other well-known minimization methods. Finally, we comment on the achievements of our research, and possible extensions are given to conclude this thesis. |
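To make the abstract's two central objects concrete, here is a minimal sketch of the classical SR1 update with the standard skip rule that guards against the zero denominator the abstract mentions, plus one modified secant vector of the kind described (using gradient and function-value information). The function names, the tolerance `r`, and the specific modified-secant formula (a common variant from the literature, taking u = s) are illustrative assumptions; the exact safeguards and formulas used in the thesis may differ.

```python
import numpy as np

def sr1_update(B, s, y, r=1e-8):
    """One classical SR1 update of a Hessian approximation B.

    s = x_{k+1} - x_k (step), y = g_{k+1} - g_k (gradient difference).
    The update is skipped when the denominator (y - B s)^T s is tiny
    relative to ||s|| * ||y - B s||: the standard safeguard against the
    undefined case in which the denominator becomes (near) zero.
    """
    v = y - B @ s
    denom = v @ s
    if abs(denom) <= r * np.linalg.norm(s) * np.linalg.norm(v):
        return B  # skip the update rather than divide by (near) zero
    return B + np.outer(v, v) / denom

def modified_y(s, y, f_k, f_k1, g_k, g_k1):
    """A modified secant vector ybar = y + (theta / s^T s) s with
    theta = 6 (f_k - f_{k+1}) + 3 (g_k + g_{k+1})^T s, so that the
    secant condition B_{k+1} s = ybar uses function values as well as
    gradients. (Illustrative variant with u = s; the thesis's exact
    modified secant equation may differ.)
    """
    theta = 6.0 * (f_k - f_k1) + 3.0 * (g_k + g_k1) @ s
    return y + (theta / (s @ s)) * s
```

Two properties make this easy to sanity-check on a quadratic f(x) = ½ xᵀAx: n SR1 updates along linearly independent directions recover A exactly, and theta vanishes, so the modified secant vector reduces to the ordinary y.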