Optimization Methods In Training Neural Networks
There are several extremizing techniques for solving linear and nonlinear algebraic problems. Newton's method has a property called quadratic termination, which means it minimizes a quadratic function in a finite number of iterations. However, this method requires...
Saved in:
Main Author: | Sathasivam, Saratha |
Format: | Thesis |
Language: | English |
Published: | 2003 |
Subjects: | QA1 Mathematics (General) |
Online Access: | http://eprints.usm.my/31158/1/SARATHA_SATHASIVAM.pdf http://eprints.usm.my/31158/ |
Institution: | Universiti Sains Malaysia |
Language: | English |
id | my.usm.eprints.31158 |
record_format | eprints |
spelling |
my.usm.eprints.31158 http://eprints.usm.my/31158/ Optimization Methods In Training Neural Networks Sathasivam, Saratha QA1 Mathematics (General) There are a number of extremizing techniques to solve linear and nonlinear algebraic problems. Newton's method has a property called quadratic termination, which means that it minimizes a quadratic function exactly in a finite number of iterations. Unfortunately, it requires calculation and storage of the second derivatives of the quadratic function involved. When the number of parameters, n, is large, it may be impractical to compute all the second derivatives. This is especially true for neural networks, where practical applications can require several hundred to many thousands of weights. For these particular cases, methods that require only first derivatives but still have quadratic termination are preferred. 2003-07 Thesis NonPeerReviewed application/pdf en http://eprints.usm.my/31158/1/SARATHA_SATHASIVAM.pdf Sathasivam, Saratha (2003) Optimization Methods In Training Neural Networks. Masters thesis, Universiti Sains Malaysia. |
institution | Universiti Sains Malaysia |
building | Hamzah Sendut Library |
collection | Institutional Repository |
continent | Asia |
country | Malaysia |
content_provider | Universiti Sains Malaysia |
content_source | USM Institutional Repository |
url_provider | http://eprints.usm.my/ |
language | English |
topic | QA1 Mathematics (General) |
spellingShingle | QA1 Mathematics (General) Sathasivam, Saratha Optimization Methods In Training Neural Networks |
description | There are a number of extremizing techniques to solve linear and nonlinear algebraic problems. Newton's method has a property called quadratic termination, which means that it minimizes a quadratic function exactly in a finite number of iterations. Unfortunately, it requires calculation and storage of the second derivatives of the quadratic function involved. When the number of parameters, n, is large, it may be impractical to compute all the second derivatives. This is especially true for neural networks, where practical applications can require several hundred to many thousands of weights. For these particular cases, methods that require only first derivatives but still have quadratic termination are preferred. |
format | Thesis |
author | Sathasivam, Saratha |
author_facet | Sathasivam, Saratha |
author_sort | Sathasivam, Saratha |
title | Optimization Methods In Training Neural Networks |
title_short | Optimization Methods In Training Neural Networks |
title_full | Optimization Methods In Training Neural Networks |
title_fullStr | Optimization Methods In Training Neural Networks |
title_full_unstemmed | Optimization Methods In Training Neural Networks |
title_sort | optimization methods in training neural networks |
publishDate | 2003 |
url | http://eprints.usm.my/31158/1/SARATHA_SATHASIVAM.pdf http://eprints.usm.my/31158/ |
_version_ | 1643707316460060672 |
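The abstract's central claim, that methods using only first derivatives can retain Newton's quadratic-termination property, can be illustrated with a minimal sketch. This is an assumed example, not code from the thesis: on a quadratic objective, linear conjugate gradient reaches the minimizer in at most n iterations using gradient information alone, matching the single Newton step that needs the full Hessian.

```python
import numpy as np

# On f(x) = 0.5 x^T A x - b^T x with A symmetric positive definite:
# Newton's method needs all n^2 second derivatives (the Hessian A),
# while conjugate gradient works from the gradient A x - b alone and
# still terminates in at most n iterations ("quadratic termination").

def newton_step(A, b, x):
    # One Newton step: x - H^{-1} grad, with H = A and grad = A x - b.
    # Exact minimizer in a single step, but requires forming and
    # solving with the n-by-n Hessian.
    return x - np.linalg.solve(A, A @ x - b)

def conjugate_gradient(A, b, x, n_iter):
    # Linear CG: only first-derivative information is used
    # (the residual r = b - A x is the negative gradient).
    r = b - A @ x
    p = r.copy()
    for _ in range(n_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < 1e-12:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)             # symmetric positive definite Hessian
b = rng.standard_normal(n)
x0 = np.zeros(n)

x_newton = newton_step(A, b, x0)        # one step, all second derivatives
x_cg = conjugate_gradient(A, b, x0, n)  # at most n steps, gradients only

print(np.allclose(x_newton, x_cg))      # → True
```

For neural networks the objective is only locally quadratic, so in practice nonlinear variants (e.g. Fletcher-Reeves conjugate gradient) apply the same idea with restarts; the quadratic case above is where the finite-termination guarantee holds exactly.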