Classification for breast cancer diagnosis using AdaBoost
Main Authors:
Format: Book Section
Published: Penerbit UTM, 2007
Subjects:
Online Access: http://eprints.utm.my/id/eprint/13414/
Institution: Universiti Teknologi Malaysia
Summary: Boosting is a general method that can be applied to any learning algorithm to improve its performance. Throughout the evolution of boosting-based algorithms, the term "weak learner" has always been mentioned; literally, it refers to a weak learning algorithm that performs only slightly better than random guessing. Schapire, R.E. (1990) showed that these so-called weak learners can be efficiently combined, or "boosted", to build a strong, accurate classifier. A boosting algorithm applies the weak learning algorithm multiple times to the instance space under different distributions, and finally constructs a strong hypothesis from the numerous weak hypotheses. Freund, Y. et al. (1997) first introduced theoretically the adaptive boosting (AdaBoost) method, which significantly reduces the error of any learning algorithm that consistently generates classifiers better than random guessing. In AdaBoost, the distribution over the instance space of the training set is adjusted adaptively according to the errors of the weak hypotheses. This moves the weak learner towards the "harder" parts of the classification space more efficiently.
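As an illustration of the adaptive re-weighting the summary describes, below is a minimal sketch of discrete AdaBoost using depth-1 decision trees (stumps) as the weak learner, run on scikit-learn's built-in Wisconsin breast cancer dataset. The function names, round count, and choice of stump are illustrative assumptions for this sketch, not the implementation used in the book section.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import load_breast_cancer

def adaboost_fit(X, y, n_rounds=50):
    """Binary AdaBoost; expects labels y in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                    # start from the uniform distribution
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)       # weak learner trained on the current distribution
        pred = stump.predict(X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # vote weight of this weak hypothesis
        w *= np.exp(-alpha * y * pred)         # up-weight the misclassified ("harder") examples
        w /= w.sum()                           # renormalize back to a distribution
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(learners, alphas, X):
    """Strong hypothesis: the sign of the weighted vote of the weak hypotheses."""
    votes = sum(a * h.predict(X) for a, h in zip(alphas, learners))
    return np.sign(votes)

# Usage on the Wisconsin breast cancer dataset bundled with scikit-learn
X, y = load_breast_cancer(return_X_y=True)
y = 2 * y - 1                                  # map {0, 1} labels to {-1, +1}
learners, alphas = adaboost_fit(X, y)
print((adaboost_predict(learners, alphas, X) == y).mean())  # training accuracy
```

Note how the only adaptive step is the exponential re-weighting: examples a weak hypothesis gets wrong gain weight, so the next stump is pulled towards the harder region of the instance space.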