Comparison of the Model Selection Criteria for Multiple Regression Based on Kullback-Leibler’s Information

This paper presents derivations that unify the justifications of the model selection criteria based on Kullback’s divergence: AIC, AICc, KIC, KICcC, KICcSB, and KICcHM. The results show that KICcC has the strongest penalty function under some conditions, followed, in order, by KICcSB, KICcHM, KIC, and AIC. Also, KIC exceeds AICc under some conditions, while AICc is always greater than AIC. The performances of all model selection criteria are examined in an extensive simulation study. It can be concluded that model selection with a larger penalty term may lead to underfitting and slow convergence, while a smaller penalty term may lead to overfitting and inconsistency. When the sample size is small to moderate and the true model is somewhat difficult to identify, AIC and AICc perform better than the others, although they still identify the true model with low accuracy. When the sample size is large, the performances of all model selection criteria differ insignificantly, but all criteria still identify the true model with low accuracy. Consequently, observed efficiency was used to assess the performance of the model selection criteria. On average, this measure suggests that when the true model is weakly identifiable, whether the sample size is small or large, KICcC is the best criterion. When the sample size is small and the true model can be specified more easily because the error variance is small, every model selection criterion retains the ability to select the correct model; if the error variance increases, all model selection criteria perform poorly. When the sample size is moderate to large, KICc performs best, identifying the true model frequently when the error variance is small; however, if the error variance increases and the sample size is not large enough, all model selection criteria identify the true model only rarely.
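To make the penalty-term comparison concrete, the sketch below (not taken from the paper) computes AIC, AICc, and KIC for an ordinary least-squares fit using the standard Gaussian-regression forms attributed to Akaike, Sugiura/Hurvich-Tsai, and Cavanaugh. The function name, the parameter-counting convention for k, and the dropped constants are assumptions of this sketch; the paper's small-sample KICc variants (KICcC, KICcSB, KICcHM) are omitted because their exact corrections are not reproduced in this record.

# Minimal sketch: compare AIC, AICc, and KIC penalties for one OLS fit.
# Standard Gaussian-regression forms are assumed; the paper's own
# conventions may differ by constants or by how parameters are counted.
import numpy as np

def criteria(y, X):
    """Return (AIC, AICc, KIC) for an OLS fit of y on the columns of X.

    Convention assumed here: k counts the regression coefficients plus
    the error variance; constants common to all candidate models are dropped.
    """
    n, p = X.shape
    beta, resid, rank, _ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(resid[0]) if resid.size else float(np.sum((y - X @ beta) ** 2))
    k = p + 1                          # coefficients + error variance
    goodness = n * np.log(rss / n)     # lack-of-fit term shared by all criteria
    aic = goodness + 2 * k                           # Akaike (1973)
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)       # Sugiura/Hurvich-Tsai correction
    kic = goodness + 3 * k                           # Cavanaugh (1999)
    return aic, aicc, kic

# Toy usage (hypothetical data): a true model with two zero coefficients.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 3))])
y = X @ np.array([1.0, 2.0, 0.0, 0.0]) + rng.normal(size=50)
print(criteria(y, X))

Under these conventions the abstract's ordering claims can be checked directly: AICc - AIC = 2k(k+1)/(n-k-1) > 0 whenever n > k + 1, so AICc always penalizes more heavily than AIC, and KIC exceeds AICc exactly when 3k > 2k + 2k(k+1)/(n-k-1), i.e. when n > 3k + 3, one instance of the "under some conditions" statement.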


Bibliographic Details
Main Authors: Warangkhana Keerativibool, Pachitjanut Siripanich
Format: Journal Article
Language: English
Published: Science Faculty of Chiang Mai University 2019
Online Access: http://it.science.cmu.ac.th/ejournal/dl.php?journal_id=8044
http://cmuir.cmu.ac.th/jspui/handle/6653943832/63897
id th-cmuir.6653943832-63897
record_format dspace
spelling th-cmuir.6653943832-63897 2019-05-07T09:59:37Z Comparison of the Model Selection Criteria for Multiple Regression Based on Kullback-Leibler’s Information Warangkhana Keerativibool Pachitjanut Siripanich [abstract as given in the description field below] 2019-05-07T09:59:37Z 2019-05-07T09:59:37Z 2017 Journal Article 0125-2526 http://it.science.cmu.ac.th/ejournal/dl.php?journal_id=8044 http://cmuir.cmu.ac.th/jspui/handle/6653943832/63897 Eng Science Faculty of Chiang Mai University
institution Chiang Mai University
building Chiang Mai University Library
country Thailand
collection CMU Intellectual Repository
language English
description This paper presents derivations that unify the justifications of the model selection criteria based on Kullback’s divergence: AIC, AICc, KIC, KICcC, KICcSB, and KICcHM. The results show that KICcC has the strongest penalty function under some conditions, followed, in order, by KICcSB, KICcHM, KIC, and AIC. Also, KIC exceeds AICc under some conditions, while AICc is always greater than AIC. The performances of all model selection criteria are examined in an extensive simulation study. It can be concluded that model selection with a larger penalty term may lead to underfitting and slow convergence, while a smaller penalty term may lead to overfitting and inconsistency. When the sample size is small to moderate and the true model is somewhat difficult to identify, AIC and AICc perform better than the others, although they still identify the true model with low accuracy. When the sample size is large, the performances of all model selection criteria differ insignificantly, but all criteria still identify the true model with low accuracy. Consequently, observed efficiency was used to assess the performance of the model selection criteria. On average, this measure suggests that when the true model is weakly identifiable, whether the sample size is small or large, KICcC is the best criterion. When the sample size is small and the true model can be specified more easily because the error variance is small, every model selection criterion retains the ability to select the correct model; if the error variance increases, all model selection criteria perform poorly. When the sample size is moderate to large, KICc performs best, identifying the true model frequently when the error variance is small; however, if the error variance increases and the sample size is not large enough, all model selection criteria identify the true model only rarely.
format Journal Article
author Warangkhana Keerativibool
Pachitjanut Siripanich
spellingShingle Warangkhana Keerativibool
Pachitjanut Siripanich
Comparison of the Model Selection Criteria for Multiple Regression Based on Kullback-Leibler’s Information
author_facet Warangkhana Keerativibool
Pachitjanut Siripanich
author_sort Warangkhana Keerativibool
title Comparison of the Model Selection Criteria for Multiple Regression Based on Kullback-Leibler’s Information
title_short Comparison of the Model Selection Criteria for Multiple Regression Based on Kullback-Leibler’s Information
title_full Comparison of the Model Selection Criteria for Multiple Regression Based on Kullback-Leibler’s Information
title_fullStr Comparison of the Model Selection Criteria for Multiple Regression Based on Kullback-Leibler’s Information
title_full_unstemmed Comparison of the Model Selection Criteria for Multiple Regression Based on Kullback-Leibler’s Information
title_sort comparison of the model selection criteria for multiple regression based on kullback-leibler’s information
publisher Science Faculty of Chiang Mai University
publishDate 2019
url http://it.science.cmu.ac.th/ejournal/dl.php?journal_id=8044
http://cmuir.cmu.ac.th/jspui/handle/6653943832/63897
_version_ 1681425980796698624