Testing intergroup concordance in ranking experiments with two groups of judges

Across many areas of psychology, concordance is commonly used to measure the (intragroup) agreement in ranking a number of items by a group of judges. Sometimes, however, the judges come from multiple groups, and in those situations, the interest is to measure the concordance between groups, under the assumption that there is some within-group concordance.

Overview

Bibliographic Details
Main Authors: DEKLE, Dawn J., LEUNG, Denis H. Y., ZHU, Min
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2008
Subjects:
Online Access: https://ink.library.smu.edu.sg/soe_research/1949
https://ink.library.smu.edu.sg/context/soe_research/article/2948/viewcontent/TestingIntergroupConcordanceJudges_2008.pdf
Physical Description
Summary: Across many areas of psychology, concordance is commonly used to measure the (intragroup) agreement in ranking a number of items by a group of judges. Sometimes, however, the judges come from multiple groups, and in those situations, the interest is to measure the concordance between groups, under the assumption that there is some within-group concordance. In this investigation, existing methods are compared under a variety of scenarios. Permutation theory is used to calculate the error rates and the power of the methods. Missing data situations are also studied. The results indicate that the performance of the methods depends on (a) the number of items to be ranked, (b) the level of within-group agreement, and (c) the level of between-group agreement. Overall, using the actual ranks of the items gives better results than using the pairwise comparison of rankings. Missing data lead to a loss of statistical power, and in some cases, the loss is substantial. The degree of power loss depends on the missingness mechanism and the method of imputing the missing data, among other factors.
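The summary refers to within-group concordance and to permutation-based calculations of error rates and power. As an illustrative sketch only (not the paper's exact procedures), the snippet below computes Kendall's W for one group of judges and runs a simple permutation test of between-group agreement, using the correlation of the two groups' mean ranks as the test statistic; the function names and the choice of statistic are assumptions for illustration.

```python
import numpy as np


def kendalls_w(ranks):
    """Kendall's coefficient of concordance for a (judges x items) rank matrix.

    Each row is one judge's ranking of the n items (a permutation of 1..n).
    W = 1 means perfect within-group agreement; W near 0 means none.
    """
    m, n = ranks.shape
    col_sums = ranks.sum(axis=0)
    s = ((col_sums - col_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))


def intergroup_permutation_test(group_a, group_b, n_perm=1000, seed=0):
    """Illustrative permutation test of between-group concordance.

    Statistic: correlation between the two groups' mean ranks. Judges are
    shuffled across groups to build the permutation null distribution.
    This is a generic sketch, not the specific methods compared in the paper.
    """
    rng = np.random.default_rng(seed)
    all_judges = np.vstack([group_a, group_b])
    m_a = group_a.shape[0]

    def stat(judges):
        mean_a = judges[:m_a].mean(axis=0)
        mean_b = judges[m_a:].mean(axis=0)
        return np.corrcoef(mean_a, mean_b)[0, 1]

    observed = stat(all_judges)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(all_judges.shape[0])
        if stat(all_judges[perm]) >= observed:
            count += 1
    # Add-one correction keeps the p-value strictly positive.
    return observed, (count + 1) / (n_perm + 1)
```

Dropping some entries of the rank matrix and re-ranking or imputing before calling these functions is one way to reproduce, in miniature, the kind of power-loss comparison the summary describes.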