Testing intergroup concordance in ranking experiments with two groups of judges

Across many areas of psychology, concordance is commonly used to measure the (intragroup) agreement in ranking a number of items by a group of judges. Sometimes, however, the judges come from multiple groups, and in those situations, the interest is to measure the concordance between groups, under the assumption that there is some within-group concordance. In this investigation, existing methods are compared under a variety of scenarios. Permutation theory is used to calculate the error rates and the power of the methods. Missing data situations are also studied. The results indicate that the performance of the methods depends on (a) the number of items to be ranked, (b) the level of within-group agreement, and (c) the level of between-group agreement. Overall, using the actual ranks of the items gives better results than using the pairwise comparison of rankings. Missing data lead to loss in statistical power, and in some cases, the loss is substantial. The degree of power loss depends on the missing mechanism and the method of imputing the missing data, among other factors.
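The abstract refers to two quantities that can be made concrete with a small amount of code: Kendall's W as the within-group concordance measure, and a permutation test for between-group agreement. The sketch below is an illustration only, not the authors' procedure; the function names, the Spearman-correlation test statistic, and the item-relabelling permutation scheme are assumptions made for this example.

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's coefficient of concordance W for a (judges x items)
    rank matrix; the tie correction is omitted for brevity."""
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape                       # m judges, n items
    col_sums = ranks.sum(axis=0)             # total rank given to each item
    s = ((col_sums - col_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

def _spearman(x, y):
    # Spearman correlation assuming no ties: Pearson correlation of ranks.
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

def intergroup_permutation_test(ranks_a, ranks_b, n_perm=5000, seed=0):
    """One illustrative permutation test of between-group concordance.

    Statistic: Spearman correlation between the two groups' mean-rank
    vectors.  Null distribution: randomly relabel the items for group B,
    which keeps each group's within-group agreement intact while
    destroying any between-group association.
    """
    rng = np.random.default_rng(seed)
    mean_a = np.asarray(ranks_a, dtype=float).mean(axis=0)
    mean_b = np.asarray(ranks_b, dtype=float).mean(axis=0)
    observed = _spearman(mean_a, mean_b)
    hits = sum(
        _spearman(mean_a, mean_b[rng.permutation(mean_b.size)]) >= observed
        for _ in range(n_perm)
    )
    p_value = (hits + 1) / (n_perm + 1)      # add-one correction
    return observed, p_value

# Toy example: two groups of judges each ranking the same 6 items.
group_a = np.array([[1, 2, 3, 4, 5, 6],
                    [2, 1, 3, 4, 6, 5],
                    [1, 3, 2, 5, 4, 6]])
group_b = np.array([[1, 2, 4, 3, 5, 6],
                    [2, 1, 3, 5, 4, 6]])
print("within-group W, group A:", round(kendalls_w(group_a), 3))
print("within-group W, group B:", round(kendalls_w(group_b), 3))
rho, p = intergroup_permutation_test(group_a, group_b)
print("between-group Spearman rho:", round(rho, 3), " p:", round(p, 3))
```

This sketch uses the items' actual ranks; the pairwise-comparison alternatives mentioned in the abstract would replace the Spearman step with a statistic built from paired rank comparisons.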


Bibliographic Details
Main Authors: DEKLE, Dawn J., LEUNG, Denis H. Y., ZHU, Min
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2008
Subjects: concordance; intergroup; Kendall's W; missing data; ranking experiment; Econometrics; Psychology
Online Access:https://ink.library.smu.edu.sg/soe_research/1949
https://ink.library.smu.edu.sg/context/soe_research/article/2948/viewcontent/TestingIntergroupConcordanceJudges_2008.pdf
Institution: Singapore Management University
DOI: 10.1037/1082-989X.13.1.58
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Economics (InK@SMU)