Coder reliability and misclassification in the human coding of party manifestos

The Comparative Manifesto Project (CMP) provides the only time series of estimated party policy positions in political science and has been extensively used in a wide variety of applications. Recent work (e.g., Benoit, Laver, and Mikhaylov 2009; Klingemann et al. 2006) focuses on nonsystematic sources of error in these estimates that arise from the text generation process. Our concern here, by contrast, is with error that arises during the text coding process, since nearly all manifestos are coded only once by a single coder. First, we discuss reliability and misclassification in the context of hand-coded content analysis methods. Second, we report results of a coding experiment that used trained human coders to code sample manifestos provided by the CMP, allowing us to estimate the reliability of both coders and coding categories. Third, we compare our test codings to the published CMP "gold standard" codings of the test documents to assess accuracy and produce empirical estimates of a misclassification matrix for each coding category. Finally, we demonstrate the effect of coding misclassification on the CMP's most widely used index, its left-right scale. Our findings indicate that misclassification is a serious and systemic problem with the current CMP data set and coding process, suggesting the CMP scheme should be significantly simplified to address reliability issues.
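To make the two quantities in the abstract concrete, below is a minimal Python sketch (not the authors' replication code) of a per-category misclassification matrix and the CMP's left-right (RILE) index. The RILE category sets follow the standard published CMP definition; the two codings of ten quasi-sentences are invented purely for illustration.

from collections import Counter

# Standard RILE category sets (Laver/Budge definition used by the CMP).
RILE_RIGHT = {"104", "201", "203", "305", "401", "402", "407",
              "414", "505", "601", "603", "605", "606"}
RILE_LEFT = {"103", "105", "106", "107", "202", "403", "404",
             "406", "412", "413", "504", "506", "701"}

def rile(codes):
    # Left-right score: percentage of right-coded quasi-sentences
    # minus percentage of left-coded quasi-sentences.
    n = len(codes)
    right = 100 * sum(1 for c in codes if c in RILE_RIGHT) / n
    left = 100 * sum(1 for c in codes if c in RILE_LEFT) / n
    return right - left

def misclassification(gold, test):
    # Counts of (gold category, test category) pairs over the same
    # quasi-sentences; off-diagonal cells are misclassifications.
    assert len(gold) == len(test)
    return Counter(zip(gold, test))

# Hypothetical codings of the same ten quasi-sentences: the published
# "gold standard" coding versus one test coder.
gold = ["504", "504", "701", "401", "605", "202", "414", "000", "504", "605"]
test = ["504", "506", "701", "402", "605", "201", "414", "000", "706", "605"]

print(f"RILE(gold) = {rile(gold):+.1f}")  # -10.0
print(f"RILE(test) = {rile(test):+.1f}")  # +20.0: same text, different coder
for (g, t), n in sorted(misclassification(gold, test).items()):
    if g != t:
        print(f"gold {g} coded as {t}: {n} time(s)")

Note how even confusions between superficially adjacent categories (e.g., 504 welfare expansion vs. 506 education expansion, or 201 vs. 202) shift the left-right score, which is the mechanism the paper traces from coder unreliability to bias in the index.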

Bibliographic Details
Main Authors: MIKHAYLOV, Slava; LAVER, Michael; BENOIT, Kenneth
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2012
DOI: 10.1093/pan/mpr047
License: http://creativecommons.org/licenses/by/3.0/
Collection: Research Collection School of Social Sciences
Subjects: Policy positions; nominal scales; agreement; words; texts; Models and Methods; Political Science
Online Access: https://ink.library.smu.edu.sg/soss_research/3983
https://ink.library.smu.edu.sg/context/soss_research/article/5241/viewcontent/coder_reliability_and_misclassification_pvoa_cc_by.pdf