Forks over knives: Predictive inconsistency in criminal justice algorithmic risk assessment tools


Description

Bibliographic Details
Main Authors: GREENE, Travis, SHMUELI, Galit, FELL, Jan, LIN, Ching-Fu, LIU, Han-wei
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2022
Subjects:
Online Access: https://ink.library.smu.edu.sg/sol_research/4397
https://ink.library.smu.edu.sg/context/sol_research/article/6355/viewcontent/jrsssa_185_supplement_2_s692.pdf
Item Description
Summary: Big data and algorithmic risk prediction tools promise to improve criminal justice systems by reducing human biases and inconsistencies in decision-making. Yet different, equally justifiable choices when developing, testing and deploying these socio-technical tools can lead to disparate predicted risk scores for the same individual. Synthesising diverse perspectives from machine learning, statistics, sociology, criminology, law, philosophy and economics, we conceptualise this phenomenon as predictive inconsistency. We describe sources of predictive inconsistency at different stages of algorithmic risk assessment tool development and deployment and consider how future technological developments may amplify predictive inconsistency. We argue, however, that in a diverse and pluralistic society we should not expect to completely eliminate predictive inconsistency. Instead, to bolster the legal, political and scientific legitimacy of algorithmic risk prediction tools, we propose identifying and documenting relevant and reasonable ‘forking paths’ to enable quantifiable, reproducible multiverse and specification curve analyses of predictive inconsistency at the individual level.
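The multiverse and specification-curve idea in the abstract can be illustrated with a minimal sketch. The record, field names, scoring components, and analytic choices below are all illustrative assumptions, not the paper's actual method or any real risk instrument: we enumerate a few equally defensible "forking paths" (how to cap prior arrests, whether to treat youth as a risk factor, how to aggregate components), compute one toy risk score per path for the same individual, and sort the scores to form a specification curve whose spread quantifies predictive inconsistency.

```python
from itertools import product

# Toy record for one individual (field names are illustrative assumptions,
# not drawn from any real risk assessment instrument).
person = {"age": 23, "prior_arrests": 2, "charge_severity": 3}  # severity on a 1-5 scale

# Three "forking paths": equally justifiable developer choices (illustrative).
prior_caps = [5, 10]            # cap prior arrests at 5 vs 10 before normalising
age_features = [True, False]    # include youth as a risk factor or not
aggregations = ["mean", "max"]  # combine component scores by mean vs max

def risk_score(p, prior_cap, use_age, agg):
    """Toy risk score in [0, 1] built from normalised components."""
    components = [
        min(p["prior_arrests"], prior_cap) / prior_cap,  # criminal history
        (p["charge_severity"] - 1) / 4,                  # current charge
    ]
    if use_age:
        # Youth as risk: 18 maps to 1.0, 45 and older to 0.0.
        components.append(max(0.0, (45 - p["age"]) / 27))
    return sum(components) / len(components) if agg == "mean" else max(components)

# Multiverse: one predicted score per combination of analytic choices.
scores = sorted(
    risk_score(person, cap, age, agg)
    for cap, age, agg in product(prior_caps, age_features, aggregations)
)

# The sorted scores are the specification curve; their spread is this
# individual's predictive inconsistency across the forking paths.
print([round(s, 2) for s in scores])
print(f"spread: {scores[-1] - scores[0]:.2f}")
```

Even in this toy multiverse of eight specifications, the same individual's score varies substantially, which is the individual-level inconsistency the abstract proposes to document and quantify.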