Fairness in Justice: Use of Machine Learning in Pre-trial Detention
A large body of social science evidence indicates that objective, reliable and valid risk assessment instruments evaluate risk more accurately than professional human judgment alone. In the world of pretrial detention, where more than 10 million people are jailed each year in the United States after arrest, pretrial risk assessment tools may provide a more efficient, transparent and fair basis for decisions than having a judge quickly scan documents detailing the defendant's prior record and current charges and decide in mere minutes. However, these assessments will retain any bias present in the data collected by criminal justice agencies.
The pretrial risk assessment tools in use across the United States vary in how they estimate risk, the factors they assess and their sources of information. Eric Ghysels, professor of finance and economics at UNC Kenan-Flagler Business School, co-authored a recent paper exploring machine learning classification problems. Ghysels applied his findings to the predictive validity of algorithmic decision-making in pretrial detention assessments, suggesting a shift to a binary choice model. Such a model would better inform judicial decision-making by more accurately separating low-risk defendants from those who present greater risk.
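To make the idea concrete, a binary choice model of the kind discussed here is typically a logistic regression: it maps defendant features to a probability of an adverse outcome, which a threshold then converts into a low-risk/higher-risk classification. The sketch below is purely illustrative and uses synthetic data with hypothetical features; it is not the model from Ghysels' paper, and no real defendant data or validated feature set is implied.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, illustrative data only: two hypothetical scaled features
# (e.g., number of prior arrests, time since last arrest).
n = 200
X = rng.normal(size=(n, 2))
# Hypothetical ground truth: a noisy linear score determines the label.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit a logistic (binary choice) model by gradient descent.
w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(2000):
    p = sigmoid(X @ w + b)      # predicted probability of adverse outcome
    w -= lr * (X.T @ (p - y) / n)
    b -= lr * np.mean(p - y)

# Threshold the probabilities to classify low-risk vs. higher-risk.
probs = sigmoid(X @ w + b)
pred = (probs > 0.5).astype(float)
accuracy = np.mean(pred == y)
```

The choice of threshold (0.5 here) is itself a policy decision: lowering it flags more defendants as higher risk, trading false negatives for false positives, which is exactly where the fairness concerns raised by the panel enter.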
Our panel of experts included Jessica Smith, W.R. Kenan Jr. Distinguished Professor of Public Law and Government and director of the Criminal Justice Innovation Lab at UNC's School of Government; Julia Angwin, founder and editor-in-chief of The Markup; Eric Ghysels, Rethinc. Labs faculty director and Edward Bernstein Distinguished Professor of Economics and Professor of Finance; and Dr. Fay Cobb Payton, professor of information systems/technology at North Carolina State University.
They shared their technological, legal and social expertise to answer the questions raised by the real-world performance of risk assessment instruments.