Do AI risk-assessment scores make pre-trial sentencing less biased?

A room in the Stanley Mosk Courthouse in downtown Los Angeles on March 16, 2009.
Robyn Beck/AFP/Getty Images

The decision of whether to release a defendant on bail, and under what conditions, is usually left in the hands of judges, but some courtrooms are now turning to AI risk-assessment systems in an effort to make the process less biased.

One commonly used system, the Laura and John Arnold Foundation’s Public Safety Assessment, is now used in nearly 38 jurisdictions, including four counties and one city in Arizona as well as Santa Cruz County in California. The system processes data about a defendant, such as prior convictions, past behavior and age, to produce two scores on a scale of 1 to 6: the likelihood that the defendant will fail to appear for their court date and the likelihood that they will commit another crime. These scores are among the many factors a judge can choose to weigh in a pre-trial sentencing decision.
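To make the shape of such a tool concrete, here is a minimal, hypothetical sketch of a points-based risk score mapped onto a 1-to-6 scale. The factors, weights and caps below are invented for illustration only; they are not the Public Safety Assessment's actual formula, which uses its own published factors and weighting.

```python
# Illustrative sketch of a points-based pre-trial risk score.
# The factors and weights are hypothetical, NOT the Public Safety
# Assessment's actual formula.

from dataclasses import dataclass


@dataclass
class Defendant:
    age: int
    prior_convictions: int          # number of prior convictions
    prior_failures_to_appear: int   # times the person missed a court date


def raw_points(d: Defendant) -> int:
    """Sum hypothetical points for each risk factor."""
    points = 0
    if d.age < 23:
        points += 2
    points += min(d.prior_convictions, 3)         # cap this factor at 3 points
    points += min(d.prior_failures_to_appear, 2)  # cap this factor at 2 points
    return points


def to_scale(points: int, max_points: int = 7) -> int:
    """Map raw points onto the 1-6 scale on which scores are reported."""
    return 1 + round(5 * min(points, max_points) / max_points)


d = Defendant(age=21, prior_convictions=1, prior_failures_to_appear=0)
fta_score = to_scale(raw_points(d))  # e.g., likelihood of failing to appear
print(fta_score)
```

In the real system, a second score for the likelihood of new criminal activity would be computed the same way from its own set of factors, and both scores would be handed to the judge alongside everything else in the case file.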

Proponents of using AI systems in pre-trial sentencing hope that these tools will reduce human bias and could even replace the cash bail system. But critics worry that judges will grow too reliant on the scores, and that prejudice may be baked into the system itself. The argument goes that because these risk-assessment systems rely on data such as prior convictions, and because pre-existing human bias means people of color interact more often with the criminal justice system, defendants of color will end up with higher risk scores than white defendants.
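The feedback loop the critics describe can be illustrated with a small, purely hypothetical simulation (every rate and number below is invented): two groups engage in the same underlying behavior, but one group's offenses are recorded as convictions more often, so a score driven by the record rates that group as higher risk.

```python
# Toy simulation of the critics' argument: if one group's offenses are
# recorded as convictions more often, a score built on prior-conviction
# counts will rate that group as higher risk, even when underlying
# behavior is identical. All numbers are invented for illustration.

import random

random.seed(0)


def average_score(n: int, enforcement_rate: float) -> float:
    """Average 1-6 risk score for a group whose offenses are recorded
    as convictions at the given enforcement rate."""
    scores = []
    for _ in range(n):
        true_offenses = random.randint(0, 3)  # same distribution for both groups
        recorded = sum(random.random() < enforcement_rate
                       for _ in range(true_offenses))  # biased record-keeping
        scores.append(1 + min(recorded, 5))            # score driven by the record
    return sum(scores) / n


print(average_score(10_000, enforcement_rate=0.3))  # lightly policed group
print(average_score(10_000, enforcement_rate=0.7))  # heavily policed group
```

The second group comes out with a higher average score even though both groups' true offense rates are identical, which is the pattern critics point to when they say the bias is baked into the data rather than the arithmetic.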

We talk with a researcher who is currently running a study on the Arnold Foundation’s Public Safety Assessment scoring system, as well as a professor who studies algorithmic fairness. 

Guests:

Christopher Griffin, research director at Harvard’s Access to Justice Lab, which evaluates new ideas in civil and criminal justice; the lab is currently assessing the Laura and John Arnold Foundation’s Public Safety Assessment, a risk-assessment scoring system

Suresh Venkatasubramanian, professor of computing at the University of Utah and a member of the board of directors for the ACLU Utah; he studies algorithmic fairness
