In Cleveland and a growing number of other local and state courts, judges are now guided by computer algorithms before ruling whether criminal defendants can go free or have to stay locked up awaiting trial.
A bipartisan bail reform movement has found an alternative to cash bail: AI algorithms that can scour through large sets of courthouse data to search for associations and predict which people are most likely to flee or commit another crime.
Experts say the use of these risk assessments may be the biggest shift in courtroom decision-making since American judges began accepting social science and other expert evidence more than a century ago.
Critics, however, worry that such algorithms could end up superseding judges’ own judgment, and might even perpetuate biases in ostensibly neutral form.
States such as New Jersey, Arizona, Kentucky, and Alaska have adopted these tools. Defendants who receive low scores are recommended for release under court supervision.
Among other things, such algorithms aim to reduce biased rulings that could be influenced by a defendant’s race, gender or clothing — or maybe just how cranky a judge might be feeling after missing breakfast.
The AI system used in New Jersey, developed by the Houston-based Laura and John Arnold Foundation, uses nine risk factors to evaluate a defendant, including age and past criminal convictions. But it excludes race, gender, employment history and where a person lives.
It also excludes a history of arrests, which can stack up against people more likely to encounter police — even if they’re not found to have done anything.
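The foundation discloses which factors feed its assessment, though not every detail of the scoring. As a rough illustration only, the sketch below shows how a simple points-based pretrial risk score might be structured: it adds weighted points from a handful of factors like those named in this article and deliberately never reads race, gender, employment, address or bare arrest history. The field names and weights are assumptions for illustration, not the foundation’s actual formula.

```python
# Hypothetical sketch of a points-based pretrial risk score.
# Factor names and weights are illustrative assumptions, not the
# Arnold Foundation's actual scoring.

def risk_score(defendant: dict) -> int:
    """Return a simple additive risk score from a defendant record.

    Uses only factors analogous to those described in the article
    (age, prior convictions, prior failures to appear, a pending
    charge). Race, gender, employment, address and arrest-only
    history are deliberately never read.
    """
    score = 0
    if defendant["age"] < 23:                       # younger defendants score higher
        score += 2
    score += 2 * defendant["prior_violent_convictions"]
    score += 1 * defendant["prior_convictions"]
    score += 1 * defendant["prior_failures_to_appear"]
    if defendant["pending_charge_at_arrest"]:
        score += 1
    return score


# Example: a low score would be flagged for recommended release
# under court supervision; a high score would be flagged for review.
sample = {
    "age": 31,
    "prior_convictions": 1,
    "prior_violent_convictions": 0,
    "prior_failures_to_appear": 0,
    "pending_charge_at_arrest": False,
}
print(risk_score(sample))  # -> 1, a low-risk result
```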
An investigative report by ProPublica found that a commercial system called Compas, used to help determine prison sentences for convicted criminals, was falsely flagging black defendants as likely future criminals almost twice as frequently as white defendants.
Other experts have questioned those findings, and the U.S. Supreme Court last year declined to take up the case of a Wisconsin man who argued that the use of gender as a factor in the Compas assessment violated his rights.
Advocates of the AI approach argue that the people in robes are still in charge. Others worry the algorithms will make judging more rote over time. Research has shown that people tend to follow specific advisory guidelines rather than exercise their own judgment, said Bernard Harcourt, a law professor at Columbia.