If justice is blind in a perfect world, it rarely is so in the real one. Research shows that in the United States, minorities, such as African-Americans, are more likely than whites to be given harsh bail terms and to be incarcerated.
Similar prejudices probably exist in Latin America. Rigorous research on the issue is lacking, but it is possible that the low level of trust in the region’s legal systems is related to less fair treatment of certain population groups. That, moreover, may be linked to one of the most disturbing phenomena of justice in the region: the huge numbers of people in pre-trial detention. Jails are overflowing with people awaiting trial, many of them poor. Thirty-six percent of the prison population in the region has not been convicted — a figure rising to more than 50% in numerous countries, including Bolivia, Guatemala, Uruguay and Venezuela.
Disparities in the criminal justice system
The question is how to reduce bias in the court system and ensure that judges dispense justice consistently. A debate rages over whether artificial intelligence, in the form of algorithms, might be able to do so.
Proving bias is difficult. But recent research has credibly shown that minorities in the United States are treated unfairly in, for example, the issuing of speeding tickets, and are incarcerated at higher rates than whites in similar circumstances. There is also evidence of political bias in sentencing that harms them in similar ways.
Problems of consistency are also common. Like other human beings, judges can be influenced by environmental factors, such as “decision fatigue,” which makes them stricter in granting parole when they have gone without a recent food break, or by emotional stress, as when they hand out longer sentences after a favorite football team has unexpectedly lost a game.
Algorithms as a solution to discrimination?
In this context, something as seemingly objective as artificial intelligence seems to hold great promise. Most people, after all, have experienced the power of algorithms to recognize images and process enormous amounts of data in seconds. Furthermore, algorithms offer a transparent procedure that can be subject to continuous scrutiny — at least by a group of experts. The question is whether it is possible to leverage this technology to improve the justice system’s fairness.
Experts say there are limitations in artificial intelligence that can lead to social biases simply being reproduced. The problem, as scholars Solon Barocas and Andrew Selbst point out in a recent paper, is that an “algorithm is only as good as the data it works with.” That is to say, data can cause algorithms to “inherit the prejudices of prior decision makers” or “simply reflect the widespread biases that persist in society at large.”
These problems stem in part from obvious technical difficulties that are intrinsic to the use of algorithms in social science. One of them is sample size: Majorities are more accurately represented in algorithms because there is more data on them than on minorities. Cultural differences also play a role because patterns that are valid for the general population may not be so for minorities. And eliminating those differences is complicated because specifically labeling minority individuals in order to improve the algorithm may be objectionable in and of itself.
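To see why sample size matters, consider a minimal simulation in Python. The data, group sizes, and the make_group helper below are entirely hypothetical and not drawn from any real justice dataset; the sketch simply illustrates how a single model trained on a pooled sample can fit the majority group's pattern well while fitting a small minority group's different pattern poorly.

```python
# Hypothetical simulation: a pooled model underserves a small subgroup
# whose true feature-outcome relationship differs from the majority's.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, weights):
    """Generate n cases; `weights` sets the group's true feature-outcome link."""
    X = rng.normal(size=(n, 2))
    y = (X @ weights + rng.normal(scale=0.5, size=n)) > 0
    return X, y.astype(int)

# Majority: 10,000 cases; minority: 500 cases with a *different* true pattern.
X_maj, y_maj = make_group(10_000, np.array([1.0, 0.5]))
X_min, y_min = make_group(500, np.array([-0.5, 1.0]))

# One model trained on the pooled sample, as a single risk score would be.
X = np.vstack([X_maj, X_min])
y = np.concatenate([y_maj, y_min])
model = LogisticRegression().fit(X, y)

print("accuracy, majority:", model.score(X_maj, y_maj))
print("accuracy, minority:", model.score(X_min, y_min))  # noticeably lower
```

The pooled coefficients are dominated by the larger group, so the model's errors concentrate on the smaller one, which is precisely the sample-size problem described above.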
An intriguing study in New York
Despite those intrinsic limitations, a recent study has offered an interesting perspective on how to combine big data and judges’ decisions. It focuses on how algorithms would have performed in New York City’s court system in comparison to what judges actually did in determining which suspects should be released and which should be jailed before trial. It found some remarkable improvements.
The study involved an ingenious design. Unable to know what crimes might have been committed by suspects who were jailed but might have been released, the authors created a counterfactual: they compared suspects with very similar profiles who faced lenient or tough judges randomly assigned to their cases. Looking at the post-release record of those freed by lenient judges, they could predict how those jailed by tough judges might have behaved if they had also been set free. They could then compare the judges’ decisions to decisions generated by artificial intelligence for the same population.
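A rough sketch of that comparison logic might look like the following. This is hypothetical code with synthetic data, not the authors' actual analysis; it assumes a toy world in which detention depends only on a randomly assigned judge's leniency, and the study's actual technique for validating counterfactual predictions is considerably more involved.

```python
# Hedged sketch: train a risk model on released defendants (whose outcomes
# we observe), score everyone, then compare judges' detention choices with
# an algorithm that detains the highest-risk cases at the same rate.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000

# Synthetic case records (all column names are illustrative).
cases = pd.DataFrame({
    "prior_arrests": rng.poisson(2, n),
    "age": rng.integers(18, 60, n),
    "judge_leniency": rng.uniform(0.2, 0.8, n),  # randomly assigned
})
true_risk = 1 / (1 + np.exp(-(0.5 * cases.prior_arrests - 0.05 * cases.age)))
cases["detained"] = (rng.uniform(size=n) > cases.judge_leniency).astype(int)
cases["failed"] = np.where(cases.detained == 0,
                           (rng.uniform(size=n) < true_risk).astype(int),
                           np.nan)  # outcome unobserved for detainees

# Train only on released defendants, where outcomes are observed.
features = ["prior_arrests", "age"]
released = cases[cases.detained == 0]
model = LogisticRegression().fit(released[features],
                                 released.failed.astype(int))
cases["risk"] = model.predict_proba(cases[features])[:, 1]

# Hold the detention rate fixed; detain the highest-risk cases instead.
rate = cases.detained.mean()
algo_detain = cases.risk >= cases.risk.quantile(1 - rate)

print("failure rate, judges' releases:", released.failed.mean())
print("predicted failure, algorithm's releases:",
      cases.loc[~algo_detain, "risk"].mean())
```

The random assignment of judges is what makes this kind of exercise credible: because leniency is unrelated to a defendant's risk, the released sample is not systematically skewed, and a model trained on it can be scored fairly against the judges.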
Rather than claiming that bias can be eliminated, the study showed that algorithms can make a valuable contribution. Indeed, had algorithms been used in these cases, crime could have been reduced by up to 24.7% with no change in jailing rates, or jailing rates could have been reduced by 41.9% with no increase in crime. Moreover, those results could have been achieved even as the algorithms reduced disparities that adversely affect African-Americans and Latinos.
Judges confront difficult choices when they make decisions on pre-trial release. Releasing suspects risks the possibility that they might commit more crimes, flee, or threaten witnesses. Holding them may put their jobs in jeopardy, create financial and psychological trauma for them and their families, and even increase the possibility that they will be convicted, especially through guilty pleas.
Latin America probably lacks sufficiently sophisticated data and large enough sample sizes at present to begin using artificial intelligence in court decisions. But it can start collecting higher quality data at the individual case level, develop artificial intelligence models, and test their performance against judges in simulations similar to the one conducted in the New York study.
One interesting case is a replication of that study with Colombian data. The results are now being used to help prosecutors promote a more efficient use of prisons by detaining before trial only those who represent the highest risk. Eventually, when this and similar models are thoroughly tested and proved effective, they might be widely used to help reverse the costly and socially counterproductive practice of holding huge numbers of people in pre-trial detention.