
AI in Criminal Justice: The Promise and the Danger

Joseph McLean-Arthur
January 28, 2026 · 7 min read

Algorithmic tools are already influencing bail decisions, parole determinations, and policing strategies. The question is not whether AI will be used in criminal justice. It already is.

Algorithmic tools are already shaping criminal justice decisions across the United States. Risk assessment instruments influence pretrial detention decisions in dozens of jurisdictions. Predictive policing software shapes patrol deployment in major cities. Automated license plate readers track movement. Facial recognition identifies suspects. The question is not whether AI will be used in criminal justice. It already is. The question is whether it will be deployed with adequate oversight, transparency, and accountability.

The case for algorithmic tools is real. Human decision-making in criminal justice is itself subject to bias. A judge who is hungry, tired, or unconsciously influenced by a defendant's appearance makes worse decisions. Research by Danziger and colleagues found that favorable parole rulings fell from roughly 65 percent to nearly zero over the course of each decision session, rebounding after the judges' meal breaks. If an algorithm can make more consistent decisions, averaging out some of the arbitrary variation in human judgment, that could reduce disparity.

The problem is that algorithms do not simply replace human bias with neutral computation. They encode the biases present in the historical data on which they are trained. The COMPAS risk assessment tool, examined extensively by ProPublica in 2016, labeled Black defendants who did not go on to reoffend as higher risk at nearly twice the rate of comparable white defendants. The tool's creator countered that the scores were calibrated, meaning equally accurate across racial groups on a different metric. Both claims were simultaneously true, because of a mathematical constraint in how fairness definitions interact: when base rates differ between groups, no risk score short of a perfect predictor can be calibrated and also equalize false positive and false negative rates. No algorithm satisfies all fairness criteria at once, so every design choice about what fairness means reflects values, not just statistics.
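
To make that constraint concrete, here is a minimal simulation sketch. It is not COMPAS, and the base rates and threshold are purely illustrative assumptions: two groups receive a perfectly calibrated risk score, yet because their base rates differ, thresholding that score yields different false positive rates.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(base_rate, n=100_000, spread=0.15):
    """One hypothetical group: each person's true risk is spread around the group's base rate."""
    p_true = np.clip(rng.normal(base_rate, spread, n), 0.01, 0.99)
    y = rng.random(n) < p_true   # outcomes follow the true risks
    score = p_true               # a perfectly calibrated score reports them exactly
    return y, score

def false_positive_rate(y, score, threshold=0.5):
    """Share of people who did NOT reoffend but were still flagged high risk."""
    flagged = score >= threshold
    return (flagged & ~y).sum() / (~y).sum()

# Two hypothetical groups that differ only in base rate (illustrative numbers).
for name, base_rate in [("group A", 0.30), ("group B", 0.45)]:
    y, score = simulate_group(base_rate)
    print(f"{name}: FPR at threshold 0.5 = {false_positive_rate(y, score):.3f}")

# Both scores are calibrated by construction, yet the group with the higher
# base rate ends up with the higher false positive rate at any fixed threshold.
```

Running the sketch shows the higher-base-rate group absorbing more false positives even though the score treats every individual's probability identically, which is the tension at the heart of the COMPAS dispute.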

Predictive policing raises a distinct concern: feedback loops. If an algorithm directs officers to patrol more intensively in certain neighborhoods, more arrests occur in those neighborhoods, which are then fed back into the algorithm as evidence of high crime, which directs more policing there. The system amplifies patterns from historical data without any mechanism to distinguish genuine crime concentration from historical over-policing of specific communities.
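
The loop is easy to reproduce in a toy model. The sketch below is an assumption-laden caricature, not any vendor's actual system: two areas with identical true crime rates, a small historical disparity in recorded arrests, and a rule that concentrates patrols wherever recorded arrests are highest.

```python
import numpy as np

true_crime_rate = np.array([0.10, 0.10])    # identical underlying crime in both areas
recorded_arrests = np.array([51.0, 49.0])   # area 0 starts slightly over-policed
base_patrols, hotspot_bonus = 40, 20        # extra officers go to the current "hot spot"

for step in range(15):
    hotspot = np.argmax(recorded_arrests)    # area with the most recorded arrests
    patrols = np.full(2, base_patrols, dtype=float)
    patrols[hotspot] += hotspot_bonus        # concentrate patrols there
    new_arrests = true_crime_rate * patrols  # arrests scale with officer presence
    recorded_arrests += new_arrests          # ...and feed back into the data
    share = recorded_arrests / recorded_arrests.sum()
    print(f"step {step:2d}: area 0's share of recorded arrests = {share[0]:.3f}")

# Both areas have the same true crime rate, yet area 0's share of recorded
# arrests climbs every step: extra patrols generate extra arrests, which justify
# more patrols, and nothing in the loop distinguishes historical over-policing
# from genuinely higher crime.
```

In this toy version a two-point historical disparity grows steadily and never self-corrects, because the only signal the allocation rule ever sees is the arrest data its own deployments produced.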

Facial recognition presents arguably the sharpest immediate danger. A National Institute of Standards and Technology evaluation found that facial recognition algorithms produce false matches for Black faces at rates 10 to 100 times higher than for white faces, depending on the algorithm tested. Wrongful arrests based on facial recognition matches have already occurred: documented cases in Detroit and New Orleans involved innocent people arrested on the strength of an algorithmically generated misidentification. The stakes of these errors are not abstract. They are wrongful detention.

The path toward responsible use is not to reject algorithmic tools entirely. It is to require transparency about how tools work, mandate disclosure to defendants when algorithmic assessments influence decisions about them, require independent auditing for racial and other disparities, and subject deployments to meaningful public oversight. Several states have enacted algorithmic accountability legislation; most have not.

At AltReform, we use AI to assess reform proposals, not people. That distinction is deliberate. Scoring a policy for likely outcomes is fundamentally different from scoring a human being for predicted behavior. The former is a research and advocacy tool. The latter carries the weight of someone's freedom.


This work is free to share for advocacy, journalism, and research purposes.
