Change Talk Blog

AI in Corrections: A practical guide to using AI responsibly in justice settings

Written by The Change Companies | April 20, 2026

 

The conversation around artificial intelligence in corrections has shifted. A year or two ago, most of the field was skeptical. Today, the question isn't really whether to engage with AI — it's how to do it without causing harm, undermining trust or creating more problems than you solve.

Cautious optimism is an appropriate place to start for a field that works with some of society's most vulnerable people. The stakes of a bad decision or a biased algorithm are high.

Together, healthy caution and a sense of direction make a strategy. So here's a practical framework for thinking through AI adoption in your corrections setting: what to look for, what to push back on and how to tell whether a vendor is actually equipped to operate responsibly in your environment.

 

Start with the problem, not the technology

The most common AI misstep in any sector is choosing a solution before you've clearly defined the problem. In corrections, this can look like adopting a predictive tool because it seems innovative, without asking whether it actually helps staff do their jobs better or improves outcomes for the people in your programs.

Before you evaluate any AI-powered tool, get specific about what you're trying to solve. Is it an administrative burden (the documentation, session planning and reporting that pulls staff away from direct client contact)? Is it consistency (making sure evidence-based programming is delivered the same way across facilities and staff)? Is it visibility (understanding where participants are engaging and where they're falling through the cracks)?

Different problems call for different solutions, and a clear-eyed sense of your pain points is the best filter you have.

 

The questions every vendor should be able to answer

Not all AI is the same, and not all vendors have thought carefully about what responsible deployment looks like in a corrections context. Here are the questions worth asking before you sign anything.

How is participant data stored, protected and used? This is non-negotiable. Any platform handling protected health information (PHI) about incarcerated or justice-involved individuals needs to meet HIPAA requirements at minimum. Platforms should also meet SOC 2 standards, which cover the secure management of client data beyond PHI. Vendors should be able to explain clearly whether participant data is used to train AI models, and what consent looks like if it is.

💡 Pro tip: A closed system that doesn't share data externally is a meaningful differentiator.

How does the AI reach its conclusions? Opacity is a red flag. If a vendor can't explain, in plain language, how their system generates a recommendation or flags a risk, that's a problem. Staff need to be able to understand and contextualize what the tool is telling them, not just act on it.

Has the tool been tested for algorithmic bias? Research on recidivism risk assessment tools has documented serious racial and demographic disparities in AI-generated scores. Any responsible vendor should be actively testing for and monitoring bias — and should be transparent about what they've found and how they're addressing it.
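For agencies that want to check this for themselves rather than take a vendor's word for it, a basic bias audit doesn't have to be exotic. Below is a minimal sketch, in Python, of one common check: comparing how often a tool flagged people as high risk who did not go on to reoffend, broken out by demographic group. The file and column names (historical_risk_flags.csv, demographic_group, flagged_high_risk, reoffended) are hypothetical placeholders, not features of any specific platform, and a real audit would look at more than one metric.

```python
# Minimal sketch of one common bias check: comparing false positive rates
# across demographic groups on historical data. File and column names are
# hypothetical placeholders.
import pandas as pd

def false_positive_rate(df: pd.DataFrame) -> float:
    """Share of people who did NOT reoffend but were still flagged high risk."""
    no_reoffense = df[df["reoffended"] == 0]
    if len(no_reoffense) == 0:
        return float("nan")
    return (no_reoffense["flagged_high_risk"] == 1).mean()

# Hypothetical historical records: one row per participant, with the tool's
# risk flag and the observed outcome.
records = pd.read_csv("historical_risk_flags.csv")

rates = {
    group: false_positive_rate(group_df)
    for group, group_df in records.groupby("demographic_group")
}

for group, fpr in rates.items():
    print(f"{group}: false positive rate = {fpr:.2%}")

# A large gap between groups (one group flagged in error far more often than
# another) is exactly the kind of disparity a vendor should be measuring,
# monitoring over time and able to explain.
```

A vendor that does this work should be able to show you comparable numbers for their own tool, explain how often they rerun the analysis and describe what they change when a disparity shows up.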

What decisions is this tool designed to inform, and what decisions should it never be used for? AI that helps a counselor understand where a client is in their change process is fundamentally different from AI that influences sentencing or release. The appropriate scope of the tool should be defined clearly — by the vendor and by your agency — before deployment.

Where do human relationships matter more than efficiencies? While there’s little doubt AI can streamline administrative tasks, there is no discounting the value of human connections. Within your agency, determine where your staff are needed to build rapport and trust with participants, and where AI can give them back some time.

What does staff training and change management look like? Technology only works if the people using it understand it. A vendor that hands you a platform and disappears hasn't set you up for success. Ask what onboarding looks like, how staff questions get answered and how the tool evolves based on user feedback.

 

What responsible AI actually looks like in practice

There's a difference between AI that's bolted onto a product and AI that's integrated thoughtfully into a workflow that was already built around evidence-based practice. The latter is what corrections settings actually need.

At The Change Companies, we've approached AI features on Atlas — our digital programming platform — from that direction. Atlas is built on the evidence-based practice of Interactive Journaling®, which incorporates Motivational Interviewing (MI), cognitive behavioral principles and the Stages of Change. AI features are layered onto that foundation, not substituted for it.

Two examples worth knowing about:

AI-generated progress summaries. When a participant completes journaling exercises in Atlas, the platform can generate an individualized summary for staff — highlighting areas of engagement, flagging potential concerns with color-coded alerts and surfacing insights that might otherwise get missed in a heavy caseload. The goal is to help staff have better, more informed conversations. Not to replace those conversations.

The Sessions tool, currently available for early access via waitlist, reduces the documentation burden on facilitators and can provide post-session coaching. It streamlines note-taking, highlights risk-need-responsivity (RNR) domains of concern and acts as an MI coach, flagging areas where facilitators excel and where there may be room for improvement. Each of these capabilities gives facilitators time back, freeing them to focus on connecting with those they serve.

What would you do with 10 more hours per week? →

Both features are designed to give staff more information and more time — not to make decisions for them. That's the distinction that matters most. For a deeper look at how AI can be used to identify behavioral change in corrections clients, our post on AI, desistance, and the future of community supervision is worth a read.

 

A framework your agency can use today

Even if you're not evaluating a specific tool right now, it's worth putting a basic AI framework in place before you are. SAMHSA and the National Institute of Corrections both offer resources on data governance and responsible technology use that can serve as a starting point.

At minimum, consider establishing internal clarity on three things:

  1. What categories of decisions AI should and shouldn't inform in your setting.
  2. Who is responsible for reviewing and acting on AI-generated outputs.
  3. How you'll monitor for unintended consequences once a tool is deployed.

The agencies that will get the most out of AI are the ones that approach it the same way they approach any evidence-based practice: with clear goals, ongoing evaluation and a genuine commitment to using it in service of the people in their programs.

 

Want to see how Atlas uses AI responsibly to support corrections programming? Book a demo today →

Evidence-based behavioral health Interactive Journaling® curricula are available digitally on Atlas. Atlas can save staff time while supporting fidelity to evidence-based practices.

Ready to see what Atlas can do for your program? Visit our website to schedule a personalized demo today. Learn more about Atlas →
