As mental health AI agents proliferate, Utah has found an innovative regulatory approach that walks the line between fostering innovation and keeping people safe—and other policymakers are paying attention.
Mental health AI agents (MHAs) are showing up everywhere, from clinical offices to consumer apps. But here's the thing: nobody's written a clear rulebook yet. FDA medical device rules, state licensure laws, and consumer protection statutes are all over the map, and none of them squarely covers these tools. That's where Utah's Office of AI Policy stepped in. They ran a comprehensive stakeholder study to work out what evidence-based regulation should look like, bringing together mental health practitioners, academics, tech companies, and people with real-world experience using these tools.
What the study uncovered was revealing: stakeholders didn’t see eye to eye. Practitioners worried about safety, while people who’d actually used these systems reported real benefits. Interestingly, over 80% of psychiatrists said they need more help understanding how generative AI fits into mental health care. Academics zeroed in on algorithmic bias as a key concern, and the public fretted about users developing romantic attachments to AI systems.
The real problem was that overlapping regulations were creating friction for anyone trying to build MHAs. Utah’s answer? A “safe harbor” model—one that pumps the brakes on professional licensure enforcement while still requiring companies to prove they follow best practices. That means pre-deployment safety testing, clinical advisory boards, and clear escalation protocols. And what’s clever about this approach is that it allows different deployment models to coexist: AI helping with therapy homework, supervised “AI resident” systems working alongside clinicians, and standalone MHAs for specific conditions.
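The study doesn't spell out what an escalation protocol has to look like in practice, but to make the idea concrete, here's a minimal sketch of the general shape such a safeguard could take. Everything in it, the risk levels, phrase lists, and actions, is an illustrative assumption, not anything from Utah's actual requirements; a real system would rely on validated screening instruments and clinically reviewed classifiers rather than keyword matching.

```python
# Hypothetical sketch of a "clear escalation protocol" for an MHA.
# All names, phrases, and thresholds below are illustrative assumptions.

from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"            # continue the normal conversation
    ELEVATED = "elevated"  # surface resources, flag for clinician review
    CRISIS = "crisis"      # stop AI responses, hand off to a human


# Illustrative phrase lists only; not a clinical screening instrument.
CRISIS_PHRASES = ["hurt myself", "end my life", "suicide"]
ELEVATED_PHRASES = ["hopeless", "can't go on", "no way out"]


@dataclass
class EscalationDecision:
    level: RiskLevel
    action: str


def assess_message(text: str) -> EscalationDecision:
    """Classify a user message and decide whether to escalate to a human."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        return EscalationDecision(
            RiskLevel.CRISIS,
            "Halt generation; show crisis resources (e.g., the 988 line); "
            "notify the on-call clinician per protocol.",
        )
    if any(phrase in lowered for phrase in ELEVATED_PHRASES):
        return EscalationDecision(
            RiskLevel.ELEVATED,
            "Continue with caution; log the exchange for clinical review.",
        )
    return EscalationDecision(RiskLevel.LOW, "Proceed normally.")


if __name__ == "__main__":
    decision = assess_message("Lately I feel hopeless about everything.")
    print(decision.level.value, "->", decision.action)
```

The point of the sketch isn't the keyword matching; it's the structure the safe harbor seems to be asking for: every message gets assessed, every risk tier maps to a documented action, and the crisis tier takes the AI out of the loop entirely.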
Here’s the paradox: regulations that are too strict can actually backfire, pushing people toward unregulated chatbots that have even fewer safeguards. Well-designed MHAs with real safety measures could do more good than these unregulated alternatives. The bottom line for policymakers? Think about risk-benefit tradeoffs and keep monitoring what actually happens in the real world.
Worth noting: this was a policy study built for stakeholder engagement, not hard-science research. We still need more clinical outcome data.
Original paper: "The doctor is not in, but the chatbot is: Utah's experience regulating mental health AI." npj Digital Medicine. DOI: 10.1038/s41746-026-02580-y