Millions of Americans are already using AI chatbots for mental health support. Some are turning to specialized therapeutic chatbots built on clinical principles. But most aren’t. They’re using ChatGPT and Claude—platforms never designed for clinical care.

PHTI recently held a convening on AI mental health tools that brought these tensions into sharp focus. Senior leaders from health systems, health plans, technology companies, investment firms, and federal agencies joined us to discuss performance benchmarks, deployment strategies, and the evolution of clinician roles as AI capabilities advance. We’re excited to share the findings from that gathering in the coming days, particularly because they underscore a critical insight: AI is already playing a role in mental health care; the real question is how we shape that role responsibly.

At best, chatbots offer an outlet for people struggling to find clinical support. At worst, they leave vulnerable people increasingly dependent on tools that are simply not equipped to help them navigate real mental health challenges. Striking the right balance between safety and speed feels perilous. Yet we can’t afford to ignore the current reality or to keep debating whether to act.

Instead of letting fear guide our decision-making, we must advance thoughtful conversations about evidence and oversight, including how to guide patients between specialized, clinically designed mental health AI tools and the general-purpose large language models already in wide use. At a recent FDA Digital Health Advisory Committee meeting, federal regulators acknowledged these challenges and signaled that more guidance is coming. It should. Regulatory guardrails can help sectors mature, guide innovation, and, most importantly, protect patients.

But what surprised me in that discussion was the reluctance of some committee members, including champions of digital health technology, to lean in on the adoption of these AI tools. I heard a consistent theme: nervousness, hesitation, and a reflexive “I’m not sure I’m comfortable with this” rather than “what would it take to make this safe and effective?” They seemed to be operating from a place of fear rather than a search for solutions.

The numbers are stark: over a million people show signs of suicidal ideation in ChatGPT conversations every week. Hundreds of thousands show signs of psychosis or mania. Three percent of Claude conversations involve people seeking therapy or emotional support. These free chatbots aren’t optimal mental health solutions—they are what people have access to. Our traditional mental health system has failed them due to limited access, poor quality standards, and deeply entrenched social stigma.

So the question isn’t whether we’re comfortable with AI mental health tools. The question is whether using them, even in very limited, well-regulated ways, would be better than what we have now: a system where most people get no care at all, where there are no quality metrics for the care that does exist, and where people are already using completely unregulated chatbots without any safety guardrails.

The regulatory gap isn’t hypothetical; it’s widening every day. While we debate and hesitate, the market is moving fast. We are already seeing industry anxiety about stricter FDA oversight. In an uncertain policy environment, companies are positioning their tools in ways that may evade regulatory scrutiny. We can choose to lead this transition thoughtfully by establishing standards and frameworks that make these tools safer and more effective, or we can stick our heads in the sand while patients navigate this landscape alone. We don’t have the luxury of choosing inaction.

Smart oversight doesn’t mean blocking innovation. It means ensuring safety while enabling appropriate use. It means creating clear pathways for developers who want to build responsibly, and clear consequences for those who don’t. AI-enabled mental health tools have genuine advantages: they are endlessly patient and empathic, offering consistent, non-judgmental engagement that doesn’t fatigue or burn out. They’re available at 2 AM when traditional care isn’t. And they provide real access, where none existed before, for people facing barriers of cost, stigma, and availability.

Patients are counting on us to lead with guidelines that promote access and better clinical outcomes. Expanding access without appropriate safeguards isn’t supporting innovation—it’s irresponsible. At the same time, failure to act isn’t caution—it’s a choice that serves no one.

Regulators and the industry must come together around a proactive framework: standards for clinical validation; transparency requirements for algorithmic decision-making; clear guardrails around which conditions and populations these tools can appropriately serve; a distinction between general wellness support and claims of therapeutic efficacy; and human oversight, with pathways to escalate to human providers when needed.

At PHTI, we’ve already started these conversations. We are gathering leaders in healthcare, technology, and policy to tackle exactly these challenges: how to assess AI-enabled health technologies rigorously, how to move past fear toward evidence-based frameworks, and how these tools can actually deliver on their promise to improve outcomes and access while reducing costs. The mental health chatbot space will be a test case.

In 2026 and beyond, we’re focused on ensuring the adoption of any AI technology addresses three critical questions: does it work, for whom, and is it worth it? Getting to those answers requires honest conversations and tough questions. Will we move quickly enough to establish smart oversight before the market grows too wild? Will we allow fear to prevent appropriate adoption of tools that could genuinely provide meaningful support? Or will we find a path that protects patients while enabling innovation?

The technology is already here. The question is what our response will be.