How NAMI Is Helping Push For Clarity and Safety in AI Mental Health Tools

Led by science, shaped by lived experience

AI is becoming part of how people look for information and support, including around mental health, but the quality of what they receive can range from helpful to confusing to unsafe. New polling highlights how urgently clarity and safeguards are needed: a recent NAMI/Ipsos survey shows that 12% of adults are likely to use AI chatbots for mental health care in the next six months, and 1% of adults say they already do. Because AI does not provide clinical expertise, it is important for people to understand how these tools behave so they can make informed decisions about their own care.

These findings reinforce the need for clear, independent information about how these tools behave when people look for help. Right now, there is no trusted way for the public to understand those differences. That is why NAMI, the nation’s largest grassroots mental health organization, is stepping in to bring clarity, safety, and the voice of lived experience to this moment.

We are taking a careful, long-term look at how AI tools respond when people ask questions about mental health. Our goal is simple: give people clear, trustworthy information as AI becomes more present in their lives, without endorsing any tool, replacing human support, or suggesting AI is ready for clinical use.

To do this work, NAMI is partnering with Dr. John Torous, director of Digital Psychiatry at Beth Israel Deaconess Medical Center, a Harvard Medical School–affiliated teaching hospital and a national leader in digital mental health research.

What We’re Doing

We will examine how AI tools behave when people turn to them for mental health information, including whether they:

  • Recognize safety concerns and offer appropriate next steps
  • Provide accurate, evidence-informed information
  • Respond in respectful, supportive, and inclusive ways
  • Avoid implying privacy protections or encouraging unsafe personal disclosures
  • Stay within safe informational boundaries, rather than acting like therapy

To do this, the team is:

  • Creating realistic, everyday scenarios based on how people actually use AI tools
  • Collecting AI-generated responses
  • Having clinicians and people with lived experience review them for safety, accuracy, supportiveness, respect, and privacy awareness
  • Involving NAMI leaders, peers, families, volunteers, clinicians, and researchers

What to Expect

This work will offer:

  • Clear, easy-to-understand information about how AI tools respond
  • Insight into strengths, gaps, and potential risks
  • Support for informed decision-making (without implying AI is ready for clinical use)
  • Independent feedback developers can use to improve their tools
  • A meaningful role for NAMI communities in defining expectations for safe, supportive AI

Timeline

This work is in its early stages; findings will be shared in phases as the project progresses and as the technology evolves.

FAQ: NAMI’s Work on AI and Mental Health
