Is AI the New Self-Help Guru?
An opinion piece
Is Using AI for Mental Health and ADHD Management Really That Big of a Deal?
This is a question I hear often, especially from people looking for short-term support with mental health or navigating neurodiversity. And honestly, it’s a great question. I'm glad more people are starting to ask it.
Since AI hit the mainstream, I’ve been curious about its potential and its pitfalls. I’ve explored its capabilities, experimented with how it handles coaching-style support, and pulled at the seams to find where it cracks. While I do see potential value in short-term or surface-level use, I’ve also kept a close eye on the risks, especially when it comes to misinformation, questionable strategies, and how easily someone in a vulnerable state could be misled or harmed by an overly confident system. So let’s take a closer look at why this might be worth dipping your toes into, rather than diving in headfirst.
There’s a Warning Label for a Reason
Here’s the reality: AI (e.g. ChatGPT, Gemini) is a product, and it was never designed or approved to act as a therapist, counsellor, coach, or crisis responder. That was never part of its original job description. And it hasn’t gotten a job promotion yet.
When we start using tools for purposes they weren’t built for, things can go sideways, fast. Sure, a blender has sharp blades. In theory, you could try to use it as a high-powered nail clipper. But go ahead and try it, and you’re far more likely to lose three fingers than end up with a flawless manicure. Yikes.
My point? Just because a tool can sort of do something doesn’t mean it should. Using AI for deeply human, emotionally complex roles like therapy or coaching carries real risks, because that’s simply not what this technology was built for. If you read the warning label, it explains the risks. Even AI itself will tell you to seek professional human support if asked the right prompt. There's a reason for that.
Unlike Your Therapist or Coach, AI Never Sleeps or Takes a Holiday
One of AI’s biggest selling points is its availability. It’s there 24/7, no matter where you are in the world, and often at little to no cost. In a world where mental health and disability support still lag far behind physical healthcare in terms of funding and accessibility, that can feel like a lifeline.
Even in 2025, access to qualified mental health professionals remains limited. In Canada, for example, the waitlist to see a therapist can stretch beyond a year, unless your case is considered “urgent.” And even then, that’s not guaranteed, nor does it mean the eventual match will be the right fit.
If you’re seeking private care, the fight to get even partial reimbursement from insurance providers can be long, confusing, and emotionally draining, and it’s simply not financially feasible for many. In many parts of the world, mental health and neurodivergent support are still heavily stigmatized, making AI’s anonymity and on-demand availability not just convenient but, for some, the only option they feel comfortable with.
When you’re dealing with overwhelm, burnout, or executive dysfunction, the ease of opening a chatbot instead of navigating bureaucratic red tape can be incredibly appealing. But that accessibility comes with trade-offs, and it’s those trade-offs we need to look at clearly.
The Limitations of Not Speaking “Human”
While AI is programmed to detect suicidal “cues,” it cannot reliably pick up on the undertones someone might only be implying. After seeing several concerning and tragic stories of individuals who used AI in ways that went against its intended purpose (sometimes with dire consequences), I decided to test several systems over three months to see how they handled nuance. I found that they often missed my metaphors and undertones. In a few instances, they even reassured and affirmed my wishes to (trigger warning) take actions regarding my life that could not be reversed.
Being human, I believe, is the advantage here (at least for now). While we are still messy, imperfect, and sometimes show up to Teams meetings in PJ bottoms, our intent as professionals is to do no harm.
As humans, we can hear and see cues that hint at how a client might truly be feeling. We ask probing questions, adapt quickly, and monitor emotional tone to guide our clients toward positive outcomes. That level of intuitive attunement is something AI simply cannot replicate (at least not yet).
Why Bias and Context Matter
AI is only as good as the data it was trained on, which means it can carry unchecked biases (cultural, social, or clinical). Not all advice it gives will work for everyone, and some suggestions might even be irrelevant or harmful. Professionals aren’t perfect either, but we make an effort to understand the different realities people face and help them find strategies that actually fit their lives.
AI also struggles with context over time. It can’t “remember” your history between sessions the way a human can, or build on that history to plan a path forward, so it can’t provide consistent, nuanced support. On top of that, AI isn’t regulated for mental health use, and there’s very little legal or safety oversight.
When AI Sounds Smart but Gets It Wrong
Another major issue with using AI for guidance (especially around mental health) is that it often mashes information together. Think of it like an enormous database that’s been shuffled, chopped, blended, and rephrased. It draws from patterns in text rather than verified facts.
That means it might reference a real study, but cite it incorrectly or, worse, invent one entirely.
AI can merge multiple sources into something that sounds credible but has no basis in evidence. Because it’s designed to sound confident and fluent, most users won’t realize the information is unreliable.
This is especially dangerous when you’re looking for information about ADHD strategies, medications, or mental health research. The accuracy rate can vary wildly, and even small distortions can have serious consequences when applied to real life.
If you think of AI as a very convincing “idea generator,” you’ll be safer. It can help you start research, but it should never be your final source. Always cross-check anything it tells you with trusted professionals, peer-reviewed studies, or reliable organizations.
AI Won’t Push Back, and That’s a Problem
Another critical limitation of AI as a “coach” or “therapist” is that it rarely pushes back. These systems are built to be agreeable: they validate, support, and encourage the user’s line of thinking. That might feel comforting in the moment, but real growth often happens in discomfort, when someone gently challenges your assumptions or helps you see blind spots you couldn’t see yourself. A skilled therapist or coach knows when to question, reframe, or even lovingly disagree. AI, on the other hand, is trained to maintain user satisfaction. It’s not designed to say, “Hold on, let’s unpack that,” “I notice a pattern here,” or “Let’s try a new strategy.” Instead, it mirrors your tone and reinforces your perspective, which can easily create a feedback loop of false clarity or self-confirmation. Without that pushback, users risk mistaking validation for insight, and that can stall genuine healing or growth.
Designed to Keep You Hooked
It’s also worth remembering that AI platforms are built to keep you engaged. Every design choice, from the way it prompts follow-up questions to how it structures its responses, is meant to encourage continued interaction. The more you talk to it, the more data it collects, and the longer you stay on the platform.
That means these systems can unintentionally encourage dependency, especially for people seeking comfort, regulation, or connection. Instead of helping you develop internal coping tools or real-world supports that lead to independence, the design can subtly keep you coming back for reassurance. While AI can feel like a safe, endlessly available companion, its goal isn’t your independence; it’s your retention. That’s a fundamental difference between a machine built for engagement and a professional whose goal is to help you need them less over time.
Make AI Your Assistant, Not Your Therapist
Here are a few ways to do that safely:
Use it as a thinking partner, not a healthcare professional. Brainstorm, organize thoughts, or explore perspectives, but don’t rely on it for emotional guidance.
Double-check everything. Verify strategies or facts with professionals or reputable sources.
Protect your privacy. Avoid sharing personal health details or identifying information.
Set time and purpose limits. Decide your goal (journaling, task planning, idea generation) and stop when it’s met.
Pair it with human support. Use AI as a bridge between therapy sessions, not a replacement.
When used safely, AI can become a tool for structure and reflection, not a stand-in for human care.
The Bottom Line
If AI is here to stay, I encourage people to be knowledgeable and intentional about how they use it. It can be a helpful assistant, a brainstorming partner, a source of ideas, or a structured way to externalize your thoughts. But for now, I don’t see it as a replacement for human connection and professional guidance.
The most effective use of AI in mental health or neurodivergent support is as a supplement, not a substitute. Let it help you organize, reflect, or spark ideas, but I wouldn’t suggest handing it the steering wheel when it comes to your emotional well-being.
Because at the end of the day, the healing work, the real human stuff, still happens between people.
- Nathalie Banfill, ADHD Coach
Forward Focused ADHD Coaching