The Therapist In The Machine
Can’t find or afford a therapist? There’s an app for that. Maybe you’ll chat with Broken Bear, who has purple fur and a patch covering the scar from a broken heart. Or a wise panda in a fedora. Or a “bespectacled and bemused” human avatar named Paul. AI therapy might be the accessible future of wellness… or a dangerous path fraught with platitudes and bad advice. Jess McAllen takes a deep dive into the recent history of therapeutic approaches to mental illness, AI therapy chatbots, and what happens when the logical rigor of technology butts up against the chaotic reality of human brains.
The website for the AI therapy tool Elomia claims that 85 percent of clients felt better after their first conversation, and that in 40 percent of cases, “that’s the only help needed.” But much like certain therapists who refuse to take on clients with a history of hospitalization, AI therapy works best when you discuss predictable life events, like, “My boyfriend and I broke up.” A break-up! The bots have trained their whole lives for this. AI mirrors traditional mental health treatment in that more generic problems are still prioritized over complex mental health needs — until someone gets to the point of inpatient hospitalization, at least, and by then they’ve already suffered considerable distress.

AI is meant to fill treatment gaps, whether their causes are financial, geographic, or societal. But the very people who fall into these gaps are the ones who tend to need more complex care, which AI cannot provide. The data has been consistent for decades: serious mental illness is highly correlated with systemic racism, poverty, and the kinds of abuse that might create trust issues around seeing a human therapist. Those with serious mental illness are still left behind in the brave new world of mental health awareness, even when that world is virtual.