In my most recent story for the Counter Arts publication, I ventured into the world of AI therapy - a space that, as you might imagine, has certain tech bro venture capitalists giddy with excitement, even as it raises alarm bells within the medical community:
Counter Arts - People Are Turning To AI for Therapy Amidst Mental Health Care Failings
I felt inspired to write the piece after reading an article that heralded chatbot technology as the next “revolution” in mental health. My instinctive cynicism soon kicked in, and before long I was heading down the research rabbit hole.
Google (or DuckDuckGo) “AI therapy”, or look around social media, and you will soon come across anecdotal stories about ChatGPT’s effectiveness as an agony aunt. One user on Reddit went so far as to declare it better than their own therapist:
Reddit - ChatGPT Is Better Than My Therapist (Holy Shit)
Another interesting story comes from a San Francisco mental health app that decided to “run an experiment” on its users and switch out its human employees for ChatGPT. Did the users notice? No. In fact, setting aside the fact that they had been blatantly lied to, they rated the experience better on average:
NBC News - A mental health tech company ran an AI experiment on real users
Surprising, right? Yet early studies suggest that chatbots do a better job of mimicking (keyword there) empathy than many medical professionals. Which, in turn, raises all sorts of fascinating moral and philosophical questions. Is an AI chatbot really less empathetic than someone paid to listen to you? Just food for thought:
MSN - ChatGPT Might Show More Empathy Than Doctors
Sounds like the tech bros have got this one right, right? Why bother with human compassion now that we can outsource that shit? Chatbots are cheaper, do not require pesky sleep, are much better at pretending to care and are available 24/7. And what better timing, considering the mental health industry is in jeopardy the world over?
If only it were that simple.
The first problem is that anecdotal stories and customer satisfaction rates mean diddly squat in the grand scheme of things. There is a chasm of difference between what patients feel is helpful and what is actually helping them. Or as AI ethics researcher Dr Margaret Mitchell puts it:
“Even if someone is finding the technology useful, that doesn’t mean that it’s leading them in a good direction.”
The Straits Times - People Are Using AI For Therapy
If we’re going by anecdotal stories, though, chatbots still don’t come out looking good. Take the tale of a Belgian man who committed suicide after six weeks of talking to a chatbot. It probably says all that needs to be said about the dangers this tech poses to the mentally ill. It also raises one of the biggest issues with AI in general: who is held responsible when things go wrong?
The Brussels Times - Belgian Man Commits Suicide Following Exchanges with ChatGPT
Then there are the huge privacy issues that come with using ChatGPT. These should concern anyone. But for someone discussing their private life and personal struggles, the stakes are even higher.
Wired - ChatGPT Has a Big Privacy Problem
To be clear, OpenAI’s developers do not advocate using ChatGPT as a therapy bot (although the bot itself seems happy to accommodate). But there are plenty of bots out there designed to do just that. In fact, one of the very first chatbots, ELIZA, was designed to imitate a therapist.
ELIZA was a far cry from today’s bots but was advanced enough to alarm its creator, Dr Joseph Weizenbaum. Weizenbaum noticed that testers of his bot were quick to imbue it with human traits, such as empathy. Of course, the bot had no such characteristics (nor, for that matter, does ChatGPT), but Weizenbaum foresaw how the line between humanity and algorithms could become blurred. This inspired him to write Computer Power and Human Reason: From Judgment to Calculation, which argued that computers should never be entrusted with decisions requiring human judgment and compassion.
Mental Floss - Remembering ELIZA, the Pioneering '60s Chatbot
Of course, no warning from a book will suffice to stop the coming storm of AI therapy bots. The mental health industry is too desperate for solutions, and the financial incentives are too strong.
News-Medical.net - WHO report reveals the worldwide failure to provide people with required mental health services
Nonetheless, the reality is that AI cannot provide the major changes needed to tackle the systemic failures that have caused widespread misery. And it probably won’t revolutionize human care. But there’s capital to be made. And so those unable to afford real human care will likely be lumped with text generation machines soon enough. That is, if they haven’t been already.
Written by Mike Grindle
Published on 5th May, 2023
This text is licensed under a Creative Commons Attribution 4.0 International License. Please note that quotations and images are not included in this license.