
Your Next Therapist Could Be a Chatbot App

As we face staffing shortages and the limits of human analysis, AI is expanding our approach to mental health.

By Molly Glick
Jul 1, 2022 (updated Jul 25, 2022)
(Chatbot illustration by Kellie Jaeger)


This story was originally published in our July/August 2022 issue as "Chatbots Have Arrived."


Reclining on a therapist’s couch while your therapist scribbles away on a notepad may soon become more of a rarity. For starters, a growing shortage of mental health professionals could make conventional therapy harder to find: Right now, the U.S. needs more than 6,300 additional providers to meet current demand. And as of spring 2021, over a third of Americans lived in regions with relatively little access to such care, a shortfall that’s more pronounced in rural areas.

But a growing crop of artificial intelligence technologies might be able to address those needs. Today, anyone with a smartphone can access conversational-agent apps, or chatbots, meant to help users cope with the anxieties of daily life. These AI language-processing systems imitate human discussion via text, in this case by simulating conversations with a therapist. (For now, though, they aren’t usually branded as therapy.) One such app, Woebot, was launched by clinical research psychologist Alison Darcy and other Stanford University scientists in 2017. Darcy says she developed Woebot to tackle disparities in access to mental health care, though she acknowledges it’s only part of the solution.


Woebot is primarily intended to be there for users when a therapist isn’t, Darcy notes, or to serve as a gateway to therapy; it can also reinforce lessons from in-person sessions. It was programmed to incorporate aspects of established practices like cognitive behavioral therapy. When patients are struggling between visits, it can be challenging to recall the skills learned with a professional, Darcy says. “We thought if we can address that person in that moment of need, and if you make it simple for people to engage in those tools right then and there, then it could be really clinically valuable.”
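To make the idea concrete, here is a toy Python sketch of the kind of rule-based, CBT-flavored exchange such an app might script. The keyword patterns and reframing prompts are invented for illustration; this is not how Woebot actually works.

```python
# A toy sketch of a rule-based, CBT-flavored exchange. Real apps use far
# more sophisticated natural-language processing; the keyword patterns and
# reframing prompts below are illustrative assumptions only.

# Map (hypothetical) cognitive-distortion keywords to reframing prompts.
DISTORTION_PROMPTS = {
    "always": "You said 'always'. Can you recall even one exception?",
    "never": "'Never' is a strong word. Has it ever gone differently?",
    "everyone": "Does 'everyone' really apply, or just a few people?",
    "should": "Where does that 'should' come from? Is it a rule you chose?",
}

def reply(user_message: str) -> str:
    """Return a scripted CBT-style prompt based on simple keyword matching."""
    words = user_message.lower().split()
    for keyword, prompt in DISTORTION_PROMPTS.items():
        if keyword in words:
            return prompt
    return "Thanks for sharing. What feelings came up when that happened?"

if __name__ == "__main__":
    print(reply("I always mess things up at work"))
    # -> "You said 'always'. Can you recall even one exception?"
```

Even this crude pattern matching hints at the design challenge: the scripted prompts, not any real understanding, carry the therapeutic framing.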

Plus, the conversational agent is meant to casually approach difficult conversations with a sense of humor, Darcy says. It even tells jokes. This might put some users more at ease, though it may also highlight the tech’s limited emotional intelligence. “Whale hello there!” Woebot might say cheerily alongside a whale emoji, a playfulness not always appropriate for the subject at hand. (Scholars such as Viktor Frankl and Sigmund Freud have explored the role of humor in therapy, but they probably didn’t have emojis in mind.)

Chatbots aren’t the only AI innovations scientists have been developing for mental health. Over the past two decades, more and more researchers have advanced tools to supplement therapists, plan targeted treatment and offer insights that clinicians can’t, potentially transforming the way we provide mental health care.

Psychology and Technology

Attempts to remold therapy are nothing new. The relationship between patients and therapists has long fluctuated in step with societal upheaval, says Hannah Zeavin, a historian of technology and mental health at the University of California, Berkeley. Clinicians have been holding sessions outside of the office for a century or more, and as wars, pandemics and other disruptive tragedies have limited therapy’s possibilities, the availability of trained professionals has ebbed and flowed with patient demand. “Starting with the radio but moving very quickly through things like the suicide hotline and, indeed, AI, … there has been effort to batch-process patients,” Zeavin says. “How can we treat more patients with something that moves towards the universal?”

Some scientists believe data holds the key to addressing the current mental health crisis. For example, doctors struggle to reliably diagnose and effectively treat conditions such as schizophrenia, depression and ADHD because they tend to have symptoms that are hard to pin down. “Several complex mental disorders look alike in terms of their symptoms. They’re quite overlapping,” says Sunil Vasu Kalmady, a machine-learning researcher at Canada’s University of Alberta. “Also, psychiatric disorders don’t have any one unique characteristic that you can take from a patient which is a tangible measure, like blood pressure.”

Researchers hope that with the help of AI, the mind can speak for itself. By applying specially designed algorithms to brain scans, labs could identify distinctive features that determine a patient’s optimal treatment. To do that, however, Kalmady and colleague Russell Greiner must first separate the signal from the noise. Thanks to our roughly 100 billion neurons, a functional magnetic resonance imaging (fMRI) scan offers plenty of both: The four-dimensional map depicts changes in the brain’s blood flow triggered by neural activity. Algorithms can then reveal patterns in a specific region (or interactions between multiple regions) that may be associated with particular disorders.
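As a rough sketch of that approach, the Python example below reduces each scan to per-region features and trains a classifier to separate two diagnostic groups, using synthetic data. The region count, model choice and injected group difference are assumptions for demonstration, not details of Kalmady and Greiner’s actual pipeline.

```python
# A minimal sketch of the general approach: reduce each 4-D fMRI scan to
# per-region activity features, then train a classifier to separate
# diagnostic groups. All data here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients, n_regions = 200, 90          # e.g., 90 atlas-defined brain regions

# Stand-in for region-averaged blood-flow signals extracted from scans.
X = rng.normal(size=(n_patients, n_regions))
y = rng.integers(0, 2, size=n_patients)  # 0 = control, 1 = diagnosis
X[y == 1, :5] += 0.8                     # inject a weak group difference

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

Cross-validation matters here: with thousands of voxels and few patients, a model can easily memorize noise, so held-out accuracy is the honest yardstick.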


Many conditions, such as obsessive-compulsive disorder and schizophrenia, have been shown to share common characteristics in the brain; Kalmady and Greiner found similar patterns using their algorithms. This work may help improve diagnosis of psychiatric disorders in the future, though for now, it’s still early days. The researchers say the ultimate goal is prognostics — to use the technology to choose the best course of action for a given patient. Diagnosis is just one piece of the puzzle.

“It would be nice, instead of saying someone’s depressed, [to] instead say, ‘This is a person who could benefit from an SSRI,’ ” Greiner says. “That would be a place where machinery could perhaps help — to identify a treatment.”

Machine learning could also advance suicide-prevention efforts. Currently, doctors have only a slight advantage over chance in recognizing this risk. But some teams claim their algorithms can predict attempts with significantly better accuracy.

At Vanderbilt University, practicing internist and clinical informatician Colin G. Walsh and his colleagues create algorithms that automatically sift through medical files to calculate an incoming patient’s risk. Walsh’s models are based on data that are easily accessible to health care providers, such as a patient’s demographic information, past instances of self-harm and socioeconomic status. The system alerts busy doctors and nurses when a patient who might not otherwise have been screened could be at risk. “There’s a lot of factors vying for clinicians’ attention,” Walsh adds.
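A hedged sketch of how such a risk flag might work appears below: fit a model on routinely recorded fields, then alert when predicted risk crosses a threshold. The feature set, synthetic outcomes and cutoff are illustrative assumptions, not Walsh’s actual variables or model.

```python
# Sketch of an EHR-based risk flag: train on routine record fields, then
# alert when predicted risk exceeds a cutoff. Features and labels are
# synthetic; real systems are validated far more carefully.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
# Columns: age (standardized), prior self-harm (0/1), low-income flag (0/1)
X = np.column_stack([
    rng.normal(size=n),
    rng.integers(0, 2, size=n),
    rng.integers(0, 2, size=n),
])
# Synthetic outcomes loosely tied to prior self-harm, for the demo only.
y = (0.6 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(scale=0.5, size=n)) > 0.7

model = LogisticRegression().fit(X, y)

def flag_for_review(patient: np.ndarray, threshold: float = 0.3) -> bool:
    """Alert clinicians when the model's predicted risk exceeds a cutoff."""
    return model.predict_proba(patient.reshape(1, -1))[0, 1] >= threshold

print(flag_for_review(np.array([0.1, 1, 0])))  # prior self-harm: likely True
```

The threshold is a clinical policy choice, not a modeling one: set it low and clinicians drown in alerts; set it high and at-risk patients slip through.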

Our cellphones could also provide an unprecedented look into the psyche, according to Nicholas Allen, director of the Center for Digital Mental Health at the University of Oregon. His lab created a phone app that monitors details like duration of use, physical activity and text frequency. It’s designed to study teenagers and better understand indicators of their well-being, and isn’t necessarily meant for public use. But Allen is also developing other technologies, like an instant-messaging service staffed by a human health coach and notifications that encourage healthy behaviors, which could prompt life-saving care without a long wait for an appointment. “We’re trying to get the mental health system to be something that just comes to you when you need it,” he says.
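The sketch below shows the kind of passive-sensing aggregation such an app might perform, turning raw phone events into per-day behavioral features. The event format and feature names are assumptions for illustration, not the Oregon lab’s actual schema.

```python
# A simple sketch of passive phone sensing: aggregate raw usage events
# into per-day behavioral features. Event fields are hypothetical.
from collections import defaultdict
from datetime import datetime

def daily_features(events):
    """Aggregate raw phone events into per-day behavioral features."""
    days = defaultdict(lambda: {"screen_minutes": 0, "texts_sent": 0, "steps": 0})
    for event in events:
        day = event["timestamp"].date()
        if event["type"] == "screen_on":
            days[day]["screen_minutes"] += event["duration_min"]
        elif event["type"] == "text_sent":
            days[day]["texts_sent"] += 1
        elif event["type"] == "steps":
            days[day]["steps"] += event["count"]
    return dict(days)

events = [
    {"type": "screen_on", "timestamp": datetime(2022, 7, 1, 9), "duration_min": 42},
    {"type": "text_sent", "timestamp": datetime(2022, 7, 1, 10)},
    {"type": "steps", "timestamp": datetime(2022, 7, 1, 18), "count": 5400},
]
print(daily_features(events))
```

Features like these only become meaningful as trends: a sudden drop in texting or movement relative to a teen’s own baseline is the kind of signal researchers watch for.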

Digital Drawbacks

As with any field that’s ripe for digital revolution, applying AI to therapy could come with major caveats. Some technology ethicists worry computers could affect the quality of what were once largely human-directed processes. And, based on her research, Zeavin predicts real-life therapy sessions may grow even more exclusive than in the past and primarily serve wealthy patients. “As we see that kind of cash influx, I get worried about an increasing scale-up of things like Woebot,” she says.

What’s more, AI can blend the biases of its creators — primarily, white men — into these systems. For example, language-processing models, which are used in text-based tools like Woebot, are often biased toward Western expressions of mental distress, according to a 2020 study published in PLoS ONE. AI models also tend to replicate stereotypes that associate certain racial backgrounds with specific diagnoses. “The very people who have been pushed to the margins by therapeutic care are receiving care that is not made for them,” Zeavin says.

But these tools also could eliminate barriers to care, particularly for underserved populations. For example, a mobile therapy app or chatbot could serve as an affordable alternative to expensive one-on-one therapy. And people experiencing poverty could benefit from suicide-prediction algorithms, given that low socioeconomic status is associated with an increased risk.

Even if machine learning does succeed in democratizing therapy and advancing treatment options, it will always come with a particular limitation: As hard as we try, computers likely can’t replicate an old-fashioned office conversation between two living, breathing people. “There’s no replacement for human connection,” Darcy says. “But we need to augment [that] so that when people do see the human, they can make better use of that care.”
