AI Chatbots—With Guardrails—Can Help Treat Your Depression And Anxiety

Brian Chandler, a man in his 20s who works for a bank in Atlanta, suffered from severe anxiety at the start of the pandemic in 2020. While looking for support, he came across Woebot for Adults, a chatbot developed with a team of mental health professionals to provide users with supportive guidance via a series of pre-scripted messages that are engaging, witty, and empathic.

“From my first conversation,” Chandler recalls, “I felt there was something there to support me and help me break down my anxious thoughts. The experience feels very natural and very human, and that helps me trust it and get to the root of my issues.”

As we all unfortunately know, the pandemic put tremendous strain on mental health and created shortages of many kinds, including a shortage of therapists, whose workloads dramatically increased. Waiting times lengthened, leaving many people to suffer unaided. Even for those fortunate enough to have a therapist, the cost and the limited window to connect, typically once a week during business hours, make accessing professional help impractical on a daily basis.

That’s where apps like Woebot, which my team at Leaps has invested in, come in. Around 1.5 million people have interacted with Woebot since its debut in 2017, and randomized controlled studies have demonstrated its ability to reduce symptoms of anxiety and depression across a range of demographics. Interestingly, 75 percent of users’ messages are sent outside of business hours, when people traditionally can’t reach a therapist. For these reasons, I’m bullish about the potential of AI as a tool to scale mental health services to those in need, as long as the proper guardrails are in place to prevent inappropriate or harmful interactions.

Infamously, for example, a different AI chatbot launched by the non-profit National Eating Disorders Association reportedly advised some users to count calories and try to lose weight — exactly the wrong advice for those struggling with eating disorders. It has since been taken offline.

“That story about the eating disorder is a great example of all the many things you shouldn’t do when you’re trying to build [an AI chatbot],” said Joseph Gallagher, chief product officer at Woebot Health. “There was no design control, the client was unaware of what was happening, and they were jumping headfirst into using a technology without seemingly doing any testing.”

Now that generative AI like ChatGPT is rapidly advancing, it is imperative for developers of tools intended for therapeutic or supportive use to rigorously build in safety guardrails and test them in randomized trials before releasing them to consumers.

Woebot Health, for instance, is now studying how it can incorporate generative AI within the confines of research approved by an institutional review board. Using generative AI to understand users’ free text, so that the experience feels more personalized, is a different and potentially easier application than using it to generate responses to users directly. The latter is where extra caution, guardrails, and human oversight are needed.
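To make that distinction concrete, here is a minimal sketch of what such a design can look like. It is illustrative only, not Woebot’s actual architecture: the classify_intent stub, the SCRIPTED_RESPONSES library, and the confidence threshold are hypothetical stand-ins for the pattern described above, in which a model is used only to interpret free text, every user-facing reply comes from a vetted script, and high-risk messages are routed to humans.

```python
# Hypothetical sketch of the "understand, don't generate" guardrail pattern.
# A language model (stubbed here) only classifies the user's free-text
# message; every reply shown to the user comes from a pre-scripted,
# clinician-reviewed library, and flagged content is escalated to a human.

from dataclasses import dataclass

# Vetted, pre-scripted replies keyed by detected intent (illustrative).
SCRIPTED_RESPONSES = {
    "anxious_thoughts": "It sounds like anxious thoughts are weighing on you. "
                        "Want to try breaking one of them down together?",
    "low_mood": "Thanks for sharing that with me. Would a short "
                "mood check-in help right now?",
    "general": "I'm here with you. Tell me a bit more about what's going on.",
}

# Intents the bot must never handle on its own.
ESCALATE = {"self_harm", "eating_disorder_risk"}


@dataclass
class Triage:
    intent: str
    confidence: float


def classify_intent(message: str) -> Triage:
    """Stand-in for an LLM-based classifier. A real system would call a
    model here; this stub keys off simple phrases for illustration."""
    text = message.lower()
    if "hurt myself" in text:
        return Triage("self_harm", 0.99)
    if "calorie" in text or "lose weight" in text:
        return Triage("eating_disorder_risk", 0.90)
    if "anxious" in text or "worried" in text:
        return Triage("anxious_thoughts", 0.85)
    return Triage("general", 0.50)


def respond(message: str) -> str:
    triage = classify_intent(message)
    # Guardrail 1: high-risk intents bypass the bot entirely.
    if triage.intent in ESCALATE:
        return ("I want to make sure you get the right support. "
                "Connecting you with a human counselor now.")
    # Guardrail 2: low-confidence classifications fall back to a safe default.
    if triage.confidence < 0.7:
        return SCRIPTED_RESPONSES["general"]
    # Otherwise, reply only from the pre-scripted library.
    return SCRIPTED_RESPONSES[triage.intent]


if __name__ == "__main__":
    print(respond("I've been so anxious about work lately"))
    print(respond("Should I count calories to lose weight?"))
```

The design choice worth noting is that the generative model never writes the reply itself: even a badly wrong classification can only select among vetted messages or trigger escalation to a human, never produce the kind of harmful advice the eating disorder chatbot reportedly gave.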

“Not every [large language model] is the same, not every use case is the same,” Gallagher pointed out, “so in each instance you need to take a step back and understand what additional risk you might be bringing into place and how can you reduce it down and still get the benefit” of the chatbot’s personality remaining engaging. If users perceive a sudden shift in tone, the bot risks alienating those it’s intended to help.

Managing Risk As The Tech Evolves

Matteo Malgaroli, assistant professor in the department of psychiatry at NYU Grossman School of Medicine, studies the role that digital tools can play in providing mental health care. While the field of psychology hotly debates whether AI has a place in therapy, Malgaroli believes it can play an important role in certain scenarios, such as assessment and session notes, freeing up clinicians for more human-to-human interaction. A conversational agent could also be used to give a clinician feedback on their empathy and rapport with a client.

But in an interview, Malgaroli was apprehensive about generative AI providing therapy directly to users, at least today, and emphasized the need for rigorous study.

“The dangers are higher with LLMs to independently administer psychotherapy than the benefits right now,” he said, adding that the tools are evolving so fast, “you basically cannot publish a peer-reviewed paper.” He began a systematic review of AI tools for mental health a year and a half ago, before ChatGPT came out. The latest iteration, GPT-4, is even more advanced in the inputs it can interpret and the outputs it can generate: it can reportedly invent new languages, build working websites from a sketch, draft lawsuits, and pass standardized exams with flying colors. Each successive model has been roughly ten times larger than the last.

As the technology advances, some experts believe that generative AI will become increasingly controllable, producing desired responses more reliably, and that the hallucination and bias problems of the early LLMs will lessen.

“The remarkable thing we’ve seen over the last three years is that as the models get larger, they get easier to control, which reduces the barrier to entry to be able to influence them,” said Mustafa Suleyman, the co-founder of DeepMind, which was acquired by Google, and of Inflection AI, a machine learning startup.

He spoke recently at The Atlantic Festival, at a talk underwritten by Leaps about the future of AI and mental health, sharing how his team at Inflection AI has developed a chatbot named Pi that is unfailingly kind and supportive.

“If you try Pi today, it’s very difficult if not impossible to cause it to say something that is in any way homophobic, judgmental, racist, it doesn’t engage in any conspiracy theories, none of the prompt hacks work,” Suleyman said to audience applause. “It shows that if you’re very intentional and deliberate and start from first principles, trying to create a very boundaried and safe AI, it is possible.”

A Brave New World

Where the future will take us with generative AI is still anyone’s guess. Suleyman predicts the models will get “very, very, very powerful and things are going to get quite strange.” In fifteen years, he expects the models will be “indistinguishable from human performance on almost all tasks.”

We are no doubt at the start of a tsunami that is crashing over society in real time, as Suleyman explains in his new book, The Coming Wave. It will leave no land, shore, or person untouched, transforming our lives as fundamentally as the invention of fire, language, and farming.

A common pattern when a powerful new technology arises is to fear and resist it, eventually accept that the genie is out of the bottle, and then finally take it for granted. In the first and second phases, where we are today, we risk becoming so overwhelmed by the unnerving prospect of disaster that we lose sight of its sweeping potential for good.

With AI chatbots for mental health, that potential is profound. In developed countries, for instance, there are about nine psychiatrists per 100,000 people; in low-income countries, there are as few as 0.1 per 100,000. Now is the time to double down on ethical innovation, studying and learning how these tools can be improved for maximum human benefit.

Suleyman described how users of Pi feel liberated to ask it every question under the sun without shame or embarrassment. “It’s super inspiring,” Suleyman said. “It’s reliable and safe, and always there, and provides you with something that I think levels you up and enables you to be your best self in the real world with your friends, your family, and your colleagues.”

Today, about 22 percent of adults have used a mental health chatbot. One day, most of us may carry a support lifeline in our pockets, available 24/7 for a personalized boost whenever we need it.

That’s a world I’m proud to help build.

Thank you to Kira Peikoff for additional research and reporting on this article. I’m the head of Leaps by Bayer, the impact investment arm of Bayer AG. We invest in potentially breakthrough technologies to overcome ten of humanity’s greatest challenges, which we call “Leaps,” including to protect brain and mind and to transform health with data.
