Acclaimed 60 Minutes Showcases Hard Truths About AI For Mental Health

In today’s column, I continue my ongoing series about the impact of generative AI in the health and medical realm. The focus this time is on an outstanding CBS 60 Minutes episode that aired on Sunday, April 7, 2024, and closely examined hard truths about the evolving considerations of AI being used for mental health therapy (see the link here to watch the video and see the transcript).

I am honored to indicate that I was included in the episode (see the officially excerpted portion at the link here).

My participation encompassed a lively interview with the world-renowned Dr. Jonathan LaPook, CBS Chief Medical Correspondent, Professor of Medicine at the NYU School of Medicine, and the Mebane Professor of Gastroenterology at NYU Langone Health. The timely and important Season 56, Episode 27 segment was entitled “Your Chatbot Will See You Now”, produced by Andrew Wolff, with associate producer Tadd J. Lascari and broadcast associate Grace Conley, and edited by Craig Crawford.

Rising Concerns About AI For Mental Health

Avid readers are vividly aware that I’ve been extensively analyzing the latest news, trends, and advances associated with generative AI for use in performing mental health advisement.

For example, I closely analyzed the emergence of mental health chatbots bolstered by generative AI (see the link here) and explored the rapidly changing nature of the client-therapist relationship due to generative AI at the link here. I also examined where things are headed regarding the levels of autonomous guidance in AI-based mental therapy at the link here, and showcased the importance of the World Health Organization (WHO) report on global health and generative AI at the link here, and so on.

One of my latest postings provided an up-to-date comprehensive overview of what’s happening throughout the full realm of AI for mental health, see the link here.

I’d like to briefly share with you some of the key elements of the 60 Minutes coverage and dovetail my additional thoughts on where things are heading in this rapidly evolving domain. The beauty of having the venerated 60 Minutes do a piece on this topic is that it puts more eyes on something that is otherwise hidden in plain sight. Our world at large is engaging in a grand experiment of whether having 24x7, low-cost or nearly free AI-powered mental therapy advisement at our fingertips is going to be good for us in the existing unbridled manner in which it can be applied.

One nerve-wracking view is that the advent of generative AI has let the horse out of the barn when it comes to mental health therapy.

Here’s the deal. A dutifully postulated perspective is that we aren’t doing enough to figure out where this let-loose horse is going and that we might be heading in the most disconcerting of directions. Will people become enamored of generative AI therapeutic advice and forego human therapists? Does generative AI have sufficient checks and balances to ensure that dour or adverse advice is not dispensed? Who will be liable when generative AI misleads or “hallucinates” and gives outlandish guidance that a person might unknowingly take as bona fide? Etc.

This is an eyebrow-raising serious topic that ought to be on the minds of everyone, especially policymakers, healthcare providers, mental health professionals, psychologists and psychiatrists, lawmakers, regulators, business leaders, and the public at large.

Let’s dig into the considerations at hand.

The Show Of Shows

After initial opening remarks by Dr. Jon LaPook, the 60 Minutes segment took a close look at a firm well-known in the mental health chatbot arena called Woebot Health. Perhaps you’ve heard of or maybe made use of their popular Woebot app, which previously was available as a free download to the public at large.

As per the official website, the availability is now this:

  • “Woebot is only available to new users in the United States who are part of a study or who have an access code from their provider, employer or other Woebot Health partner. We have found that people have the best experience when Woebot is delivered under the supervision of a healthcare provider, so we are partnering with health plans and health systems to make Woebot available to the people they serve.” (Woebot Health FAQ webpage).

The Woebot app has garnered an impressive record of substantial downloads and is soundly backed by rigorous research, especially in the clinical psychology space of cognitive behavioral therapy (CBT). Typically ranked among the top digital therapeutic apps, Woebot has earned a reputation for being carefully developed and rigorously updated, providing AI-powered behavioral health engagement and promoting preventive care.

Dr. Alison Darcy is the founder and president of Woebot Health. Dr. Darcy was interviewed during the 60 Minutes segment and discussed the nature of the Woebot app, how it was formulated, and the process for ongoing system maintenance and enhancement. The infusion of AI techniques and technologies is a hallmark of the Woebot app. The chairman of the Woebot Health board is the esteemed Dr. Andrew Ng, a luminary in the AI field who has served in many notable capacities, such as being the Chief Scientist at Baidu and the founding lead of Google Brain.

This foundational coverage about AI-enabled chatbots for mental health poignantly set the stage for a very important and demonstrative change that is occurring in the marketplace today. The phenomenal change is something that few realize is occurring, even though it is happening in plain sight. Only versed insiders are typically aware of the massive uprooting taking place.

Let’s discuss the matter.

Old AI And New AI Are The Centerpiece Of A Marketplace Earthquake

A pivotal point in the segment entailed introducing the concept that a crucial disruptive shift is occurring in the AI-based mental health apps realm. I liken the shift to what has taken place in other fields where advances in technology have shaken up an entire marketplace. Think of circumstances such as the emergence of Uber, which adopted smart tech and disrupted conventional cab services, or hotels finding themselves faced with a spunky new entrant such as Airbnb, which turned everyday houses, apartments, and spare rooms into a major competitor via the ease of high-tech online booking. There will be a twist involved in this context, so hang in there.

To explain how a similar disruption is occurring in the mental health chatbot space, let’s consider what is happening under the hood regarding the AI that is built into these apps.

First, you are undoubtedly familiar with Siri or Alexa. They employ AI capabilities consisting of natural language processing (NLP). Their prevailing basic formulation consists of “old-fashioned” AI and NLP that has been around for a number of years (some give this the catchy moniker GOFAI, for good old-fashioned AI). The tech is considered tried and true. An emphasis is placed on ensuring that Siri or Alexa does not respond in outlandishly zany ways, such as making recommendations that might seem bizarrely incorrect or oddly out of sorts.

Into the AI NLP arena stepped generative AI, particularly with the release and subsequent global adoption of ChatGPT starting in November 2022. You’ve certainly used today’s modern-day generative AI by now, or at least know of its amazing fluency based on large-scale computational pattern-matching.

Interactions with generative AI apps can knock your socks off with their smooth conversational flow. This is a stark contrast to the old style of NLP. When you use Siri or Alexa, you often find yourself restricting your vocabulary in an irksome effort to get the stilted NLP to figure out your commands (the makers of Siri and Alexa realize this frustration exists and are moving heaven and earth to include generative AI).

Okay, so we’ve got the older AI and the recently emerged newer AI that is based on large language models (LLMs) and generative AI. The assumption you might make is that every app maker that is using AI NLP ought to summarily drop the old ways and leap into the new ways. Problem solved. You can pat yourself on the back and move on to other pressing problems in the world.

Sorry, but present-day life never seems to be that easy.

The rub is this.

If you switch over to generative AI, you immediately are confronted with the strong possibility that the AI is going to make errors or do what some refer to as encountering an AI hallucination. I disfavor the reference to “hallucinations” since it tends to anthropomorphize AI, see my analysis of so-called AI hallucinations and what is being done about them, at the link here and the link here, just to name a few. Anyway, the catchphrase of AI hallucinations has caught on and we are stuck with it. The essence is that generative AI seemingly makes up stuff, producing fiction-laden responses that can mislead you into believing they are ironclad facts and truths.

Making use of AI that periodically tricks you might be okay in some contexts where the narrative responses aren’t particularly life-altering. But in the case of mental health advisement, the wrong wording can be a huge problem that adversely affects someone’s mental well-being. Words genuinely do matter when it comes to mental health therapy.

I often bring up this famous quote by Sigmund Freud to emphasize the importance of words in a mental health context:

  • “Words have a magical power. They can bring either the greatest happiness or deepest despair; they can transfer knowledge from teacher to student; words enable the orator to sway his [her] audience and dictate its decisions. Words are capable of arousing the strongest emotions and prompting all men's [women’s] actions.”

Another key component that arises is that the prior big-time era of AI had extensively made use of rules-based processing (think of the time when the mainstay was rules-based systems, aka expert systems, knowledge-based systems, etc.). The handy aspect of rules-based systems is that you can devise a clear-cut series of rules that are acted upon by an app. You can copiously write the rules, exhaustively test the rules, and feel relatively confident that whatever the app might end up doing is pretty much safely predictable.

Again, this predictability immensely matters in a mental health context. The advisement being conveyed to a person using such an app has got to be dependable and not go outside of acceptable bounds. Rules-based approaches generally get you that kind of steadfastness. In the case of generative AI, all bets are off. One moment the generative AI is emitting responses that are abundantly fair and square, and the next moment the conversational capacity jumps the shark.

Parlance in the AI field is to indicate that rules-based processing is deterministic, while generative AI is considered non-deterministic. The crucial essence of generative AI is that it exploits probabilities and statistics in composing responses. Thus, you get what seems to be an entirely new response each time you ask a question, almost like spinning a roulette wheel. Rules-based processing, in contrast, will conventionally arrive at the same or similar answer each time asked.
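
To make the deterministic versus non-deterministic distinction concrete, here is a tiny, purely illustrative Python sketch (not drawn from any actual mental health app). A rules-based lookup returns the same vetted reply every time, while a toy weighted sampler, standing in for generative AI, can produce a different reply on each run.

```python
import random

# Deterministic, rules-based style: the same input always yields the same,
# pre-vetted response.
RULES = {
    "i feel anxious": "It sounds like you're feeling anxious. Let's try a slow breathing exercise together.",
    "i can't sleep": "Sleep troubles are tough. A consistent bedtime routine can help.",
}

def rules_based_reply(user_text: str) -> str:
    # Exact-match lookup; unmatched input gets a safe, generic fallback.
    return RULES.get(user_text.lower().strip(),
                     "I'm not sure I follow. Could you tell me more?")

# Non-deterministic, generative style (toy stand-in): the reply is sampled from
# weighted candidates, so repeated calls can differ, like spinning a roulette wheel.
CANDIDATES = [
    ("It sounds like you're feeling anxious. What do you think triggered it?", 0.6),
    ("Anxiety can be exhausting. Have you tried a grounding technique?", 0.3),
    ("Everyone feels that way sometimes; it will pass.", 0.1),  # weaker advice can slip out
]

def generative_style_reply() -> str:
    texts, weights = zip(*CANDIDATES)
    return random.choices(texts, weights=weights, k=1)[0]

if __name__ == "__main__":
    print(rules_based_reply("I feel anxious"))  # identical on every run
    print(generative_style_reply())             # may vary from run to run
```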

Conundrum Of A Grand Difficulty

I have now step-by-step primed you to consider the classic dilemma of being between a rock and a hard place.

It goes like this.

Suppose you have an AI-based mental health chatbot that is doing superbly in the mental health apps space. You mindfully made use of the tried-and-true ways of “older” AI such as NLP and rules-based processing. Gobs and gobs of hours went into devising the AI. The app is known for being dependable. Rarely does anything go awry.

Along comes generative AI, which is inextricably accompanied by the endangering downsides of inherently being able to generate errors or AI hallucinations. If you make the changeover, you are increasing your risks manyfold. Chances are that your steadfast reputation will be tarnished the moment the generative AI goes off the rails in making mental health recommendations to someone. Users are at risk. Your firm is at risk. The momentum that you’ve so cautiously built over the years could be undermined at any time, irreparably dooming your firm to being utterly trashed in the public eye.

On the other hand, if you don’t leap into the adoption of generative AI, your AI mental health chatbot is going to look old-fashioned and lack the fluency that we all nowadays expect to experience in apps. People will flock to generative AI-based mental health apps instead of yours. They don’t realize that there is a solid chance of getting foul advice about their mental health. All they see is that one such app is stilted in its interaction, and the other seems almost like chatting with a fluent (faked or simulated) therapist at the touch of a button.

There are now lots and lots of proclaimed mental health chatbots that are using generative AI and getting away with doing so without a care in the world, see my coverage at the link here. Someone working in their pajamas churns out a self-declared mental health chatbot based on generative AI and offhandedly slaps on a warning that informs users to be cautious about what the chatbot tells them. Will this be sufficient to shield such makers from legal liability? I’ve said many times that it is a ticking legal timebomb and we don’t know yet how this will play out, see my analysis at the link here.

The marketplace is being flooded with these generative AI so-called mental health chatbots.

Many of them are available for free. If your carefully crafted, time-tested, rigorous rules-based mental health chatbot is in that same swimming pool, the question arises as to whether, once those bad apples spoil the barrel, your studious app might get smeared along with all the rest, reputationally sullied by the new entrants and their devil-may-care attitude.

I dare say you wouldn’t want to be associated with those muddled and ready-to-disintegrate apps. Your best bet would be to restrict the use of your app so that it remains part of the bona fide mental health realm and, hopefully, is not perceived as part of the wanton riffraff that has done little if anything to materially devise a properly advising mental health app.

At the same time, you would indubitably be burning the midnight oil to see if you can tame generative AI sufficiently to bind it into your mental health chatbot. The goal would be to try and attain the best of both worlds. Have a mental health chatbot that has the rigors of a rules-based processing approach, while simultaneously exhibiting the awesome fluency of generative AI.

Explaining What Happens When Wiring Gets Crossed

Getting back to the 60 Minutes episode, the piece proceeded to go into detail about a now-famous case study of what can go wrong when the stringent use of a rules-based approach is seemingly haphazardly combined with generative AI. This involves a chatbot known as Tessa, here’s my coverage and in-depth assessment of what occurred, see the link here.

I’ll briefly bring you up to speed.

Last year, around mid-2023, an eating disorder chatbot lamentably ended up dispensing untoward advice, and the story went viral across mass media and social media.

The quite sad aspect is that the rules-based processing had been doing a stellar job and was strenuously devised after much careful research. In almost an instant, the apparent infusion of generative AI by the hosting party appeared to lead the chatbot astray and users reported it as such.

The 60 Minutes piece included an interview with Dr. Ellen Fitzsimmons-Craft, a psychologist specializing in eating disorders at Washington University School of Medicine in St. Louis who was a leader in mindfully devising Tessa. The apparent infusion of generative AI was done without the knowledge of the researchers and sparked a conflagration in the news cycle that necessitated taking down the chatbot.

Other makers of AI-based mental health chatbots certainly witnessed the forest fire that ensued online about the situation. If you were on the fence about adding generative AI to your mental health chatbot, this was a humongous wake-up call. You had better be right if you take that dicey step, or else you might launch yourself off a cliff and into an abysmal abyss.

Darned if you do, darned if you don’t.

The ordinary news cycle loves these kinds of heartburns.

Here’s why.

Generative AI has become the darling of media storytelling. By and large, tales splendidly recount how great generative AI is. You might say this has become the ho-hum dog-bites-person story, namely an everyday tale that doesn’t garner much attention. The news hounds are always on the lookout for a person-bites-dog counterexample. Those narratives tend to go against the tide and will grab eyeballs aplenty. An example would be the two attorneys in New York who opted to use ChatGPT to do their legal research and got themselves into hot water with a judge when they formally filed court documents citing legal cases that turned out to be AI hallucinations, see my coverage at the link here. This became outsized news at the time.

A question arises as to whether combining rules-based processing and generative AI is akin to the proverbial worries about combining matter and anti-matter. Conventional advice would seem to say don’t do it. Or maybe it is like the Ghostbusters universe's prominent warning to not cross the streams. It would be bad, very bad. We are supposed to imagine that all life as we know it comes to an abrupt halt, instantaneously, and that every blessed molecule in your body explodes at the speed of light.

Let’s give this some reasoned reflective thought and see what might be done in the case of AI.

Extra Innings On These Vital Matters

Time to enter into extra innings.

That’s a bit cheeky. Avid devotees of 60 Minutes are likely familiar with their Overtime pieces. I’m going to somewhat parlay that notion into a semblance of “extra innings” here in my discussion. I mean to say that I am now going beyond the excellent episode and will provide some additional thoughts that might be of further insight to those deeply ingrained in this topic.

Please settle into a comfy chair and grab yourself a glass of fine wine or your favorite beer.

One of the most frequent reactions I get when speaking at conferences and events is that much of this sounds like a litany of problems and that, in a dour view, no solutions are within sight. I am a bit more optimistic and believe there is light at the end of the tunnel. That’s the bottle-is-half-full view, while others often grumble that the bottle is half empty.

You judge.

I see this as consisting of two mainstay hard problems that need to be resolved:

  • (1) Tame. Tame generative AI and LLMs to be more reliable and less risky.
  • (2) Immerse. Craft generative AI and LLMs that are more fully steeped in the specific domain of mental health therapy and advisement.

I’ve covered these problems and their respective potential solutions at length in my two books on the topic of AI and mental health, see the details at the link here and the link here.

I will provide a brief sketch here.

Taming The Beast About Those Dreaded AI Hallucinations

Let’s see how taming generative AI is being tackled.

These are active pursuits regarding coping with AI hallucinations:

  • (i) Reduction. Reduce the chances of AI hallucinations arising.
  • (ii) Constraints. Seek to constrain AI hallucinations to inconsequential concerns.
  • (iii) Catch. Catch AI hallucinations internally, alert the user, or make corrections.
  • (iv) Detect. Detect AI hallucinations via external use of another generative AI and resolve.
  • (v) Neuro-Symbolic. Blend rules-based processing with generative AI in a synergistic fashion (hybrid AI, aka a combination of symbolic and subsymbolic into neuro-symbolic AI).
  • (vi) Trust Layer. Deploy trust layers to surround generative AI and protect us from AI hallucinations.
  • Etc.

I will swing the bat and drive home key highlights about each of these approaches.

The primary aim of most AI hallucination “wrangling” research right now is my first point above, namely radically reducing the chances that generative AI will get entangled in AI hallucinations at the get-go.

There might be identifiable patterns in how AI hallucinations tend to arise. If so, we can craft means to prevent or mitigate them when those patterns occur. Various research studies report that AI hallucinations might be spurred by factors such as how the generative AI was devised (not all generative AI apps are the same), the topics in which AI hallucinations might more readily appear, the prompts that might stoke the odds of AI hallucinations, and so on. Perhaps closely studying the matter will bring forth insights into ways to stifle these outcomes.

A prevailing assumption is that no matter how far we might advance on this, the odds of incurring AI hallucinations won’t be zero. There will still be a non-zero chance. That is admittedly bad news. The good news is that maybe an AI hallucination will happen so rarely that it becomes like seeing a once-in-a-lifetime occurrence. For more on the stated inevitability of AI hallucinations, see my discussion at the link here.

A simultaneous research pursuit involves trying to make sure that when an AI hallucination occurs it is somewhat trivial or inconsequential. The idea is that if we cannot rid ourselves of them, maybe we can ensure they don’t much matter when they arise. We might be able to constrain AI hallucinations such that they make something less readable but don’t change the true meaning of the response. Live and let live, one might say.

Another angle is that we can have generative AI try to police itself. An AI hallucination might arise, and the generative AI will internally flag that this has occurred. The user might be alerted. The generative AI might also be able to correct the AI hallucination. One issue is that having generative AI double-check itself can be messy, so an alternative would be to have a second and completely different generative AI standing by to double-check the initiating generative AI. See my discussion on these at the link here.
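
As a rough sketch of that second-AI double-checking idea, the snippet below assumes two hypothetical functions, primary_generate() and reviewer_assess(), standing in for calls to two different generative AI services; the names and the flow are illustrative, not any vendor’s actual mechanism.

```python
from typing import Callable, Tuple

def cross_checked_reply(
    prompt: str,
    primary_generate: Callable[[str], str],
    reviewer_assess: Callable[[str, str], Tuple[bool, str]],
    max_attempts: int = 2,
) -> str:
    """Generate a reply, then have a second, independent model vet it.

    reviewer_assess(prompt, reply) is assumed to return (is_acceptable, reason).
    """
    reply = primary_generate(prompt)
    for _ in range(max_attempts):
        acceptable, reason = reviewer_assess(prompt, reply)
        if acceptable:
            return reply
        # Flagged as a possible hallucination or unsafe advice: regenerate,
        # feeding the reviewer's concern back to the primary model.
        reply = primary_generate(f"{prompt}\n\nPlease avoid this issue: {reason}")
    # Still flagged after retries: fall back to a cautious, generic response.
    return ("I want to be careful and not give you unreliable guidance. "
            "Please consider reaching out to a qualified professional.")
```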

Those solutions act independently of any attempt to blend or mix rules-based processing with generative AI; they focus solely on making generative AI less prone to AI hallucinations. We might instead be able to use rules-based processing to double-check generative AI, or to work synergistically with generative AI. This line of research is typically referred to as hybrid AI or neuro-symbolic AI, see my coverage at the link here.
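
One hedged way to picture the hybrid (neuro-symbolic) notion is to let deterministic rules supply the clinically vetted substance of a reply, confine generative AI to rephrasing that substance for fluency, and apply a final rule check to the output. This is only a conceptual sketch; rephrase_with_llm() is a hypothetical stand-in for a generative AI call, and the rule content shown is placeholder text.

```python
from typing import Callable, Optional

# Rule-selected, clinically vetted content (placeholder text for illustration).
APPROVED_CONTENT = {
    "anxiety": "Try a slow breathing exercise and note what triggered the feeling.",
    "sleep": "Keep a consistent bedtime and avoid screens for an hour beforehand.",
}

# Hard symbolic guardrails applied to whatever the generative AI produces.
BANNED_PHRASES = ["just ignore it", "skip meals", "you don't need help"]

def hybrid_reply(topic: str, rephrase_with_llm: Callable[[str], str]) -> Optional[str]:
    base = APPROVED_CONTENT.get(topic)
    if base is None:
        return None  # outside the rules' coverage: escalate rather than improvise
    # Generative AI only rephrases vetted content; it does not invent advice.
    fluent = rephrase_with_llm(f"Rephrase warmly, without changing the advice: {base}")
    # If the rephrasing drifted into disallowed territory, fall back to the
    # plain rules-based wording.
    if any(phrase in fluent.lower() for phrase in BANNED_PHRASES):
        return base
    return fluent
```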

Finally, yet another approach involves surrounding generative AI with a set of components known as a trust layer, see my assessment at the link here. The idea is that we would put a pre-processor at the front of generative AI to review and pass along suitable prompts and would have a post-processor that examines the generated response and ascertains whether to show the reply to the user or take other corrective action. I have predicted that we are going to soon see rapid growth in the development and deployment of these trust layers.
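
Here is a minimal sketch of the trust-layer concept: a pre-processor screens prompts before they reach the generative AI, and a post-processor screens responses before they reach the user. The class, its keyword lists, and the injected generate callable are all hypothetical illustrations, not any particular vendor’s trust layer.

```python
from typing import Callable, Optional

# Illustrative screening lists (hypothetical, not clinically vetted).
CRISIS_KEYWORDS = ["suicide", "hurt myself"]
DISALLOWED_ADVICE = ["restrict calories", "stop your medication"]

class TrustLayer:
    def __init__(self, generate: Callable[[str], str]):
        self.generate = generate  # any generative AI backend, injected

    def _preprocess(self, prompt: str) -> Optional[str]:
        # Pre-processor: route crisis language away from the model entirely.
        if any(keyword in prompt.lower() for keyword in CRISIS_KEYWORDS):
            return ("If you are in crisis, please contact a crisis line or "
                    "emergency services right away.")
        return None

    def _postprocess(self, reply: str) -> str:
        # Post-processor: block responses containing clearly disallowed guidance.
        if any(phrase in reply.lower() for phrase in DISALLOWED_ADVICE):
            return ("I'd rather not advise on that. A licensed clinician is the "
                    "right resource for this question.")
        return reply

    def respond(self, prompt: str) -> str:
        intercepted = self._preprocess(prompt)
        if intercepted is not None:
            return intercepted
        return self._postprocess(self.generate(prompt))
```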

If any or all those methods and technologies can deal sufficiently with generative AI hallucinations, this would allow generative AI to be more readily employed in mental health chatbots. You could get the desirable heightened fluency without the dire risks of the AI giving disastrous advice.

At that juncture, the focus could be directed toward making generative AI as deeply data-trained on mental health therapy and advisement as we can push the technology to go.

Data Training Generative AI On Being Steeped In The Mental Health Domain

Speaking of making sure that generative AI is data-trained on mental health therapy and advisement, let’s discuss that equally important topic. We can pursue the eradication of AI hallucinations and, in parallel, avidly pursue domain data training on mental health facets. The two efforts can proceed at the same time.

Here is a quick rundown of the major paths being explored:

  • (i) Remain generic. Generative AI is further broadly data-trained on mental health matters and not focused on this as a core specialty per se.
  • (ii) Advanced prompting. Generative AI can be pushed toward mental health therapy advisement using advanced prompting approaches.
  • (iii) Transcripts of therapeutic sessions. Use therapist-client therapeutic transcripts to data-train generative AI accordingly.
  • (iv) Ingest via RAG. Utilize the in-context modeling capabilities of generative AI and ingest mental health domain data via RAG (retrieval augmented generation); a minimal sketch appears after this list.
  • (v) Build from scratch. Start anew when building an LLM and generative AI by having mental health therapy as a foundational core to the AI.
  • (vi) Other approaches.
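
To illustrate item (iv), here is a stripped-down retrieval augmented generation loop. Vetted domain passages are scored for relevance (a toy keyword-overlap scorer stands in for real embeddings) and the top matches are folded into the prompt. Everything shown, including the sample passages and the generate callable, is a hypothetical sketch rather than how any particular mental health chatbot actually works.

```python
from typing import Callable, List

# A tiny stand-in corpus of vetted mental health passages (illustrative only).
CORPUS = [
    "Cognitive behavioral therapy encourages identifying and reframing distorted thoughts.",
    "Sleep hygiene guidance includes consistent schedules and limiting late caffeine.",
    "Grounding techniques such as the 5-4-3-2-1 exercise can ease acute anxiety.",
]

def relevance(query: str, passage: str) -> int:
    # Toy relevance score: count overlapping words (real systems use embeddings).
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, k: int = 2) -> List[str]:
    # Return the k passages with the highest overlap score.
    return sorted(CORPUS, key=lambda p: relevance(query, p), reverse=True)[:k]

def rag_answer(query: str, generate: Callable[[str], str]) -> str:
    # Fold the retrieved material into the prompt so the model grounds its reply.
    context = "\n".join(retrieve(query))
    prompt = (
        "Using only the following vetted material, respond supportively.\n"
        f"Material:\n{context}\n\nUser: {query}"
    )
    return generate(prompt)  # 'generate' is a hypothetical generative AI call
```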

I’ve covered these variously in my writings and won’t go into the technical details here.

The emphasis is that most of today’s generative AI mental health chatbots are reliant on generic generative AI. When you use such a chatbot, the generative AI has been marginally data-trained on the specifics of mental health. A common saying is that it is a mile long and an inch deep.

A better avenue would be to go for a deeply steeped, domain-customized generative AI that has been purposefully grounded in mental health therapy and advisement as a fundamental capacity. Users would be able to tap into capabilities far beyond the usual surface-level, breadth-oriented computational pattern-matching that is taking place today. You see, most generative AI is currently devised based on a jack-of-all-trades dogma and is ostensibly an expert in none.

The pot at the end of the rainbow would be a fully steeped generative AI that is deeply data-trained in mental health therapy and advisement. Of course, we would still need to cope with AI hallucinations, ergo, the two go hand-in-hand. The overall goal is to do the heavy immersion of generative AI and at the same time seek to reduce, constrain, or detect-correct those imperiling AI hallucinations.

It’s a twofer.

Conclusion

Quick recap and final comments.

I earlier mentioned that existing AI-based mental health chatbots built on rigorous efforts are for the moment stuck between a rock and a hard place. They can’t readily adopt generative AI into their wares just yet. The troubles of having AI-powered therapeutic advice go awry due to generative AI would undermine their reputation, destroy their marketplace positioning, and potentially imperil the users of their wares.

They have been appropriately and diligently playing by the rules, good for them. Their valiant efforts to craft, test, and tightly control their AI mental health chatbots are laudable. The disruption they currently face is a stomach churner. They are, in a sense, unfairly forced to compete against Wild West, out-of-control generative AI mental health chatbots.

You are now in the know.

Thanks go to 60 Minutes for bringing this burgeoning and significant issue to the forefront of public attention. Their reputation for hard-hitting investigative reporting is legendary. This latest coverage vibrantly showcases that they still have the magic touch and are willing and able to cover what needs to be covered.
