Tuesday, April 11, 2023

Artificial Therapy

In Saturday’s Miscellany (on this blog and on my Substack) I quoted a woman named Kat Woods, to the effect that she preferred AI-driven therapy to the real thing.

One is not surprised to see that the current rage over AI has reached the therapy world. We can say, to introduce our own shorthand, that AI can provide artificial therapy. It can provide it to those who cannot, or do not want to, consult a licensed, credentialed professional.

Now we read a new article about artificial therapy, by Kate Farmer, via Real Clear Wire (via Maggie’s Farm). It tells us that various forms of artificial therapy have been around for quite some time now, and that the new AI has merely upped their game.

By the terms of this article, the great agony in the therapy profession is that artificial therapy is unlicensed. And God knows what those machines are spewing out.

But this assumes that real therapy, practiced by licensed professionals, is notably effective. It is not. Besides, there are dozens of different variations on what constitutes therapy. To imagine that they are all equally effective is overly optimistic. If America’s young people are beset with mental health problems, that tells us that therapy does not do a very good job, even when its practitioners are licensed.

Now, Kate Farmer writes like a therapist. She seems to be more concerned with protecting her market share than with competing against artificial therapy:

Patients frustrated by long wait times and high prices in mental healthcare are increasingly turning to AI apps, websites, and chatbots for therapy. But despite their novelty, consumers should be wary. These young AI systems lack the regulatory oversight essential to ensure their safety, which can put vulnerable users at risk.

So artificial therapy is unregulated. God forbid. What we all need, apparently, is increased government regulation. We recall wistfully that Kat Woods claimed that therapists and coaches were less effective than her artificial therapist.

Still, professionals want to control their market:

But this unique “non-professional” space occupied by AI therapy apps and websites is almost entirely unregulated. AI therapy services, even highly sophisticated sites like Woebot, are still classified by the FDA as “general wellness” products — the regulatory category for “low risk products that promote a healthy lifestyle.” General wellness products are not subject to oversight in the way foods, cosmetics, or medical care are. They are held only to a set of vague “nonbinding recommendations” published by the FDA for suggested use (and that haven’t been updated since 2019). As long as AI therapy sites continue to disclaim the ability to treat specific conditions like anorexia or anxiety disorders, they are allowed to allege certain mental health benefits without verification from the FDA. 

As for the problems that artificial therapy produces, here are some. The article never says whether real, licensed professionals ever give out bad advice:

At their worst, however, they can accidentally give out harmful advice — errors that can be life-threatening when issued to vulnerable users. High-profile AI therapy companies are aware of these blunders: some, like Woebot, have even issued press releases condemning other AI therapy models, while continuing to defend their own. 

And, of course, if the FDA approves, that means that science has spoken. In a field as fluid as therapy, and given the track record of government agencies, one feels less than reassured. One wonders whether the FDA has approved every variety of psychotherapy.

Traditional FDA approval requires sufficient, valid scientific evidence assuring a product’s safety and efficacy — but no such requirements apply for GWPs. Without this oversight, AI therapy providers can make unsubstantiated claims about their model’s safety and benefits without repercussions.

Like it or not, artificial therapy is the competition. You can see the therapist world sweating already:

When actually used for “general wellness” — not for serious conditions, and certainly not for crisis episodes — they have the potential to offer unique, on-demand benefits that a regular therapist cannot. Their high degree of confidentiality, ease of access, zero wait times, and extremely low cost can make AI therapy sites a decent option down the line. However, until better user protections are in place, we cannot fully trust AI therapy systems to provide safe and effective care.

Subscribe to my Substack, for free or for a fee.

3 comments:

  1. A few basic questions:
    Who takes accountability when the AI gives harmful advice?
    Is the AI licensed and credentialed?
    Who monitors the "work product" of the AI? The State?
    Who programs the AI? The State? Purple-haired creatures from Silly Cone Valley?
    What does the AI view as "mental health"?
    When the internet is down (EMP attack?) or service is unavailable, what do "AI patients" do to cope?

  2. Ayatollah Rockinrollah, April 12, 2023 at 4:13 AM

    ELIZA (natural language Rogerian therapy AI, 1966) rises from the grave.

    https://en.wikipedia.org/wiki/ELIZA

  3. I wonder if they download the DSM into the AI therapists' databases? If so, there's a really simple way Kate Farmer and her ilk could make sure the FDA regulates or even bans AI therapy immediately. All they have to do is market an AI therapist that's programmed with the pre-1974 DSM-II. Watch the heads explode and the regulations fly!
