Wednesday, June 26, 2019

The Apps That Read Minds

It’s intriguing, to say the least. We have now created machines that can read your mind. They need only take a brief glance at your texts or your Instagram posts to tell whether you are depressed or suicidal.

When it comes to diagnosing mental health problems, physicians, David Brooks remarks, are fallible:

Primary care physicians can be mediocre at recognizing if a patient is depressed, or at predicting who is about to become depressed. Many people contemplate suicide, but it is very hard to tell who is really serious about it. Most people don’t seek treatment until their illness is well advanced.

Using A.I., researchers can make better predictions about who is going to get depressed next week, and who is going to try to kill themselves.

And then what? Did anyone ask whether these patients want AI to read their intentions and behaviors? Will they be happy to hear that an app has invaded their privacy and has consigned them to psychiatric treatment? And besides, what treatments are currently on offer?

To state the obvious, depressed patients commonly feel disconnected from other human beings. The cure for disconnection is obviously connection. I am not saying this to insult your intelligence, but because the thought seems never to have crossed the mind of most of the therapy world. After all, connection involves conversational exchange, person to person, preferably in person or at least over Skype or the telephone. Talking to a recording device is hardly an adequate substitute.

When our new techno gadgets can read your mind by analyzing the grammatical structure of your sentences, you are relieved of the need to communicate and to connect. The more you talk with another person, the more you might see that you do not need to be depressed. The more you talk, the more you might discover ways to solve your problems. The apps only know that you have problems. They do not know how to help you to solve them.

If so, one of the lifelines to mental health has been eliminated. Is it not depressing to have an app read your mind?

It might be a good idea for psychiatrists to learn how to make diagnoses through conversation, rather than by running down a checklist. As for primary care physicians, they are barely qualified to diagnose depression or suicidal tendencies.

Consider this:

On its website, the Crisis Text Line posts the words that people who are seriously considering suicide frequently use in their texts. A lot of them seem to be all-or-nothing words: “never,” “everything,” “anymore,” “always.”

This is not news. The mental health profession, especially its cognitive therapists, has long known about this. But what will happen when prospective patients learn the cues? If they want to hide their intentions, they might very well learn how to manipulate the machine.
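
To make the point concrete, here is a minimal sketch, in Python, of the sort of keyword screen the Crisis Text Line description suggests. The word list, threshold, and function name are my own illustrative assumptions, not the actual system.

```python
import re

# Illustrative cue list drawn from the words quoted above; the actual
# Crisis Text Line model is not public.
ALL_OR_NOTHING_WORDS = {"never", "everything", "anymore", "always"}

def flag_message(text: str, threshold: int = 2) -> bool:
    """Flag a message that uses at least `threshold` all-or-nothing cue words."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = sum(1 for word in words if word in ALL_OR_NOTHING_WORDS)
    return hits >= threshold

# A message that uses the cue words trips the flag...
print(flag_message("It never gets better, nothing matters anymore"))         # True
# ...while the same sentiment, reworded to avoid them, slips past.
print(flag_message("It rarely gets better, little seems to matter lately"))  # False
```

Anyone who has learned the cue list can route around a screen this crude, which is precisely the worry.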

And, among other obvious thoughts, when people learn that their communications to help lines are being monitored and analyzed by AI apps… will this make it more or less likely that they will call these help lines?

If they are suffering from disconnection, won’t they think that the prevalence of mind-reading apps means that they need not communicate with anyone?

Or else, consider these observations, which are very likely true:

When people suffering from depression speak, the range and pitch of their voice tends to be lower. There are more pauses, starts and stops between words. People whose voice has a breathy quality are more likely to reattempt suicide. Machines can detect this stuff better than humans.

There are also visual patterns. Depressed people move their heads less often. Their smiles don’t last as long. One research team led by Andrew Reece and Christopher Danforth analyzed 43,950 Instagram photos from 166 people and recognized who was depressed with 70 percent accuracy, which is better than general practice doctors.
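
For the curious, here is a rough sketch of how a machine might compute the vocal features Brooks mentions, pitch range and pauses, from a recording. It assumes the Python librosa library; the silence threshold and choice of features are illustrative assumptions, not any actual clinical screening tool.

```python
import librosa
import numpy as np

def voice_features(path: str, top_db: float = 30.0) -> dict:
    """Extract crude pitch and pause statistics from a speech recording.
    The 30 dB silence threshold is an illustrative default."""
    y, sr = librosa.load(path, sr=None)

    # Fundamental frequency (pitch) track; NaN where no pitch is detected.
    f0, _, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    pitch_median = float(np.nanmedian(f0))
    pitch_range = float(np.nanmax(f0) - np.nanmin(f0))

    # Non-silent intervals; the gaps between them are the pauses.
    intervals = librosa.effects.split(y, top_db=top_db)
    gaps = [
        (start - prev_end) / sr
        for (_, prev_end), (start, _) in zip(intervals[:-1], intervals[1:])
    ]
    return {
        "pitch_median_hz": pitch_median,
        "pitch_range_hz": pitch_range,
        "pause_count": len(gaps),
        "mean_pause_sec": float(np.mean(gaps)) if gaps else 0.0,
    }
```

Note that the numbers such a script produces say nothing about why a speaker pauses or drops her pitch, which matters for the point that follows.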

Not to be any more churlish than usual, we should ask how many of these conversational changes derive from the way an interviewer is conducting the interview. If a psychoanalyst is sitting back and saying nothing, that might well induce the patient to indulge in more depressive speech patterns. If a therapist is more adept at engaging with a patient, such patterns might diminish.

Brooks finds this hopeful. But Brooks knows nothing about mental health issues, and ought, in the end, not to opine about them. Yes, I understand that writing a regular op-ed column for the Times makes you think that you are qualified to write about things you know nothing about. It's an occupational hazard.

But, AI is coming. It is coming to the mental health field. And not just to the mental health field. The chances for abuse are legion.

Brooks writes:

The upshot is that we are entering a world in which people we don’t know will be able to understand the most intimate details of our emotional life by observing the ways we communicate. You can imagine how problematic this could be if the information gets used by employers or the state.

But if it’s a matter of life and death, I suspect we’re going to go there. At some level we’re all strangers to ourselves. We’re all about to know ourselves a lot more deeply. You tell me if that’s good or bad.

Isn’t it depressing to think that people we don’t know will be able to read our minds, to discern facts that we might choose not to share? If you think that the AI developers will stop after they discover ways to see whether you are depressed or suicidal, you are hopelessly naive. Why would they not want to delve into some of your other secrets?

If it’s just a matter of downloading an app, whatever makes you think that the state or your employer will not be rushing out to buy one? The more pervasive these apps become, the less inclined people will be to communicate... with anyone.

Of course, we can pretend that it’s a matter of life and death. It’s the all-or-nothing argument that people trot out when they want to persuade you to buy something you shouldn’t be buying.

Besides, the argument assumes that we know how to treat depression. By and large we are not very good at it. As noted in recent blog posts, SSRIs carry a significant suicide risk. And psychiatrists seem to diagnose and prescribe willy-nilly.

Brooks ignores the issue. As for the notion that the app can teach us about ourselves, our interactions with other human beings teach us as much, within the context of human relationships. The notion that an app is seeing inside your mind, and thus that you have no more privacy, is likely to produce more, not less, depression.

5 comments:

  1. "On its website, the Crisis Text Line posts the words that people who are seriously considering suicide frequently use in their texts. A lot of them seem to be all-or-nothing words: “never,” “everything,” “anymore,” “always.” "

    This reminded me of Dietrich Doerner's work... he uses simulation exercises to understand the behavior types that lead to failure in business and operations management. One of these exercises is the *fire simulation*: the subject plays the part of a fire chief who is dealing with forest fires. He has 12 brigades at his command, and can deploy them at will. The brigades can also be given limited autonomy to make their own decisions.

    The subjects who fail at this game, Doerner finds, are those who apply rigid, context-insensitive rules... such as "always keep the units widely deployed" or "always keep the units concentrated"... rather than making these decisions flexibly. He identifies "methodism," which he defines as "the unthinking application of a sequence of actions we have once learned," as a key threat to effective decision-making. (The term is borrowed from Clausewitz.) Similar results are obtained in another simulation, in which the subject is put in charge of making production decisions in a clothing factory. In this case, the subjects are asked to think out loud as they develop their strategies. The unsuccessful ones tend to use unqualified expressions: constantly, every time, without exception, absolutely, etc., while the successful "factory managers" tend toward qualified expressions: now and then, in general, specifically, perhaps...

  2. Didn't the Soviets already do that? Force institutionalization of political adversaries.

  3. So if you mostly avoid the internet does that mean this "AI" automatically classifies you as criminally insane for not volunteering your pictures or social information? Sounds like Palantir.

  4. Hal, do you think I should automate my home and business and allow AI to make my life more comfortable? Why yes, Dave, I think that's a fine idea.

  5. Ubu, you have set off my pseudo-random sensors.
