Woebot, as Karen Brown wrote this week in Science Times, is an AI-powered smartphone app that aims to provide low-cost counseling, using dialogue to guide users through the basic techniques of cognitive-behavioral therapy. But many psychologists doubt whether an AI algorithm can ever express the kind of empathy required to make interpersonal therapy work.
“These apps really shortchange the essential ingredient that — mounds of evidence show — is what helps in therapy, which is the therapeutic relationship,” Linda Michaels, a Chicago-based therapist who is co-chair of the Psychotherapy Action Network, a professional group, told The Times.

Empathy, of course, is a two-way street, and we humans don’t exhibit a whole lot more of it for bots than bots do for us. Numerous studies have found that when people are placed in a situation where they can cooperate with a benevolent AI, they are less likely to do so than if the bot were an actual person. “There seems to be something missing regarding reciprocity,” Ophelia Deroy, a philosopher at Ludwig Maximilian University in Munich, told me. “We basically would treat a perfect stranger better than AI.”
In a recent study, Dr. Deroy and her neuroscientist colleagues set out to understand why that is. The researchers paired human subjects with unseen partners, sometimes human and sometimes AI; each pair then played a series of classic economic games — Trust, Prisoner’s Dilemma, Chicken and Stag Hunt, as well as one they created called Reciprocity — designed to gauge and reward cooperativeness.
Our lack of reciprocity toward AI is commonly assumed to reflect a lack of trust. AI is hyper-rational and unfeeling, after all, surely just out for itself and unlikely to cooperate, so why should we? Dr. Deroy and her colleagues reached a different and perhaps less comforting conclusion. Their study found that people were less likely to cooperate with a bot even when the bot was keen to cooperate. It’s not that we don’t trust the bot; it’s that we do: the bot is guaranteed to be benevolent, a capital-S sucker, so we exploit it.
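In game-theoretic terms, a partner that is certain to cooperate and never retaliates makes defection the payoff-maximizing move. The short Python sketch below only illustrates that logic with made-up payoff values; it is not the researchers’ code, nor their custom Reciprocity game.

```python
# Illustrative repeated Prisoner's Dilemma (hypothetical payoff values, not from the study):
# both cooperate -> 3 each; you defect while your partner cooperates -> 5 for you, 0 for them;
# both defect -> 1 each.
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),
    ("defect", "cooperate"): (5, 0),
    ("cooperate", "defect"): (0, 5),
    ("defect", "defect"): (1, 1),
}

def my_total(my_strategy, partner_strategy, rounds=10):
    """Sum my payoff over repeated rounds against a fixed partner strategy."""
    total = 0
    for _ in range(rounds):
        mine, _theirs = PAYOFF[(my_strategy(), partner_strategy())]
        total += mine
    return total

always_cooperate = lambda: "cooperate"   # a "guaranteed benevolent" partner
always_defect = lambda: "defect"

# Against a partner that never retaliates, defecting every round pays more:
print(my_total(always_cooperate, always_cooperate))  # 30
print(my_total(always_defect, always_cooperate))     # 50
```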
That conclusion was borne out by conversations afterward with the study’s participants. “Not only did they tend to not reciprocate the cooperative intentions of the artificial agents,” Dr. Deroy said, “but when they basically betrayed the trust of the bot, they didn’t report guilt, whereas with humans they did.” She added, “You can just ignore the bot and there is no feeling that you have broken any mutual obligation.”
This could have real-world implications. When we think about AI, we tend to think about the Alexas and Siris of our future world, with whom we might form some sort of faux-intimate relationship. But most of our interactions will be one-time, often wordless encounters. Imagine driving on the highway, and a car wants to merge in front of you. If you notice that the car is driverless, you’ll be far less likely to let it in. And if the AI doesn’t account for your bad behavior, an accident could ensue.
“What sustains cooperation in society at any scale is the establishment of certain norms,” Dr. Deroy said. “The social function of guilt is exactly to make people follow social norms that lead them to make compromises, to cooperate with others. And we have not evolved to have social or moral norms for non-sentient creatures and bots.”
The true Turing test is road rage. When a self-driving car starts honking wildly from behind because you cut it off, you’ll know humanity has reached the pinnacle of achievement. By then, hopefully, AI therapy will be sophisticated enough to help driverless cars solve their anger-management issues.
The writer is a journalist with The New York Times. ©2021 The New York Times