Sorry, Siri: Drawbacks to Algorithms Sounding Too Human

In online marketing, consumers more readily forgive algorithm errors — unless the algorithm pretends to be human

Based on the research of Raji Srinivasan


In 2016, Microsoft introduced Tay, an artificially intelligent chatbot, on Twitter. By interacting with users, Tay learned new vocabulary. Unfortunately, the bot was quickly targeted by trolls, who taught it to spout racist and sexist commentary. After 16 hours, Microsoft withdrew the chatbot.

Tay was not an isolated incident, says Raji Srinivasan, professor of marketing at Texas McCombs. Tech giants including Facebook, Google, Apple, and Amazon have suffered black eyes from algorithms that offended or harmed users. In a 2017 survey, 78% of chief marketing officers said that one kind of algorithm error had hurt their brands: placing their online ads next to offensive content.

In new research, Srinivasan offers them some encouraging news: Consumers are faster to forgive a brand for an algorithm failure than for a human one.

“They assume the algorithm doesn’t know what it’s doing, and they respond less negatively to the brand,” she says.

But the news comes with a caution. Consumers become less tolerant when an algorithm tries to mimic a human being.

“When you’re talking to Alexa, it has a name and a voice. When it starts asking irritating questions, you’re likely to hold it more responsible for the mistakes it makes.” — Raji Srinivasan

Projecting Minds into Algorithms

In recent years, consumers have become more aware that unseen programs determine much of what they see online. “Every time you go to Facebook or Google, you’re not interacting with humans but with technology designed by humans,” Srinivasan says.

How do people react, she wondered, when that technology upsets them? The answer, she suspected, depends on whether they unconsciously view it as having a mind.

According to the psychological theory of mind perception, she says, “Humans tend to assign mind — more or less — to inanimate objects. By assigning more mind to objects, they assume that the inanimate objects have more agency and are capable of taking actions.”

If someone knows they’re dealing with a mindless algorithm, Srinivasan reasoned, they might be less inclined to blame it for a blooper.

To test the idea, she worked with Gülen Sarial-Abi of Denmark’s Copenhagen Business School. In a series of 10 studies with a total of 2,999 participants, they presented examples of algorithm errors and measured participants’ responses.

In general, the researchers found, algorithms’ missteps did less damage to a company’s reputation than ones committed by people.

In one study, participants read about a fictitious investment firm that had made a costly mistake. Some were told the culprit was an algorithm, others that it was a person. Participants then rated their attitudes toward the brand on a scale from 0 to 7.

Those told about the algorithm error had more positive attitudes, giving the brand an average score of 4.55. Those who faulted humans gave a lower average rating: 3.63.

“They held the brand less responsible for the harm caused by the error.” — Raji Srinivasan

Being Too Human

While consumers were more forgiving of a nameless algorithm, they became less lenient when the algorithm had an identity.

In another experiment with the same fictitious financial firm, a third group of participants was told the error came from a financial program given the name Charles. Anthropomorphizing the algorithm, the researchers found, made consumers less tolerant of its failures:

· On brand attitude, “Charles” scored 0.51 point lower than an anonymous algorithm.

· When asked to make a small donation to a hunger charity, participants told about “Charles” gave $1.60, compared with $2.05 for those exposed to a nameless algorithm, suggesting their “distaste” carried over even to unrelated behaviors.

Attitude ratings dropped as much for the mistakes of “Charles” as for human ones, Srinivasan notes.

“When you humanize the algorithm more, people assign greater blame to it.” — Raji Srinivasan

Accuse the Algorithm

To Srinivasan, the lesson is clear: When an embarrassing error occurs, “publicize the fact that it’s the fault of an algorithm and not a person.”

Consumers’ tolerance for algorithms extends to fixing mistakes as well as making them, she adds. In another experiment, participants preferred technological supervision of algorithms over human supervision for preventing future algorithm errors.

“If you’re using technological supervision, you should highlight it,” she says. “If you’re using humans, it may be wiser not to publicize it.”

A company takes a risk when it gives a program a personality and a name, like Apple’s Siri or Microsoft’s ill-fated Tay, Srinivasan says. In such cases, the company should increase its vigilance about preventing errors and make plans for damage control when they happen.

“As more companies are using anthropomorphized algorithms, they’re likely to have more negative responses from consumers when something goes wrong,” she says. “Be prepared for that and know how you’re going to handle it.”

“When Algorithms Fail: Consumers’ Responses to Brand Harm Crises Caused by Algorithm Errors” is published online in advance in the Journal of Marketing.

Story by Steve Brooks