How AI Affects Our Sense of Self
And Why It Matters for Business

by Gizem Yalcin Williams and Stefano Puntoni

If you ever took a marketing course, you may remember the famous case from the 1950s about General Mills’ launch of Betty Crocker cake mixes, which called for simply adding water, mixing, and baking. Despite the product’s excellent performance, sales were initially disappointing. That was puzzling until managers figured out the problem: The mix made baking too easy, and buyers felt they were somehow cheating when they used it. On the basis of that insight, the company removed egg powder from the ingredients and asked customers to crack an egg and beat it into the mix. That small change made those bakers feel better about themselves and so boosted sales. Today, 70 years later, most cake mixes still require users to add an egg. 

We can take a lesson from that story today. As companies increasingly embrace automated products and services, they need to understand how those things make their customers feel about themselves. To date, however, managers and academics have usually focused on something quite different: understanding what customers think about those things. Researchers have been studying, for example, whether people prefer artificial intelligence over humans (they don’t), how moral or fair AI is perceived to be (not very), and the tasks for which people are likely to resist the adoption of automation (those that are less quantifiable and more open to interpretation). 

All that is important to consider. But now that people are starting to interact frequently and meaningfully with AI and automated technologies, both at and outside work, it’s time to focus on the emotions those technologies evoke. That subject is psychological terra incognita, and exploring it will be critical for businesses, because it affects a wide range of success factors, including sales, customer loyalty, word-of-mouth referrals, employee satisfaction, and work performance. 

We have been studying people’s reactions to autonomous technology and the psychological barriers to adopting it for more than seven years. In this article, drawing on recent research from our lab and reviewing real-life examples, we look at the psychological effects we’ve observed in three areas that have important ramifications for managerial decision-making: (1) services and business-process design, (2) product design, and (3) communication. After surveying the research and examples, we offer some practical guidance for how best to use AI-driven and automated technologies to serve customers, support employees, and advance the interests of organizations. 

Services and Business-Process Design 

Today, AI and automated technologies are embedded in a wide range of services and business processes that directly or indirectly affect consumers and employees. Upstart, for example, uses AI to decide which applicants to lend to, and Monster and Unilever use it to assess job candidates’ potential. GEICO’s DriveEasy program uses it to evaluate customers’ driving skills and determine car-insurance premiums, while IBM and Lattice help businesses adopt AI-based performance-feedback processes, which have an impact on promotion and layoff decisions. 

Given this trend, we need to ask: How do people react to decisions and feedback from AI and automated technologies? And how can businesses best incorporate them into their services and business processes to maximize customer and employee satisfaction? 

Let’s start with the first question. Together with Sarah Lim of the University of Illinois Urbana-Champaign and Stijn M.J. van Osselaer of Cornell University, we’ve recently examined situations in which the applications that people made to companies (perhaps for a loan or some benefits) were either accepted or rejected. In 10 studies, which together involved more than 5,000 participants, we found that when requests were accepted, people reacted differently to decisions made by AI than to decisions made by humans.

Their reactions were psychologically revealing: Study participants whose requests were granted by a person felt more joy than did those whose requests were granted by AI, even though the outcome was identical. Why? Because the latter felt reduced to a number and thought they couldn’t take as much credit for their success. When their requests were turned down, however, participants felt the same way whether the rejection was by a person or by AI. In both cases, and to the same degree, they tended to blame the decision-maker for their failure rather than themselves. 

In short, when delivering good news about decisions and evaluations, companies can generate more-positive reactions among customers and employees if they rely on humans rather than on AI—but that effect disappears when they deliver bad news. 

Let’s now turn to our second question: How can businesses integrate AI into their services and business processes to maximize customer and employee satisfaction? Our experimental findings offer some suggestions. 

First, when AI or automated technologies are adopted for the purposes of evaluation and feedback, we recommend having some active human involvement in those processes and making that involvement clear to customers or employees. In one of our studies, we assessed how people rate a company when a human is only passively involved in evaluations (perhaps just monitoring algorithmic decisions). We compared that condition with one in which a human is in charge of the evaluation process and one in which just an algorithm is, and we found that participants reacted positively only when human involvement was active. 

Second, we recommend that managers be selective about the degree to which they rely on their (expensive) human workforce for decision-making. Because people tend to react the same way to negative news, whether it comes from a person or from AI, companies may not need the “human touch” to deliver it—even though that contradicts traditional managerial thinking. They should, however, consider using humans as often as possible to deliver good news. 

Product Design 

AI technologies and advanced automated features are integrated in many products and are transforming how we accomplish a variety of tasks in our personal lives: iRobot’s Roomba cleans your floors, Tesla’s Autopilot lets you enjoy the ride, Jura’s fully automatic coffee machine prepares your coffee from bean to cup and even cleans itself. Increasingly, too, people are working with AI-driven applications on the job. IBM’s Watson teams up with employees at many companies on a wide range of business tasks, including financial estimates and the management of marketing communication strategies; Adobe’s AI empowers designers and enhances their creative expression with Photoshop and other applications; and workers at Toyota operate highly automated tools and machinery. The recent advent of large language models and generative AI, such as OpenAI’s DALL-E and ChatGPT, is likely to accelerate these trends. How will our interactions with all these automated technologies influence our sense of identity and accomplishment? And how will that influence the demand for products? 

Our lab has explored how people react to automated products in the context of identity-based consumption, which helps people define who they are. Stefano worked on that project with Eugina Leung of Tulane University and Gabriele Paolacci of Erasmus University Rotterdam. In six studies and across various product categories, they found that people who identify with a particular activity, such as fishing, cooking, or driving, may experience automation as a threat to their identity, leading to reduced product adoption and lower product approval. 

Findings from this project indicate that when people identify with a certain product category, they sometimes resist any technological enhancement of products in that category. When that’s the case, what should businesses do?

First, we recommend that companies refrain from targeting identity-motivated consumers with fully automated products, and that when they do target such consumers, they focus on features or tasks that allow users to feel proud and involved. Consider the case of a bicycle-component manufacturer we worked with. Sometime earlier, the company had introduced an expensive automatic gear-shifting device in the European market and had targeted cycling enthusiasts, who are more willing to pay for mechanical gadgets. But those consumers showed little interest in the device, because they felt that it would eliminate a central part of the cycling experience for them. If the company had marketed to commuters or casual bikers or had designed the feature in a way that gave riders a feeling of more control, it might have had greater success. 

Second, we recommend that companies conduct market research to assess the extent to which automation risks triggering an identity threat. 

Communication 

With the adoption of AI and automated technologies, as with so much else, communication matters. Our research has revealed important ways that companies can optimize their communication strategies to minimize the risk of resistance or backlash.

Chief among them: Companies that use AI interfaces to communicate with customers or employees should consider humanizing those interfaces. This is particularly important, we’ve found, in business processes that involve evaluation and decision-making. In one of our studies we tested whether adding humanlike features to AI would lead people to internalize positive news and rate the company more favorably. When we gave the AI a name (Sam), added an avatar, and made its interaction with people more conversational, they responded much as they would to a human employee. For companies that cannot employ humans for various reasons—such as a high volume of requests, limitations on time, or computational restrictions—this finding suggests that simply humanizing their AI may counteract the less positive reactions that feedback or news from it would otherwise provoke.

Automated technologies are changing not only product and labor markets but also how the people using those technologies feel about themselves. Increasingly, companies will need to overcome psychological barriers by strategically designing their business processes and products to take human feelings into account and by employing well-thought-out communication strategies. In some cases, automation may introduce the risk of reduced employee commitment or customer satisfaction, and companies will need to weigh its benefits against that risk. In such situations the appropriate question when considering a move to AI and automation is not “Can we?” but “Should we?” 

Gizem Yalcin Williams is an assistant professor in the Department of Marketing at the McCombs School of Business. Stefano Puntoni is the Sebastian S. Kresge Professor of Marketing at The Wharton School, where he serves as co-director of AI at Wharton.

This is a condensed version of the article originally published in the September-October 2023 issue of the Harvard Business Review and reprinted with permission. Copyright 2023 by Harvard Business Publishing; all rights reserved.