Behaviors Behind High‑Impact AI Use

UT Austin and KPMG analysis of 1.4 million interactions shows how employees achieve sophisticated AI collaboration


A landmark study of 1.4 million real workplace interactions with artificial intelligence reveals teachable differences between routine and sophisticated AI use that offer organizations a concrete road map for identifying and scaling high-impact AI capability.

The joint study by KPMG LLP, the U.S. audit, tax, and advisory firm, and the McCombs School of Business at The University of Texas at Austin identifies distinct, observable patterns in how high‑impact users frame problems, guide AI reasoning, and apply AI across complex tasks, insights that KPMG is now putting to work internally and with clients. The study is published today in Harvard Business Review.

The researchers spent eight months studying KPMG LLP’s back-office operations, analyzing how people use AI at work. The users who were most successful with AI, the “sophisticated” users in the study, were not those who simply used it most frequently or those with the best technical skills; rather, they were the ones who excelled in patterns of engagement with AI: framing problems, directing the AI model’s approach to tasks, and applying AI across their work.

What Is Sophisticated AI Use?

To move beyond assumptions about what “good” AI use looks like, KPMG LLP collaborated with Zach Kowaleski, Nick Hallman, and Jaime Schmidt, faculty members in McCombs’ Shulkin Department of Accounting, to analyze behavioral signals embedded in real-world AI interactions, evaluating more than 30 characteristics of prompt behavior across months of usage data, including task complexity, prompting techniques, and iteration patterns.

“We weren’t looking for power users in the abstract,” said Schmidt, McCombs professor of accounting and director of the C. Aubrey Smith Center for Auditing Education & Research. “We were looking for people who had figured out how to think with the model, not just ask it questions.”

What separated the best users wasn’t experience or technical know-how. The analysis surfaced consistent differences in how a small group of sophisticated users engaged with AI over time.

What Sophisticated AI Behavior Actually Looks Like

Sophisticated users treated AI as a reasoning partner, shaping how it approached problems by asking the model to assume a certain role or perspective; providing concrete direction and examples; showing the AI how to reason through a task; requiring the model to explain how it got to a response; and offering ongoing feedback. Rather than accepting first outputs, they refined the model’s work over multiple exchanges and applied it to their most complex and ambitious tasks.

They also set boundaries, specified structure, articulated clear objectives, and delegated cognitively demanding tasks across brainstorming, analysis, technical guidance, and problem-solving. These users treated AI as a general cognitive tool, not a narrow productivity aid.

Crucially, these behaviors left visible, measurable patterns that organizations can observe. Sophisticated use correlated strongly with four signals: how often users return to AI, how persistently they refine outputs, how ambitious their initial requests are, and how intentionally they select tools or models.

“The gap between routine and sophisticated AI use is not hidden in prompts themselves, but in patterns of engagement. And once those patterns are visible, they become possible to recognize, discuss, and scale,” said Anu Puvvada, KPMG Studio Leader, who led the research for the firm. “Iteration enables ambition, ambition drives strategic tool choice, and repeated success reinforces engagement.”

How KPMG Leveraged the Insights to Upskill Employees

Approximately 5% of users consistently demonstrated these behaviors across months of usage data, providing a clear, data-backed signal of what effective human-AI collaboration looks like in practice. In response, KPMG undertook a firmwide training and enablement effort to help employees build the more sophisticated skills and behaviors identified in the research.

“We realized early on that access to AI alone doesn’t drive better outcomes, a challenge many organizations are still grappling with,” said Steve Chase, global head of AI and Digital Innovation at KPMG. “That’s why we put a deliberate set of AI‑enabled tools, training programs, and routines in place to make effective behaviors visible and expected, and to teach better problem framing, stronger supervision of AI, and purposeful iteration.”

For KPMG, these insights have been translated into a set of AI-First behaviors, supported by practical playbooks, training, and peer-led champion networks. By embedding these research-backed behaviors into the firmwide learning ecosystem — through the firm’s aIQ Learning Academy, role-based skills development, and hands-on practice — more of KPMG’s workforce can move from routine prompting to higher-impact human‑AI collaboration, using AI as a thinking partner to brainstorm, refine, and validate work through intentional iteration.

These same insights now inform how KPMG employees work with clients: helping them define what effective AI use looks like within their own organizations, build role‑aligned capabilities, and enable leaders to scale sophisticated human‑AI collaboration as part of everyday work.

Media Contacts:

Olivia Weiss
oweiss@kpmg.com

Alyssa Mora
alyssamora@kpmg.com

Judie Kinonen
judie.kinonen@mccombs.utexas.edu