Dive Brief:
- While daily use of AI is widespread, there are measurable behaviors that separate routine use of the technology from true, sophisticated human-AI interaction. That’s according to a joint study from KPMG LLP and the University of Texas at Austin, which analyzed 1.4 million workplace AI interactions from 2,500 employees.
- The behaviors can be turned into teachable benchmarks that, when scaled, can close the AI impact gap by focusing on targeted training and workflow integration rather than tool deployment alone, according to the report.
- “The gap between routine and sophisticated AI use is not hidden in prompts themselves, but in patterns of engagement,” said Anu Puvvada, KPMG Studio Leader. “Once those patterns are visible, they become possible to recognize, discuss and scale.”
Dive Insight:
The most sophisticated AI users are not defined by technical expertise or frequency of use but by how they collaborate with AI, which includes iterating, framing problems clearly and guiding outputs over time, according to the report.
These users treat AI not as a short-term productivity tool but as a longer-term “thinking partner.”
The potential impact of turning these behaviors into teachable metrics is significant: the report found that only 5% of workers consistently use AI in ways that materially improve the quality of their work.
The findings also challenge a common assumption that improving AI outcomes is primarily a matter of better prompting or broader access to tools. Instead, the research suggests effective human-AI collaboration stems from how employees integrate AI into their day-to-day workflows.
Sophisticated use correlated strongly with four signals: how often users return to AI, how persistently they refine outputs, how ambitious their initial requests are and how intentionally they select tools or models.
“We were looking for people who had figured out how to think with the model, not just ask it questions,” said Jaime Schmidt, accounting professor at UT Austin.
KPMG has already started to apply these insights internally, launching companywide training programs to begin reshaping behaviors.
The organization has embedded these practices into its learning ecosystem through role-based training, playbooks and peer-led networks aimed at reinforcing what it calls “AI-first” ways of working.
“We realized early on that access to AI alone doesn’t drive better outcomes,” said Steve Chase, global head of AI and digital innovation at KPMG. “That’s why we put a deliberate set of AI‑enabled tools, training programs, and routines in place to make effective behaviors visible and expected, and to teach better problem framing, stronger supervision of AI, and purposeful iteration.”
For CIOs and IT leaders, the findings demonstrate that AI success relies less on scaling up new tools and more on refining how employees engage with them.
Learning how to best use the tools includes defining what “good” AI use looks like, embedding those behaviors into training and performance expectations, and creating feedback loops that reinforce more sophisticated collaboration over time.