The debate over how to measure successful AI adoption in enterprise settings has intensified following McKinsey & Company CEO Bob Sternfels’ announcement that the consulting firm now employs 25,000 AI agents alongside 40,000 human employees. However, Dan Priest, Chief AI Officer at PwC, has challenged this numbers-focused approach, arguing that counting agents is “probably the wrong measure” for evaluating AI deployment success.
Priest told Business Insider that the quality of AI agents matters far more than quantity. He advocates for measuring AI success through two key metrics: the number of agents that serve as true authorities on specific tasks (encouraging human adoption), and the number of humans actively using those agents to achieve prioritized business outcomes, such as transforming customer call center experiences.
At PwC, approximately 82% of employees actively use the firm’s AI tools, a metric Priest considers more meaningful than raw agent counts. The firm tracks how agents interact, their task completion accuracy, and whether they improve process speed, quality, and performance. Critically, humans remain accountable for reviewing agent output and providing feedback; certification, licensing, and empowerment stay with people rather than with the agents.
Priest revealed that both PwC and its clients initially adopted a “bottom-up” approach to AI implementation, attempting to crowdsource adoption strategies from employees when business leaders lacked clear answers. This approach delivered “fairly disappointing” returns on investment. The firm has since pivoted to a “top-down” strategy that focuses on fewer agents with deeper mastery of limited task sets.
This controlled approach involves carefully managing agent permissions for data access, task performance, and outcome production. These permissions are actively monitored, expire periodically, and require ongoing management. Over the past two years, AI agents have become the dominant framework for discussing corporate AI adoption, with Priest affirming that “agents are at a place now where they’re the best way to unlock value from AI.” However, he emphasizes that effective human utilization—not automation potential—remains the true measure of an agent’s value.
Key Quotes
“There was this emerging bragging right around the number of agents I had or I have in production. I think that’s probably the wrong measure.”
Dan Priest, PwC’s Chief AI Officer, directly challenges the industry trend of measuring AI success by agent count, specifically responding to McKinsey’s announcement of 25,000 AI agents.
“Agents are at a place now where they’re the best way to unlock value from AI.”
Despite his criticism of quantity-focused metrics, Priest affirms that AI agents represent the optimal approach for enterprise AI deployment, emphasizing their central role in the current AI adoption landscape.
“The human is still accountable. The humans are the ones who get certified. The humans are the ones who get licensed. The humans are the ones who get empowered.”
Priest emphasizes that despite AI automation, human oversight and accountability remain paramount, addressing concerns about AI replacing human judgment in professional services.
“That agent, I’ve given them permission to access certain data sets. I’ve given them permission to perform certain tasks. I’ve given them permission to produce certain outcomes. Those permissions are monitored, they expire, they’re managed.”
Describing PwC’s top-down approach, Priest outlines the controlled, governance-focused framework that replaced their disappointing bottom-up experimentation phase.
Our Take
This public disagreement between major consulting firms reveals an industry grappling with how to define and communicate AI success. Priest’s pushback against McKinsey’s agent-counting approach is significant because it challenges the metrics-driven narrative that has dominated AI adoption discussions. His focus on human utilization rates (82% at PwC) and business outcomes represents a more mature understanding of AI value creation. The admission that bottom-up approaches delivered “fairly disappointing” ROI is particularly candid, and it is valuable for enterprises navigating their own AI journeys. This debate will likely accelerate the development of standardized AI success metrics beyond vanity numbers, pushing the industry toward outcome-based evaluation frameworks that balance automation potential with human oversight and governance requirements.
Why This Matters
This debate between two major consulting firms highlights a critical inflection point in enterprise AI adoption. As companies rush to demonstrate AI leadership, the industry faces a fundamental question: should success be measured by the number of AI agents deployed or by meaningful business outcomes and human engagement?
Priest’s critique of McKinsey’s approach signals growing maturity in the AI implementation space, moving beyond “AI theater” toward substantive value creation. His emphasis on human accountability and controlled, top-down deployment addresses real concerns about AI governance, data security, and quality control that have plagued early adoption efforts.
The shift from bottom-up experimentation to strategic, top-down implementation reflects lessons learned from disappointing ROI in early AI projects. For businesses investing heavily in AI transformation, this suggests that focused, well-governed agent deployment with high human adoption rates delivers better results than proliferating numerous agents without clear purpose or oversight. This framework will likely influence how enterprises evaluate AI vendors and structure their own AI strategies moving forward.
Related Stories
- PwC Hosts ‘Prompting Parties’ to Train Employees on AI Usage
- CEOs Express Insecurity About AI Strategy and Implementation
- Business Leaders Share Top 3 AI Workforce Predictions for 2025
- The Future of Work in an AI World
- JPMorgan Replaces Proxy Advisors with AI Platform for Voting
Source: https://www.businessinsider.com/ai-agents-consulting-firms-mckinsey-pwc-2026-1