A new survey reveals a significant challenge in workplace AI adoption: nearly 40% of AI’s productivity value is lost to rework and error correction, raising questions about the technology’s promised efficiency gains. The research, conducted by Hanover Research for HR and finance software provider Workday, surveyed 3,200 leaders and employees from companies with annual revenues exceeding $100 million.
Emilie Schario, chief operating officer at Kilo Code, a remote AI coding startup, experienced this firsthand when an AI tool fabricated details about her personal life while editing a blog post. The tool claimed she had blocked time to attend her daughter’s school play, despite Schario being the mother of three young boys, not a daughter. Reviewing the AI-generated revision for such errors took her nearly half as long as writing the original draft.
The Workday survey found that only 14% of employees consistently achieve clear, positive outcomes from AI technology. This productivity paradox stems from AI’s tendency to produce hallucinations and errors that require careful human review, cutting into the time savings the technology promises.
Training emerges as a critical gap: while 66% of leaders cite skills training as a top priority, only 37% of the employees facing the most AI rework report receiving adequate training. Furthermore, fewer than half of employee job descriptions have been updated to reflect AI capabilities, leaving workers to balance faster AI-driven output with unchanged expectations around accuracy, judgment, and risk.
The findings align with other recent studies questioning AI’s return on investment. A global survey of 2,000 CEOs by the IBM Institute for Business Value and Oxford Economics found that only 25% of AI efforts delivered expected returns. Similarly, an MIT study, based on publicly disclosed initiatives and executive interviews, found that 95% of organizations saw no measurable ROI from AI.
Despite these challenges, experts like Workday executive Aashna Kircher remain optimistic, suggesting that editing AI outputs will become less burdensome as the technology advances and workers receive better training on prompt writing and critical evaluation. Schario herself continues using AI tools, noting that for tasks she finds unpleasant like writing, the speed benefits outweigh the error-checking requirements—as long as users remain vigilant about reviewing outputs before publishing.
Key Quotes
“I don’t have a daughter, and there was no school play.”
Emilie Schario, COO of Kilo Code, discovered this fabricated detail when an AI tool edited her blog post about work-life balance. This example illustrates how AI hallucinations can create entirely false personal details, highlighting the critical need for human review of AI-generated content.
“We’re seeing a need for organizations to better enable their people to evaluate the output and make the right decisions in terms of how it’s used.”
Workday executive Aashna Kircher emphasizes that the solution isn’t abandoning AI but rather investing in training and critical thinking skills. This reflects a growing recognition that successful AI adoption depends as much on human capability development as on the technology itself.
“I think where people get themselves in trouble is that they take that output of the AI agent, they don’t review it closely, and they just kind of pass it on. At the end of the day, you are still responsible for your output, whether it was generated by an AI agent or not.”
Schario’s warning captures the accountability challenge of AI adoption. Despite experiencing AI hallucinations firsthand, she continues using the tools but stresses that users cannot abdicate responsibility for verifying accuracy—a principle many organizations have yet to formalize in their AI policies.
Our Take
This research reveals what many AI practitioners have quietly observed: the gap between AI’s promise and its practical reality remains substantial. The 40% value loss to rework isn’t just a productivity issue; it’s a trust issue that could slow enterprise AI adoption if not addressed systematically.

What’s particularly striking is the disconnect between leadership priorities and employee reality: leaders recognize training as critical, yet most workers aren’t receiving it. This suggests many organizations are treating AI as a plug-and-play solution rather than a transformative technology requiring cultural and operational changes.

The fabricated daughter story is more than amusing; it’s a warning about AI’s confidence in presenting false information. As these tools become more sophisticated and convincing, the human skill of critical evaluation becomes more, not less, essential. Organizations that invest now in training, updated processes, and clear accountability frameworks will likely see genuine productivity gains, while those chasing quick wins may find themselves trapped in an expensive cycle of AI-generated work and human cleanup.
Why This Matters
This research exposes a critical reality check for the AI revolution in the workplace. While AI tools promise unprecedented productivity gains, the hidden cost of error correction and rework threatens to undermine these benefits significantly. The finding that 40% of AI’s value evaporates through necessary corrections suggests that organizations may be overestimating their AI ROI and underinvesting in the human infrastructure needed to make AI truly effective.
The training gap is particularly concerning: companies are deploying powerful AI tools without adequately preparing employees to use them effectively or critically evaluate outputs. This creates a dangerous scenario where workers may either waste time over-checking AI work or, worse, trust fabricated information that could damage credibility and decision-making.
For businesses considering AI investments, this data underscores that technology alone isn’t the solution—successful AI adoption requires comprehensive change management, updated job descriptions, skills training, and clear protocols for human oversight. The broader implication is that we’re still in the early stages of learning how to productively integrate AI into workflows, and the path to genuine productivity gains may be longer and more complex than many anticipated.
Related Stories
- PwC Hosts ‘Prompting Parties’ to Train Employees on AI Usage
- The Future of Work in an AI World
- Business Leaders Share Top 3 AI Workforce Predictions for 2025
- Tailwind CEO Blames AI for 75% Engineering Layoffs, 80% Revenue Drop
- The Dangers of AI Labor Displacement
Source: https://www.businessinsider.com/workday-study-looks-at-time-spent-fixing-ai-errors-2026-1