Innovation theorist John Nosta is raising alarm bells about a hidden danger of workplace AI adoption that goes beyond typical automation concerns. While AI is marketed as a pure performance enhancer that helps workers write faster, analyze better, and perform at higher levels, Nosta warns of what he calls the “AI rebound effect” — a phenomenon where workers’ baseline skills actually deteriorate after relying on AI assistance.
Nosta, founder of NostaLab, an innovation and tech think tank, explained the concept using a medical example: a doctor performing a colonoscopy with AI assistance becomes better at spotting small polyps when the technology is scanning alongside them. However, when that same doctor performs the procedure the next day without AI support, their skill level does not merely return to its starting point; it falls below their original baseline, worse than before they used the technology.
The problem extends beyond simple dependency to what Nosta describes as cognitive regression. Workers don’t just become reliant on AI; they actually lose competence in their core skills. Even more concerning is that AI creates an “overinflated sense of ability” where workers feel more capable even as their independent judgment weakens — a phenomenon Nosta calls “really dangerous,” especially in high-stakes environments.
This concern is echoed by prominent researchers including Rebecca Hinds, head of the Work AI Institute, and Nobel Prize-winning physicist Saul Perlmutter, who have warned that AI creates an illusion of understanding while undermining genuine judgment. An Oxford University Press report from October found that AI speeds students up but makes their thinking shallower, while Professor Kimberley Hardcastle of Northumbria University warned of the “atrophy of epistemic vigilance,” the erosion of our ability to independently verify and construct knowledge without algorithmic assistance.
Nosta describes a growing “cognitive codependent relationship,” particularly among younger workers entering AI-saturated workplaces. His prescription for avoiding “cognitive atrophy” involves maintaining what he calls “cognitive grit” — intentionally preserving friction in work processes and using AI to learn rather than to bypass learning altogether. “We have to sustain a level of cognitive risk,” Nosta emphasized, warning that the biggest threat in the AI era may not be smarter machines, but humans slowly forgetting how to think without them. “For the first time in history,” he concluded, “human cognition is on the obsolescence chopping block.”
Key Quotes
“The skill set actually falls below baseline.”
John Nosta, innovation theorist and founder of NostaLab, describing the AI rebound effect, in which workers’ abilities do not merely return to baseline after they become dependent on AI assistance but deteriorate below their original skill level, leaving them worse off than before they used the technology.
“We actually have an overinflated sense of ability through AI.”
Nosta warning about how AI distorts workers’ self-assessment, making them feel more capable even as their independent skills weaken. He described this false confidence as “really dangerous,” particularly in high-stakes professional environments where overestimating one’s abilities can lead to serious errors.
“For the first time in history, human cognition is on the obsolescence chopping block.”
Nosta’s stark warning about the unprecedented nature of the AI threat, suggesting that unlike previous technological revolutions that replaced physical labor, AI represents the first technology that could make human thinking itself obsolete if we don’t deliberately preserve our cognitive capabilities.
“We have to sustain a level of cognitive risk.”
Nosta’s prescription for avoiding cognitive atrophy, advocating for intentionally maintaining friction in work processes and using AI as a learning tool rather than a substitute for thinking. This represents a deliberate strategy to preserve what he calls “cognitive grit” in an AI-saturated workplace.
Our Take
The AI rebound effect reveals a paradox at the heart of workplace AI adoption: the tools designed to make us better may actually be making us worse. This isn’t just about automation anxiety — it’s about the fundamental transformation of human capability. What makes this particularly insidious is the confidence gap: workers feel more competent precisely when they’re becoming less so. Organizations implementing AI need to recognize that optimization and capability-building can be opposing forces. The solution isn’t rejecting AI, but redesigning how we integrate it — treating AI as a teaching assistant rather than a replacement for thinking. This requires deliberate friction, regular “AI-free” practice, and assessment systems that measure independent capability, not just AI-assisted output. The stakes are existential: we’re not just automating tasks, we’re potentially automating away the cognitive muscles that make us human.
Why This Matters
This analysis matters because it challenges the dominant narrative around workplace AI adoption and reveals a critical blind spot in how organizations are implementing these tools. While most AI discussions focus on job displacement or productivity gains, the AI rebound effect represents a more insidious threat: the gradual erosion of human expertise and judgment that happens invisibly over time.
For businesses rushing to integrate AI across their operations, this research suggests they may be creating long-term vulnerabilities even as they achieve short-term efficiency gains. Workers who become dependent on AI assistance may lose the foundational skills needed to function when systems fail or in situations requiring independent judgment. This has profound implications for workforce development, training programs, and organizational resilience.
The phenomenon is particularly concerning for younger workers who may never develop strong baseline skills if they rely on AI from the start of their careers. As AI becomes ubiquitous in knowledge work, maintaining human cognitive capabilities may require deliberate organizational strategies — not just adopting AI, but thoughtfully designing how it’s used to preserve rather than replace human thinking. This represents a fundamental shift in how we must approach AI implementation in the workplace.
Related Stories
- The Future of Work in an AI World
- The Dangers of AI Labor Displacement
- PwC Hosts ‘Prompting Parties’ to Train Employees on AI Usage
- Business Leaders Share Top 3 AI Workforce Predictions for 2025
- Microsoft AI CEO’s Career Advice for Young People in the AI Era
Source: https://www.businessinsider.com/ai-can-make-you-better-then-worse-at-your-job-2026-1