AI Expert Warns: Large Language Models Are 'Anti-Intelligence'

John Nosta, an innovation theorist and founder of NostaLab, is challenging the conventional understanding of artificial intelligence with a provocative claim: AI doesn’t think like humans at all—it’s actually “anti-intelligence.” In an interview with Business Insider, Nosta argued that large language models (LLMs) operate in ways fundamentally antithetical to human cognition.

At the core of Nosta’s argument is the assertion that AI doesn’t understand anything in the human sense. When humans think about an object like an apple, they contextualize it within space, time, memory, culture, and lived experience. LLMs, however, represent words as mathematical vectors in hyperdimensional space, searching for statistical patterns rather than building genuine comprehension. “An apple doesn’t exist as an apple,” Nosta explained. “It exists as a vector in a hyperdimensional space.” This means AI outputs are optimized for coherence rather than comprehension—producing responses that fit language patterns without actual reasoning.
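Nosta’s “apple as a vector” point can be made concrete with a toy sketch. The snippet below uses invented 4-dimensional vectors (real models learn embeddings with thousands of dimensions) to show that, inside an embedding space, “meaning” reduces to geometric proximity between number lists; nothing about taste, memory, or culture is represented.

```python
import math

# Hypothetical toy embeddings, invented for illustration only --
# not taken from any real model.
embeddings = {
    "apple": [0.9, 0.1, 0.3, 0.0],
    "pear":  [0.8, 0.2, 0.4, 0.1],
    "truck": [0.0, 0.9, 0.1, 0.8],
}

def cosine_similarity(u, v):
    """Angle-based similarity between two vectors (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# "apple" ends up closer to "pear" than to "truck" purely by geometry:
# the model compares directions in space, not experiences of apples.
sim_pear = cosine_similarity(embeddings["apple"], embeddings["pear"])
sim_truck = cosine_similarity(embeddings["apple"], embeddings["truck"])
assert sim_pear > sim_truck
```

This is the sense in which an apple “exists as a vector”: the system can rank which words pattern together statistically, but the comparison is arithmetic over coordinates, not comprehension.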

Nosta believes AI is quietly reshaping how people think, particularly in workplace settings. Human cognition typically follows a path from confusion through exploration to confidence. AI flips this sequence entirely: “With AI, we start with structure,” he said. “We start with coherence, fluency, a sense of completeness, and afterwards we find confidence.” This inversion creates a powerful illusion where polished, authoritative-sounding AI answers are accepted immediately without the critical questioning that drives genuine understanding.

The real danger, according to Nosta, isn’t AI’s computational superiority—that’s inevitable. Rather, it’s how easily people outsource the most valuable parts of thinking. “It’s the stumbles, it’s the roughness, it’s the friction that allows us to get to observations and hypotheses that really develop who we are,” he warned. As companies push employees to go “all in” on AI for writing and decision-making, speed and fluency are being mistaken for understanding.

This concern extends beyond theory. Oxford University Press researchers found AI makes students faster and more fluent while stripping away depth. The Work AI Institute reported that generative AI creates an “illusion of expertise,” making users feel more productive even as underlying skills erode. Mehdi Paryavi, CEO of the International Data Center Authority, described this phenomenon as “quiet cognitive erosion,” warning that over-reliance on AI can undermine human confidence and capability.

Key Quotes

“My conclusion is that artificial intelligence is antithetical to human cognition. I even call it anti-intelligence.”

John Nosta, founder of the innovation think tank NostaLab, makes his central argument: AI operates in ways fundamentally opposite to human thinking, challenging the notion that AI represents a form of intelligence comparable to human cognition.

“An apple doesn’t exist as an apple. It exists as a vector in a hyperdimensional space.”

Nosta explains how large language models represent concepts mathematically rather than contextually, highlighting the fundamental difference between AI pattern-matching and human understanding rooted in experience and meaning.

“With AI, we start with structure. We start with coherence, fluency, a sense of completeness, and afterwards we find confidence.”

Nosta describes how AI inverts the natural human cognitive process, delivering polished answers first rather than allowing the exploratory thinking that builds genuine understanding—a shift he considers dangerous for human intellectual development.

“If you come to believe that AI writes better than you and thinks smarter than you, you will lose your own confidence in yourself.”

Mehdi Paryavi, CEO of the International Data Center Authority, warns about the psychological impact of AI over-reliance, describing how excessive dependence on AI tools can erode human self-confidence and capability in what he calls “quiet cognitive erosion.”

Our Take

Nosta’s “anti-intelligence” framework is a crucial corrective to the breathless AI hype dominating current discourse. The distinction between pattern-matching coherence and genuine comprehension isn’t semantic—it’s fundamental to understanding AI’s limitations and risks. What’s particularly insightful is his focus on the cognitive process inversion: by delivering polished answers first, AI eliminates the productive struggle that builds expertise. This echoes concerns in educational psychology about “desirable difficulties”—challenges that slow learning initially but deepen retention and understanding. The convergence of warnings from Oxford researchers, the Work AI Institute, and industry leaders suggests we’re witnessing early signs of a cognitive crisis that could reshape knowledge work. Organizations need to move beyond productivity metrics and consider whether their AI strategies are building or eroding genuine human capability. The solution isn’t rejecting AI, but intentionally designing friction back into AI-augmented workflows—preserving the questioning, exploration, and uncertainty that drive innovation.

Why This Matters

This analysis matters because it challenges the prevailing narrative that AI is simply a productivity tool or thinking partner. Nosta’s “anti-intelligence” framework reveals a fundamental mismatch between how AI operates and how humans develop genuine understanding. As organizations rapidly integrate AI into workflows—from writing to analysis to decision-making—the risk isn’t just automation replacing jobs, but the subtle erosion of critical thinking skills that define human expertise.

The convergence of warnings from multiple sources—academic researchers, industry advisors, and innovation theorists—suggests this isn’t isolated concern but an emerging pattern worthy of serious attention. The “illusion of expertise” phenomenon is particularly troubling for knowledge workers who may feel more productive while actually becoming less capable. This has profound implications for education, workforce development, and organizational strategy. Companies rushing to maximize AI adoption may inadvertently create workforces that are faster but less thoughtful, more fluent but less innovative. The challenge ahead isn’t choosing between humans and AI, but designing AI integration that preserves the cognitive friction essential to genuine learning and innovation.

Source: https://www.businessinsider.com/ai-human-intelligence-impact-at-work-2026-1