Yuval Noah Harari, the renowned historian and author of “Sapiens: A Brief History of Humankind,” delivered a stark warning at the World Economic Forum in Davos on Tuesday about how the world is fundamentally misunderstanding the timeline and implications of artificial intelligence. Speaking to global leaders and tech executives, Harari argued that the greatest danger isn’t the speed of AI development, but rather the complacency with which society is treating this transformative technology.
Harari emphasized a critical disconnect in how different stakeholders perceive “long-term” planning. “A lot of the conversations here in Davos, when they say ‘long term’ they mean like two years,” he explained. “When I mean long term, I think 200 years.” This temporal mismatch, he suggested, represents a fundamental failure to grasp the magnitude of AI’s potential impact on human civilization.
Drawing parallels to the Industrial Revolution, Harari noted that humanity has historically struggled to understand transformative technologies as they unfold. The deepest consequences of industrialization—including massive social, political, and geopolitical upheaval—took generations to fully emerge and were largely unpredictable in advance. “You can test for accidents,” he said. “But you cannot test the geopolitical implications or the cultural implications of the steam engine in a laboratory. It’s the same with AI.”
Perhaps most alarmingly, Harari warned that even if AI development were to halt immediately, the long-term effects of already-deployed systems remain unknowable. Using a vivid metaphor, he stated: “The stone has been thrown into the pool, but it just hit the water. We have no idea what waves have been created, even by the AIs that have been deployed a year or two ago.”
Harari joins a growing chorus of senior AI researchers and tech leaders warning about AI risks, from widespread job displacement to existential threats. His primary concern, however, isn’t uncertainty itself; it’s the complacency of those with the power to shape AI’s trajectory, whom he faulted for chasing short-term incentives rather than weighing long-term consequences. “Very smart and powerful people are worried about what their investors say in the next quarterly report,” he observed. “They think in terms of a few months, or a year or two.” Meanwhile, AI’s profound social consequences will unfold over centuries, whether humanity is prepared or not.
Key Quotes
A lot of the conversations here in Davos, when they say ‘long term’ they mean like two years. When I mean long term, I think 200 years.
Yuval Noah Harari highlighted the fundamental disconnect between how tech leaders and historians view AI’s timeline, emphasizing that true long-term thinking requires a generational perspective spanning centuries rather than quarters.
You can test for accidents. But you cannot test the geopolitical implications or the cultural implications of the steam engine in a laboratory. It’s the same with AI.
Harari drew parallels to the Industrial Revolution, explaining why AI’s most profound impacts cannot be predicted through controlled testing, as they will emerge through complex social and geopolitical dynamics over time.
The stone has been thrown into the pool, but it just hit the water. We have no idea what waves have been created, even by the AIs that have been deployed a year or two ago.
Using this vivid metaphor, Harari warned that even currently deployed AI systems will have ripple effects that remain unknowable, suggesting that the consequences are already in motion regardless of future development.
Very smart and powerful people are worried about what their investors say in the next quarterly report. They think in terms of a few months, or a year or two.
Harari criticized the short-term incentive structures driving AI development, pointing out that those with the most power to shape AI’s future are focused on immediate financial returns rather than long-term societal consequences.
Our Take
Harari’s intervention at Davos cuts to the heart of a critical paradox in AI development: we’re building technology with century-spanning consequences using decision-making frameworks designed for quarterly earnings calls. His 200-year perspective isn’t alarmism—it’s historical realism. The Industrial Revolution analogy is particularly instructive because it reminds us that transformative technologies don’t just change what we do; they fundamentally reshape social structures, power dynamics, and human relationships in ways that take generations to fully manifest. What makes Harari’s warning especially urgent is that unlike the steam engine, AI is being deployed globally at unprecedented speed, yet our governance mechanisms remain rooted in 20th-century thinking. The fact that this message needed to be delivered at Davos—the epicenter of short-term capitalist thinking—underscores the challenge. Until corporate incentive structures align with civilizational timescales, we’ll continue throwing stones into the pool without understanding the waves we’re creating.
Why This Matters
Harari’s warning at Davos represents a crucial intervention in the global AI discourse, highlighting a dangerous temporal mismatch between AI development timelines and human planning horizons. As AI companies race to deploy increasingly powerful systems, the focus on quarterly earnings and short-term competitive advantages may be blinding leaders to civilization-altering consequences that will unfold over generations. This matters because the decisions being made today by tech executives, investors, and policymakers will shape humanity’s trajectory for centuries, yet these choices are often driven by incentives measured in months or years. The Industrial Revolution comparison is particularly apt—that transformation took over a century to fully reshape society, creating unforeseen consequences from urbanization to global conflict. If AI proves even more transformative, as many experts believe, then our current governance frameworks and corporate incentive structures are woefully inadequate. Harari’s message challenges the AI industry to think beyond product launches and market share, urging a fundamental reconsideration of how we approach the most powerful technology in human history. For businesses, workers, and society at large, this suggests that the AI disruption we’re experiencing now is merely the beginning of a multi-generational transformation.
Related Stories
- TIME100 Talks: The Transformative Power of AI
- AI Pioneer Geoffrey Hinton Warns of Superintelligent AI by 2025
- The Dangers of AI Labor Displacement
- The Future of Work in an AI World
- How to Comply with Evolving AI Regulations
Source: https://www.businessinsider.com/sapiens-author-ai-timeline-warning-lack-of-concern-2026-1