Geoffrey Hinton, widely recognized as the “Godfather of AI” for his pioneering work on neural networks that power modern artificial intelligence, has issued stark warnings about the technology he helped create. In a BBC Newsnight interview released Tuesday and recorded earlier this month, Hinton expressed deep sadness about the world’s failure to take AI’s growing dangers seriously.
“It makes me very sad that I put my life into developing this stuff and that it’s now extremely dangerous and people aren’t taking the dangers seriously enough,” Hinton told the BBC. The computer scientist, who has transitioned from AI pioneer to one of the field’s most vocal critics, warns that humanity is approaching a critical juncture as researchers edge closer to building machines more intelligent than humans.
Hinton emphasized that many experts believe AI will surpass human intelligence within the next 20 years, and in some areas, already has. He warned that once AI systems become sufficiently advanced, controlling them may prove far more difficult than anticipated. “The idea that you could just turn it off won’t work,” Hinton explained, suggesting that advanced AI could potentially persuade humans not to shut it down.
The AI researcher outlined several major concerns, including widespread job losses, social unrest, and the possibility that AI could eventually outsmart and potentially harm humanity. “If we create them so they don’t care about us,” Hinton warned, “they will probably wipe us out.” However, he stressed that catastrophic outcomes are not inevitable, noting that risks depend heavily on how advanced systems are designed and governed.
Hinton expressed particular concern that AI is being developed during a period of weakening global cooperation and rising authoritarianism, making meaningful regulation increasingly difficult to achieve. He compared the urgent need for AI governance to international agreements on chemical and nuclear weapons.
Despite his warnings, Hinton said he would not undo his work, acknowledging that “it would have been developed without me.” He remains hopeful about AI’s potential benefits in education and medicine, citing AI tutors and advances in medical imaging as promising applications. However, he emphasized that humanity must urgently invest in research on how to peacefully coexist with intelligent systems before it’s too late.
Key Quotes
It makes me very sad that I put my life into developing this stuff and that it’s now extremely dangerous and people aren’t taking the dangers seriously enough
Geoffrey Hinton expressed his profound regret and concern about how the AI technology he pioneered is being developed without adequate safety measures, highlighting the emotional toll of watching his life’s work potentially threaten humanity.
We’ve never been in this situation before of being able to produce things more intelligent than ourselves
Hinton emphasized the unprecedented nature of the AI challenge, noting that humanity has no historical precedent for managing entities that could surpass human intelligence, making current approaches to AI development potentially inadequate.
The idea that you could just turn it off won’t work
Hinton warned that advanced AI systems might be able to persuade humans not to shut them down, challenging the common assumption that humans will always maintain control over AI systems through simple off-switches.
If we create them so they don’t care about us, they will probably wipe us out
This stark warning from the AI godfather highlights the critical importance of AI alignment research—ensuring that advanced AI systems are designed with human interests and values at their core, rather than pursuing goals indifferent or hostile to humanity.
Our Take
Hinton’s warnings represent a crucial inflection point in the AI discourse. When one of the field’s founding fathers expresses such profound concern, it demands attention from industry leaders, policymakers, and the public. What’s particularly striking is his emphasis on the research gap: we’re racing to build superintelligent systems without adequately studying how to coexist with them. His point about weakening international cooperation is especially troubling—AI safety requires global coordination, yet geopolitical tensions are pushing nations toward competitive rather than collaborative AI development. The fact that Hinton wouldn’t undo his work, while simultaneously warning of existential risks, captures the paradox of transformative technology: its development may be inevitable, but its outcomes are not. This underscores the urgent need for proactive safety research, robust governance frameworks, and international cooperation before AI capabilities outpace our ability to control them.
Why This Matters
This warning from Geoffrey Hinton carries exceptional weight given his status as one of AI’s founding figures and his 2024 Nobel Prize in Physics for work on neural networks. His transition from AI pioneer to prominent critic signals a growing consensus among leading researchers that the technology’s development is outpacing safety measures and governance frameworks.
Hinton’s concerns about AI surpassing human intelligence within 20 years align with warnings from other AI leaders and underscore the urgency of establishing robust safety protocols and international cooperation. His comparison to nuclear weapons regulation highlights the existential nature of the risks involved.
The timing is particularly significant as AI capabilities are accelerating rapidly with systems like GPT-4, Claude, and others demonstrating increasingly sophisticated reasoning abilities. Hinton’s warning that advanced AI might resist being shut down raises fundamental questions about control and alignment that the industry has yet to adequately address. For businesses, policymakers, and society at large, this represents a call to action: the window for establishing safe AI development practices may be narrowing faster than previously thought.
Related Stories
- AI Pioneer Geoffrey Hinton Warns of Superintelligent AI by 2025
- OpenAI Lost Nearly Half of Its AI Safety Team, Ex-Researcher Says
- The Dangers of AI Labor Displacement
- The Future of Work in an AI World
- How to Comply with Evolving AI Regulations
Source: https://www.businessinsider.com/godfather-ai-geoffrey-hinton-on-ai-sad-dangerous-2026-1