Geoffrey Hinton, often referred to as the ‘godfather of AI,’ has issued a stark warning about the potential emergence of superintelligent AI as early as 2025. In a recent interview with Time magazine, Hinton expressed concern that AI systems could surpass human intelligence within the next year, posing existential risks to humanity. He pointed in particular to AI models that are already showing signs of reasoning capabilities, which he argued could rapidly evolve into superintelligence.

Hinton emphasized that current AI systems are becoming increasingly sophisticated at processing and understanding information, and that they learn far faster than humans. Once AI systems become smarter than people, he warned, they could improve themselves at an exponential rate, potentially taking control of critical systems and infrastructure. He also discussed the more immediate risks of AI, including its potential to spread misinformation and manipulate public opinion, particularly during elections.

Hinton’s warnings carry significant weight given his background as a former Google researcher and his foundational contributions to deep learning. He suggested that AI safety measures and regulation are crucial but may not be sufficient to prevent these risks. The article concludes with Hinton’s call for more focused attention on AI safety and for international cooperation to address these challenges before they become unmanageable.
Source: https://www.businessinsider.com/ai-godfather-geoffrey-hinton-superintelligence-risk-takeover-2025-4