Ilya Sutskever, co-founder and former chief scientist of OpenAI, has left the company to start a new venture focused on developing safe superintelligence: an AI system that would match or exceed human intelligence across a wide range of tasks. Sutskever believes such a system is inevitable and could pose existential risks if not developed safely. His new company, Safe Superintelligence Inc. (SSI), aims to create superintelligent AI that is aligned with human values and interests. Sutskever argues that current AI systems are narrow and lack the general reasoning abilities of humans, and that building safe superintelligence is crucial to mitigating potential risks. He plans to take a different approach from OpenAI, which has increasingly focused on releasing powerful commercial AI products. SSI will prioritize safety from the outset, insulating its research from short-term product pressures so that safety and capabilities can advance together. The company has reportedly raised $1 billion from investors including Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel.