Key OpenAI Researchers Resign to Focus on Ensuring Advanced AI Systems Remain Safe for Humanity

Two prominent researchers, Jan Leike and Ilya Sutskever, have resigned from OpenAI, where they jointly led the company's Superalignment team. The team was formed in 2023 to develop techniques for "superalignment": methods for ensuring that future AI systems far more capable than humans still reliably pursue intended goals and respect human preferences. Both researchers have argued that as AI systems approach superintelligence, robust methods for aligning their behavior with human values will be crucial to preventing potentially catastrophic outcomes. Following their departures, OpenAI disbanded the Superalignment team. Leike went on to continue alignment research at Anthropic, an AI safety company founded in 2021 by former OpenAI employees, while Sutskever later co-founded a new venture, Safe Superintelligence Inc., dedicated to building safe superintelligent AI.

Source: https://www.businessinsider.com/jan-leike-ilya-sutskever-resignations-superalignment-openai-superintelligence-safe-humanity-2024-5