Geoffrey Hinton, 'Godfather of AI,' Says He Is Proud His Student Fired Sam Altman

Geoffrey Hinton, the newly minted 2024 Nobel Prize winner in physics and renowned “godfather of AI,” has publicly expressed pride in his former student Ilya Sutskever for his role in the brief ouster of Sam Altman from OpenAI in November 2023. Speaking at a press conference following his Nobel Prize announcement, Hinton made pointed comments about OpenAI’s shift from safety-focused research to profit-driven operations under Altman’s leadership.

Hinton, who supervised Sutskever’s doctorate in computer science at the University of Toronto, completed in 2013, stated: “I’m particularly proud of the fact that one of my students fired Sam Altman.” He elaborated on his concerns about OpenAI’s trajectory, noting that the organization was originally established with a strong emphasis on safety and the responsible development of artificial general intelligence (AGI). However, Hinton criticized Altman for being “much less concerned with safety than with profits,” calling this shift “unfortunate.”

The comments reference the dramatic events of November 17, 2023, when OpenAI’s board removed Altman as CEO, citing that he “was not consistently candid in his communications with the board.” Sutskever, who cofounded OpenAI with Altman and served as chief scientist, was instrumental in the board’s decision. However, he later expressed regret and joined other employees in calling for Altman’s reinstatement, which ultimately occurred.

Sutskever departed OpenAI in May 2024 and announced in June that he was launching Safe Superintelligence Inc., a new AI company focused on safety. Hinton praised Sutskever’s foresight in recognizing both AI’s potential and its dangers, noting that Sutskever understood the technology’s capabilities before Hinton himself did.

Hinton has been a vocal critic of AI’s potential dangers, warning in interviews that AI systems could eventually manipulate humans by learning from vast amounts of literature and political strategy. He estimated it could take between five and 20 years before AI becomes a real threat, though he acknowledged the threat might not materialize.

The controversy surrounding Altman’s leadership extends beyond Hinton’s criticism. Elon Musk, Altman’s fellow OpenAI cofounder who left the board in 2018, has repeatedly criticized the company’s transformation from an open-source nonprofit into what he calls “a closed source, maximum-profit company effectively controlled by Microsoft.” OpenAI is currently set to transition into a for-profit company within the next two years, further fueling debates about the organization’s mission and priorities.

Key Quotes

“I’m particularly proud of the fact that one of my students fired Sam Altman.”

Geoffrey Hinton made this statement at a press conference following his 2024 Nobel Prize announcement, referring to Ilya Sutskever’s role in OpenAI’s board decision to remove Altman as CEO. The comment underscores Hinton’s support for prioritizing AI safety over commercial interests.

“So OpenAI was set up with a big emphasis on safety. Its primary objective was to develop artificial general intelligence and ensure that it was safe. And over time, it turned out that Sam Altman was much less concerned with safety than with profits. And I think that’s unfortunate.”

Hinton directly criticized OpenAI’s shift under Altman’s leadership, highlighting what he sees as a departure from the company’s founding mission. This statement from the ‘godfather of AI’ carries significant weight in debates about responsible AI development.

“They will be able to manipulate people, right? And these will be very good at convincing people because they’ll have learned from all the novels that were ever written — all the books by Machiavelli, all the political connivances, they’ll know all that stuff.”

During a CBS ‘60 Minutes’ interview in October 2023, Hinton warned about AI’s potential to manipulate humans by learning from vast amounts of literature and strategic texts. This reflects his broader concerns about AI safety that motivated his support for Sutskever’s actions.

“OpenAI was created as an open source (which is why I named it ‘Open’ AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.”

Elon Musk, an OpenAI cofounder who left its board in 2018, criticized the company’s transformation in a February 2023 post on X. His comments align with Hinton’s concerns and demonstrate that criticism of OpenAI’s direction comes from multiple influential figures in the AI community.

Our Take

Hinton’s public support for Sutskever’s actions represents a significant moment in AI governance debates. When the field’s most respected pioneer explicitly endorses prioritizing safety over profits, it signals a potential inflection point for the industry. The fact that Sutskever founded Safe Superintelligence Inc. after leaving OpenAI suggests that leading researchers believe existing institutions have become too compromised by commercial pressures to adequately address safety concerns.

This controversy also reveals the inherent instability of hybrid organizational models in AI development. OpenAI’s attempted balance between a nonprofit mission and for-profit operations created structural tensions that erupted in Altman’s firing and subsequent reinstatement. As AI systems grow more powerful, these governance questions will only become more critical. The industry may need entirely new organizational frameworks that can sustain safety commitments while remaining competitive and innovative.

Why This Matters

This story highlights the fundamental tension at the heart of AI development: the balance between rapid commercialization and safety considerations. Hinton’s Nobel Prize-winning credibility lends significant weight to his criticism of OpenAI’s direction under Altman, potentially influencing public discourse and regulatory approaches to AI governance.

The conflict between OpenAI’s founding mission as a safety-focused nonprofit and its evolution toward a profit-driven model reflects broader industry trends. As AI capabilities advance toward artificial general intelligence, the stakes of prioritizing commercial interests over safety protocols become increasingly consequential. Hinton’s warnings about AI’s potential to manipulate humans underscore the urgency of these concerns.

For businesses and policymakers, this serves as a cautionary tale about mission drift in technology companies. The fact that Sutskever—one of AI’s leading researchers—left to start a safety-focused company suggests deep concerns within the AI research community about current development trajectories. This could accelerate calls for AI regulation and influence how future AI companies structure their governance to maintain accountability and safety commitments.
Source: https://www.businessinsider.com/geoffrey-hinton-proud-student-fired-sam-altman-openai-2024-10