OpenAI is facing internal turbulence as the company navigates a controversial transition from its nonprofit roots to a for-profit structure, sparking debate among current and former employees about the company’s commitment to artificial general intelligence (AGI) safety.
The upheaval began last Wednesday, when OpenAI Chief Technology Officer Mira Murati, along with senior researchers Barret Zoph and Bob McGrew, announced their resignations. The following day, CEO Sam Altman confirmed that OpenAI is considering restructuring as a for-profit benefit corporation, moving away from its nonprofit origins. The restructuring coincides with efforts to raise billions of dollars in new investment.
While OpenAI hasn’t issued a formal statement about the changes, Altman addressed the topic at Italian Tech Week, describing the restructuring as part of “what it takes to get to our next stage.” However, the move has raised concerns among departing employees who believe Altman is prioritizing profit over safety.
Former policy researcher Gretchen Krueger, who joined OpenAI in 2019 specifically because of its nonprofit governance structure and profit caps, expressed disappointment on X (formerly Twitter). She stated that the transition to a public benefit corporation “feels like a step in the wrong direction” and argued that as one of the biggest developers of AGI, OpenAI needs “stronger mission locks” to ensure its commitment to developing AI that benefits all humanity.
The concerns echo earlier warnings from Jan Leike, OpenAI’s former safety leader, who resigned in May citing a “breaking point” with leadership over the company’s core priorities. Leike had initially believed OpenAI would be “the best place in the world” to conduct safety research but became disillusioned with the company’s direction.
Current employees, however, are defending the company publicly. Noam Brown, a researcher working on OpenAI’s new o1 model, pushed back against claims that the company has deprioritized research, stating “it’s the opposite.” The o1 model, released earlier this month, is part of a new series of AI systems “designed to spend more time thinking before they respond.”
Mark Chen, senior vice president of research, also reaffirmed his commitment, writing that OpenAI is “the best place to work on AI” and that it’s “never wise to bet against us.” The diverging perspectives highlight the tension within OpenAI as it balances commercial ambitions with its founding mission to develop safe AGI.
Key Quotes
“This feels like a step in the wrong direction, when what we need is multiple steps in the right direction.”
Former OpenAI policy researcher Gretchen Krueger expressed this concern on X about the company’s transition to a for-profit benefit corporation. Her statement reflects broader worries among departing employees that OpenAI is abandoning the nonprofit governance structure and profit caps that originally attracted safety-conscious researchers to the company.
“Those of us at @OpenAI working on o1 find it strange to hear outsiders claim that OpenAI has deprioritized research. I promise you all, it’s the opposite.”
Noam Brown, a researcher at OpenAI, defended the company on X against claims that it has shifted focus away from research. His statement represents the perspective of current employees who maintain that OpenAI remains committed to its technical mission, despite the organizational changes and executive departures.
“I truly believe that OpenAI is the best place to work on AI, and I’ve been through enough ups and downs to know it’s never wise to bet against us.”
Mark Chen, senior vice president of research at OpenAI, reaffirmed his commitment to the company amid the turmoil. His statement suggests that veteran employees who have weathered previous controversies remain confident in OpenAI’s direction, contrasting sharply with the concerns expressed by departing staff.
“By the time he left, however, he said he had reached a ‘breaking point’ with OpenAI’s leadership over the company’s core priorities.”
This describes former safety leader Jan Leike’s resignation in May, which foreshadowed the current concerns about OpenAI’s priorities. Leike’s departure was an early warning sign that internal tensions over safety versus commercialization were reaching critical levels within the organization.
Our Take
The schism at OpenAI reveals a predictable but troubling pattern in AI development: as companies approach breakthrough capabilities and massive valuations, mission-driven structures give way to profit-seeking imperatives. What makes this particularly significant is OpenAI’s unique position—it essentially created the current AI boom with ChatGPT and now finds itself torn between its founding principles and market realities.
The timing is revealing: restructuring coinciding with fundraising suggests investor pressure is driving organizational changes. The departure of safety-focused leaders like Leike and Murati, combined with Krueger’s concerns about weakening “mission locks,” indicates that internal safeguards are eroding precisely when they’re most needed—as OpenAI pursues AGI.
The defense from current employees like Brown and Chen, while expected, doesn’t address the core concern: whether commercial pressures will compromise safety research. The real test will be whether OpenAI maintains adequate safety resources and independent oversight as it scales. This moment may define whether the AI industry can self-regulate or whether external governance becomes necessary.
Why This Matters
This internal conflict at OpenAI represents a critical inflection point for the entire AI industry. As the company behind ChatGPT and one of the leading developers of artificial general intelligence, OpenAI’s governance structure and safety priorities have implications far beyond its own operations.
The debate highlights a fundamental tension in AI development: balancing rapid commercialization with safety considerations. OpenAI’s shift toward a for-profit model while pursuing billions in investment raises questions about whether market pressures will compromise safety research and responsible AI development.
The exodus of senior leadership, particularly safety-focused researchers, sends concerning signals about the company’s internal culture and priorities. If top talent believes safety is being deprioritized, it could accelerate risks associated with AGI development. This matters because OpenAI is racing toward artificial general intelligence, a technology that could fundamentally transform society.
For the broader AI industry, OpenAI’s restructuring may set a precedent for how AI companies balance mission-driven goals with commercial viability. The outcome will influence investor expectations, regulatory approaches, and public trust in AI development.
Recommended Reading
For those interested in learning more about artificial intelligence, machine learning, and effective AI communication, here are some excellent resources:
Related Stories
- OpenAI CEO Sam Altman Hints at Potential Restructuring in 2024
- OpenAI’s Valuation Soars as AI Race Heats Up
- Sam Altman’s Bold AI Predictions: AGI, Jobs, and the Future by 2025
- Elon Musk Drops Lawsuit Against ChatGPT Maker OpenAI, No Explanation
- Elon Musk Warns of Potential Apple Ban on OpenAI’s ChatGPT