OpenAI Safety Researcher Rosie Campbell Quits Over AGI Concerns

OpenAI has lost another key safety researcher, as Rosie Campbell announced her resignation from the company in a Substack post on Saturday. Campbell, a policy researcher who focused on critical AI safety issues, cited the October departure of Miles Brundage and the subsequent dissolution of the AGI Readiness team as primary factors in her decision to leave.

The AGI Readiness team played a crucial role at OpenAI, advising the company on humanity’s capacity to safely manage artificial general intelligence (AGI)—a theoretical form of AI that could match or exceed human intelligence. Following Brundage’s exit, the team was disbanded and its members were redistributed across different departments within the company.

In her departure announcement, Campbell echoed Brundage’s reasoning, expressing a desire for greater freedom to address industry-wide challenges. “I’ve always been strongly driven by the mission of ensuring safe and beneficial AGI, and after Miles’s departure and the dissolution of the AGI Readiness team, I believe I can pursue this more effectively externally,” she wrote.

During her tenure at OpenAI, Campbell worked on frontier policy issues including dangerous capability evaluations, digital sentience, and governing agentic systems. She praised the company for supporting “neglected, slightly weird kind of policy research” that becomes critical when considering the possibility of transformative AI.

However, Campbell revealed she has “been unsettled by some of the shifts” in OpenAI’s trajectory over the past year. The company’s announcement in September that it would transition from its nonprofit structure to a for-profit model, nearly a decade after it launched as a nonprofit dedicated to creating AGI, has raised concerns among former employees about whether the company is compromising its original mission.

CEO Sam Altman has defended these changes, arguing that OpenAI needs “vastly more capital” than it could attract as a nonprofit to achieve its goals. He has also stated that setting AI safety standards shouldn’t be OpenAI’s sole responsibility, telling Fox News Sunday that “it should be a question for society.”

Campbell’s departure continues a troubling trend for OpenAI. Since Altman’s brief ousting last year, several high-profile researchers have left the company, including cofounder Ilya Sutskever, Jan Leike, and John Schulman, many of them voicing concerns about OpenAI’s commitment to safety research.

Key Quotes

I’ve always been strongly driven by the mission of ensuring safe and beneficial AGI and after Miles’s departure and the dissolution of the AGI Readiness team, I believe I can pursue this more effectively externally

Rosie Campbell explained her primary reason for leaving OpenAI, suggesting that the company’s restructuring has made it more difficult to pursue meaningful safety research internally. This statement underscores concerns about OpenAI’s evolving priorities.

During my time here I’ve worked on frontier policy issues like dangerous capability evals, digital sentience, and governing agentic systems, and I’m so glad the company supported the neglected, slightly weird kind of policy research that becomes important when you take seriously the possibility of transformative AI

Campbell acknowledged OpenAI’s past support for unconventional safety research while highlighting the critical areas she focused on. This quote reveals the sophisticated nature of safety work being conducted at the company.

The simple thing was we just needed vastly more capital than we thought we could attract — not that we thought, we tried — than we were able to attract as a nonprofit

Sam Altman defended OpenAI’s controversial transition to a for-profit structure in a Harvard Business School interview, arguing that the company’s ambitious goals require funding levels impossible to achieve as a nonprofit organization.

It should be a question for society. It should not be OpenAI to decide on its own how ChatGPT, or how the technology in general, is used or not used

Altman pushed back against the notion that OpenAI alone should set industry safety standards, suggesting broader societal involvement is necessary. This statement comes amid criticism that the company is deprioritizing internal safety research.

Our Take

The steady departure of safety researchers from OpenAI reveals a troubling pattern that extends beyond normal employee turnover. When multiple senior experts in AI safety independently conclude they can better pursue their mission outside the company, it signals a fundamental misalignment between OpenAI’s stated values and its operational reality. The dissolution of the AGI Readiness team is particularly alarming—this wasn’t a peripheral function but a core component of responsible AGI development. While Altman’s arguments about capital needs are pragmatic, they don’t address the central concern: whether the pursuit of funding and market dominance is compromising the very safety research that makes advanced AI development responsible. The AI industry is watching closely, as OpenAI’s trajectory may set precedents for how other companies balance innovation, commercialization, and safety as AI capabilities continue to advance toward potentially transformative levels.

Why This Matters

Campbell’s resignation represents another significant blow to OpenAI’s safety infrastructure and raises serious questions about the company’s priorities as it pursues aggressive commercialization. The dissolution of the AGI Readiness team is particularly concerning, as this group was specifically tasked with ensuring humanity’s preparedness for potentially transformative AI technology.

This exodus of safety-focused researchers suggests a growing tension between OpenAI’s commercial ambitions and its founding mission to develop AGI that benefits humanity. The pattern is unmistakable: multiple senior researchers with deep expertise in AI safety are choosing to leave, citing similar concerns about the company’s direction.

For the broader AI industry, this trend highlights the fundamental challenge of balancing rapid innovation with responsible development. As AI capabilities advance and commercial pressures intensify, maintaining robust safety research becomes increasingly critical yet potentially more difficult. Campbell’s decision to pursue safety work externally may signal that independent research organizations and regulatory bodies will need to play a larger role in ensuring AI development remains aligned with human values and societal benefit.


Source: https://www.businessinsider.com/openai-safety-researcher-rosie-cambell-quits-agi-readiness-miles-brundage-2024-12