OpenAI Loses Another Top Safety Researcher as Miles Brundage Exits

Miles Brundage, a senior policy advisor and head of OpenAI’s AGI Readiness team, has announced his departure from the company, marking another significant exit from the AI giant’s safety research division. Brundage revealed his decision on Wednesday through a post on X (formerly Twitter) and an accompanying Substack article detailing his reasons for leaving.

The AGI Readiness team, which Brundage led, will be disbanded, with its members redistributed across other departments within OpenAI. This reorganization marks a significant shift in how the company approaches artificial general intelligence (AGI) preparedness and safety research.

Brundage’s departure continues a troubling pattern of high-profile safety researcher exits from OpenAI. In May 2024, the company dissolved its entire Superalignment team, which had been dedicated to studying the risks associated with artificial superintelligence. This dissolution followed the departures of the team’s two leaders, Jan Leike and Ilya Sutskever, both prominent figures in AI safety research.

The exodus extends beyond safety researchers. Recent months have seen the departure of several key executives, including Mira Murati (Chief Technology Officer), Bob McGrew (Chief Research Officer), and Barret Zoph (Vice President of Research). OpenAI did not respond to requests for comment regarding these departures.

For six years, Brundage served as a crucial advisor to OpenAI’s leadership, counseling executives and board members on preparing for AGI—artificial intelligence that matches or exceeds human cognitive abilities. Many experts believe such technology could fundamentally reshape society, making Brundage’s role particularly critical.

Brundage’s contributions to AI safety have been substantial. He pioneered several of OpenAI’s most important safety practices, including external red teaming, a process in which independent experts probe OpenAI products for potential vulnerabilities and risks before public release.

In explaining his departure, Brundage cited a need for greater independence and freedom to publish his research. He specifically mentioned disagreements with OpenAI regarding limitations on what research he could publish publicly, stating that “the constraints have become too much.” He also acknowledged that working within OpenAI had potentially biased his research perspective, making it difficult to maintain impartiality when analyzing AI policy issues.

Perhaps most concerning, Brundage wrote that within OpenAI “speaking up has big costs and that only some people are able to do so,” pointing to internal tensions around transparency and open discussion of safety concerns.

Key Quotes

the constraints have become too much

Miles Brundage explained his decision to leave OpenAI, specifically referring to limitations placed on what research he was allowed to publish publicly. This statement highlights potential tensions between OpenAI’s desire for control over information and researchers’ need for academic freedom.

speaking up has big costs and that only some people are able to do so

Brundage described the internal culture at OpenAI in his X post, suggesting an environment where employees may face consequences for raising concerns. This characterization is particularly troubling given OpenAI’s mission to ensure AGI benefits all of humanity.

Our Take

The pattern emerging at OpenAI is deeply concerning for the AI safety community. When a company dissolves two major safety teams within months and loses numerous safety-focused researchers, it suggests a fundamental shift in priorities. Brundage’s specific mention of publishing constraints and the “costs” of speaking up points to a culture that may be prioritizing competitive advantage over transparent safety research. This is particularly problematic given OpenAI’s transition from a nonprofit to a for-profit structure and its intense competition with companies like Anthropic and Google. The irony is stark: as OpenAI’s models become more powerful and potentially closer to AGI, the infrastructure for ensuring their safety appears to be weakening. This exodus may ultimately strengthen the case for external AI regulation, as internal safety mechanisms seem insufficient when they conflict with business objectives.

Why This Matters

This departure signals growing concerns about OpenAI’s commitment to AI safety research at a critical moment when the company is racing to develop increasingly powerful AI systems. The dissolution of both the Superalignment team and the AGI Readiness team, combined with the exodus of safety-focused researchers, raises questions about whether commercial pressures are overshadowing safety considerations.

Brundage’s comments about publishing constraints and the costs of speaking up are particularly alarming for an industry in which transparency and open dialogue about risks are essential. As AI systems become more powerful and more deeply integrated into society, independent safety research becomes increasingly crucial. That Brundage felt he had to leave in order to work freely suggests a conflict between OpenAI’s business interests and unfettered safety research.

The broader AI industry is watching these developments closely, as OpenAI has positioned itself as a leader in responsible AI development. If top safety researchers feel they cannot operate effectively within the company, it may influence how other organizations approach AI safety and could impact regulatory discussions. This trend also affects public trust in AI development and raises important questions about corporate governance in the AI sector as we approach potentially transformative technological capabilities.

Source: https://www.businessinsider.com/another-safety-researcher-is-leaving-openai-2024-10