OpenAI has restructured its Safety and Security Committee, removing CEO Sam Altman and board chair Bret Taylor from the oversight group in a move toward greater independence. The revamped committee now consists exclusively of independent board members, addressing longstanding criticism that the watchdog body couldn’t effectively govern the company with its CEO as a member.
The committee was originally formed in May 2024 following a wave of high-profile departures from OpenAI, after former employees publicly voiced concerns that the ChatGPT maker was not governing its AI development responsibly. The initial composition included Altman, Taylor, and five OpenAI technical and policy experts alongside the independent board members, a structure that raised questions about potential conflicts of interest.
The new committee is now chaired by Zico Kolter, a professor at Carnegie Mellon University, and includes Quora co-founder and CEO Adam D’Angelo, retired US Army General Paul Nakasone, and Nicole Seligman, former general counsel of Sony Corporation. All four members also serve on OpenAI’s board of directors. According to the company’s blog post, the safety committee will “exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed.”
The restructuring comes amid mounting scrutiny of OpenAI’s governance and safety practices. The committee has already reviewed the safety assessment of o1, a new series of AI models designed to spend more time reasoning before responding. However, OpenAI continues to face criticism on multiple fronts. Last month, the company opposed California’s AI safety bill, arguing it would stifle innovation and drive companies out of the state, a position that disappointed former employees who had joined OpenAI specifically to ensure AI safety.
Former OpenAI researchers William Saunders and Daniel Kokotajlo wrote in a letter: “We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing. But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems.”
Additional controversies have plagued the company, including whistleblowers who contacted the SEC in July alleging that OpenAI’s non-disclosure agreements illegally restricted employees from raising concerns with regulators, and nine current and former employees who signed an open letter highlighting the risks of generative AI. OpenAI’s corporate structure also remains confusing to Silicon Valley observers: the company transitioned from a nonprofit to a capped-profit model in 2019, and recent reports suggest a potential shift to a traditional for-profit company. OpenAI is currently raising funds at a $150 billion valuation, which would exceed the market capitalization of over 88% of S&P 500 firms, including Goldman Sachs, Uber, and BlackRock.
Key Quotes
“We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing. But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems.”
Former OpenAI researchers William Saunders and Daniel Kokotajlo wrote this in a letter criticizing the company’s opposition to California’s AI safety bill. The quote underscores the internal concerns about OpenAI’s commitment to safety that prompted the committee restructuring.
“exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed”
This statement from OpenAI’s blog post defines the new committee’s powers, giving independent board members significant authority to halt AI model deployments—a crucial safeguard as the company develops increasingly powerful systems.
Our Take
The removal of Sam Altman from OpenAI’s safety committee is more than symbolic—it’s a necessary response to a credibility crisis. When the people building AI systems also control their safety oversight, conflicts of interest are inevitable. This restructuring acknowledges what critics have argued for months: effective AI governance requires independence from commercial pressures.
However, questions remain. Will an independent committee have sufficient technical expertise and access to truly evaluate cutting-edge AI systems? Can four board members effectively oversee a company racing toward AGI while valued at $150 billion? The real test will be whether this committee actually delays releases when safety concerns arise, or becomes a rubber stamp for predetermined decisions.
This move reflects a broader industry reckoning: as AI capabilities grow exponentially, the governance structures from tech’s “move fast and break things” era are inadequate. OpenAI’s restructuring may set a precedent, but only if the committee demonstrates genuine independence and authority in practice, not just on paper.
Why This Matters
This restructuring represents a critical moment in AI governance as the industry grapples with balancing rapid innovation against safety concerns. OpenAI’s decision to remove its CEO from the safety committee signals a response to mounting pressure from former employees, regulators, and the broader AI community demanding more independent oversight of powerful AI systems.
The move comes at a pivotal time when AI companies face increasing scrutiny over their development practices and potential societal risks. With OpenAI valued at $150 billion and its ChatGPT technology influencing millions globally, the company’s governance structure sets precedents for the entire AI industry. The committee’s authority to delay model releases could significantly impact the pace of AI deployment and establish new standards for safety protocols.
For businesses and policymakers, this development highlights the growing tension between AI innovation and responsible development. As generative AI becomes more powerful and integrated into critical systems, the question of who oversees these technologies—and whether that oversight is truly independent—will shape the future of AI regulation and corporate accountability across the tech sector.
Related Stories
- OpenAI CEO Sam Altman Hints at Potential Restructuring in 2024
- OpenAI’s Valuation Soars as AI Race Heats Up
- Sam Altman’s Bold AI Predictions: AGI, Jobs, and the Future by 2025
- Elon Musk Drops Lawsuit Against ChatGPT Maker OpenAI, No Explanation
- Elon Musk Warns of Potential Apple Ban on OpenAI’s ChatGPT