OpenAI is struggling to fill a critical safety position that pays $555,000 annually plus equity, as industry experts warn the role may be nearly impossible to execute successfully. The head of preparedness position has been vacant since Aleksander Madry transitioned to a new role in July 2024, leaving a crucial gap in the company’s Safety Systems team.
The challenge lies in the inherent tension between AI safety concerns and CEO Sam Altman’s aggressive product release schedule. In 2025 alone, OpenAI launched the Sora 2 video app, Instant Checkout for ChatGPT, new AI models, developer tools, and advanced agent capabilities, demonstrating the company’s breakneck pace of innovation.
Maura Grossman, a research professor at the University of Waterloo’s School of Computer Science, described the role as “close to an impossible job” because the person filling it will need to tell Altman to slow down or abandon certain goals—essentially “rolling a rock up a steep hill.” Even Altman himself acknowledged the position’s intensity, writing on X that “this will be a stressful job, and you’ll jump into the deep end pretty much immediately.”
The job posting notably lacks traditional requirements like college degrees or minimum years of experience. Instead, OpenAI seeks someone who has led technical teams, can make high-stakes technical judgments under uncertainty, can align diverse stakeholders around safety decisions, and has deep expertise in machine learning, AI safety, evaluation, security, or adjacent risk domains.
Richard Lachman, a professor of digital media at Toronto Metropolitan University, suggests OpenAI needs a seasoned tech-industry executive rather than an academic, since academics tend to be more cautious and risk-averse. He expects the company to seek someone who can protect its public image on safety while allowing rapid innovation and growth to continue.
The urgency for this role comes amid growing safety concerns. Several prominent early employees, including a former head of the safety team, have resigned over OpenAI’s approach to safety. The company faces lawsuits alleging its technology reinforces delusions and drives harmful behavior. In October 2025, OpenAI acknowledged that some ChatGPT users exhibited possible signs of mental health problems, prompting collaboration with mental health experts to improve its responses to users showing signs of psychosis, mania, self-harm, suicidal intent, or unhealthy emotional attachment.
Key Quotes
This is close to an impossible job, because at times the person in it will likely need to tell Altman to slow down or that certain goals shouldn’t be met. They’ll be rolling a rock up a steep hill.
Maura Grossman, research professor at the University of Waterloo’s School of Computer Science, explains why the head of preparedness role presents such extraordinary challenges, highlighting the inherent conflict between safety oversight and OpenAI’s rapid product development pace.
This will be a stressful job, and you’ll jump into the deep end pretty much immediately.
Sam Altman, OpenAI’s CEO, candidly acknowledged the intensity of the position on X (formerly Twitter), providing rare transparency about the demanding nature of the role and the immediate pressures the new hire will face.
This is not quite a ‘yes person,’ but somebody who’s going to be on brand.
Richard Lachman, professor of digital media at Toronto Metropolitan University, describes the delicate balance OpenAI seeks—someone who will protect the company’s safety image while not significantly impeding its aggressive growth and innovation strategy.
Our Take
The struggle to fill this position exposes a critical vulnerability in OpenAI’s governance structure and raises questions about whether meaningful AI safety oversight is compatible with Silicon Valley’s “move fast” culture. The $555,000 salary, generous by most standards, may be insufficient compensation for the professional risk of a role where success means constantly pushing back against a CEO known for prioritizing rapid deployment.
This situation mirrors broader patterns across the AI industry, where safety teams are often under-resourced and overruled. The resignation of prominent safety researchers and the acknowledgment of mental health concerns among users suggest OpenAI’s current approach may be inadequate. The next head of preparedness will essentially be testing whether corporate AI safety roles can have real teeth, or whether they’re primarily performative positions designed to reassure regulators and the public while business continues as usual.
Why This Matters
This story reveals fundamental tensions at the heart of AI development: the conflict between rapid innovation and responsible safety practices. As OpenAI leads the generative AI revolution, its struggle to fill this critical safety role exposes broader industry challenges around AI governance and accountability.
The position’s difficulty reflects a systemic problem in the AI industry—companies racing to deploy powerful technologies while safety infrastructure struggles to keep pace. The fact that even a $555,000 salary may not attract qualified candidates suggests deep concerns about whether such roles can be effective when corporate pressure favors speed over caution.
For businesses and society, this matters because OpenAI’s products affect millions of users globally. The acknowledged mental health concerns and employee resignations signal real risks that extend beyond theoretical debates. How OpenAI resolves this leadership gap will likely influence industry-wide standards for AI safety governance and set precedents for how tech companies balance innovation with responsibility in the age of increasingly powerful AI systems.
Related Stories
- OpenAI Lost Nearly Half of Its AI Safety Team, Ex-Researcher Says
- Sam Altman and Jony Ive Partner on AI Device, Target $1B Funding
- OpenAI’s Competition Sparks Investor Anxiety Over Talent Retention at Microsoft, Meta, and Google
- CEOs Express Insecurity About AI Strategy and Implementation
- Goldman Sachs Hires Google’s Melissa Goldman as Tech Head for AI Push
Source: https://www.businessinsider.com/challenges-of-openai-head-of-preparedness-role-2025-12