OpenAI has made a significant hire in its AI safety operations, bringing on Dylan Scandinaro from rival AI lab Anthropic to serve as its new head of preparedness. The position comes with a substantial compensation package of up to $555,000 plus equity, reflecting the critical importance OpenAI places on AI safety as its models become increasingly powerful.
Sam Altman announced the appointment on X (formerly Twitter) on Wednesday, expressing his enthusiasm about the hire. “Things are about to move quite fast and we will be working with extremely powerful models soon,” Altman wrote, emphasizing that Scandinaro “is by far the best candidate I have met, anywhere, for this role.” The OpenAI CEO’s comments suggest the company is preparing for significant advances in its AI capabilities that will require robust safety oversight.
Scandinaro, a former AI safety researcher at Anthropic, acknowledged the gravity of his new position in his own post on X. “AI is advancing rapidly. The potential benefits are great — and so are the risks of extreme and even irrecoverable harm,” he stated, while expressing gratitude for his time at Anthropic and the colleagues he worked with there.
The role itself is described as highly demanding. Last month, Altman characterized the job as “stressful,” warning that candidates would “jump into the deep end almost immediately.” According to the job posting, OpenAI is seeking someone who can lead technical teams, make high-stakes decisions under uncertainty, and align competing stakeholders around safety decisions. The ideal candidate should possess deep expertise in machine learning, AI safety, and related risk areas.
This hire comes amid growing tensions over OpenAI’s approach to safety. Several early employees, including a former head of its safety team, have departed the company in recent years. The organization has also faced legal challenges from users who allege its tools contributed to harmful behavior.
In October, OpenAI revealed concerning statistics about ChatGPT usage, stating that an estimated 560,000 users per week show “possible signs of mental health emergencies.” The company indicated it was consulting mental health specialists to improve how the chatbot responds when users display signs of psychological distress or unhealthy dependence on the platform.
Key Quotes
Things are about to move quite fast and we will be working with extremely powerful models soon. Dylan will lead our efforts to prepare for and mitigate these severe risks. He is by far the best candidate I have met, anywhere, for this role.
Sam Altman, OpenAI’s CEO, made this statement when announcing Scandinaro’s appointment on X. The comment signals that OpenAI expects significant near-term advances in AI capabilities and underscores the critical importance of the preparedness role.
AI is advancing rapidly. The potential benefits are great — and so are the risks of extreme and even irrecoverable harm.
Dylan Scandinaro shared this perspective in his announcement post about joining OpenAI. His acknowledgment of “irrecoverable harm” highlights the existential nature of the risks he’ll be tasked with managing in his new role.
You’ll jump into the deep end almost immediately.
Sam Altman used this phrase last month when describing the head of preparedness position as “stressful.” It illustrates the urgent and high-pressure nature of AI safety work at OpenAI as the company develops increasingly powerful systems.
Our Take
This hire represents more than just filling a vacancy; it’s a strategic move that could reshape OpenAI’s safety culture. Bringing in talent from Anthropic, a company explicitly founded on AI safety principles, sends a powerful message about OpenAI’s renewed commitment to responsible development. However, the real test will be whether Scandinaro is empowered to actually slow down or halt deployments when necessary.
The salary of up to $555,000 is telling: it positions safety work as being as valuable as cutting-edge research, which hasn’t always been the case in AI labs. Yet with an estimated 560,000 weekly users showing possible signs of mental health emergencies, OpenAI faces a credibility gap. The company must demonstrate that this isn’t just expensive window dressing but a genuine shift toward prioritizing safety over speed. As Altman hints at “extremely powerful models” coming soon, Scandinaro’s effectiveness in this role could determine whether OpenAI successfully navigates the treacherous path between innovation and responsibility.
Why This Matters
This appointment represents a critical moment for AI safety governance as leading AI companies race to develop increasingly powerful models. The eye-catching salary package, among the highest for safety roles in the industry, signals that OpenAI is taking preparedness seriously amid mounting criticism of its safety practices.
The hire is particularly significant given the ongoing talent war between major AI labs like OpenAI and Anthropic, with both companies competing for top safety researchers. Scandinaro’s move from Anthropic, a company founded partly over safety concerns about OpenAI’s approach, adds a notable dimension to this competitive landscape.
The timing is crucial as OpenAI prepares to deploy “extremely powerful models,” according to Altman’s comments. With an estimated 560,000 weekly users showing possible signs of mental health emergencies and multiple safety-related departures in recent years, the company faces pressure to demonstrate its commitment to responsible AI development. This hire could help rebuild trust with regulators, researchers, and the public as AI capabilities advance rapidly and the potential for both benefit and harm grows.
Related Stories
- OpenAI Lost Nearly Half of Its AI Safety Team, Ex-Researcher Says
- OpenAI’s Competition Sparks Investor Anxiety Over Talent Retention at Microsoft, Meta, and Google
- Sam Altman and Jony Ive Partner on AI Device, Target $1B Funding
- AI Pioneer Geoffrey Hinton Warns of Superintelligent AI by 2025
- Goldman Sachs Hires Google’s Melissa Goldman as Tech Head for AI Push