OpenAI Hires Anthropic Safety Chief to Tame "Extremely Powerful" New AI Models

OpenAI poaches top safety leader from rival Anthropic, bracing for catastrophic risks from next-generation models.

February 4, 2026

OpenAI has appointed Dylan Scandinaro, a prominent AI safety researcher who previously worked at rival firm Anthropic, as its new Head of Preparedness, signaling an escalation in the race for specialized talent to manage the severe risks associated with next-generation artificial intelligence. The high-profile hire, announced by CEO Sam Altman, comes with an explicit warning that the company expects to work with "extremely powerful models" soon and will need "commensurate safeguards" to ensure they continue to benefit humanity. The appointment fills a high-stakes role that had been vacant for several months following a period of safety leadership turnover, underscoring OpenAI's effort to stabilize and strengthen its risk mitigation framework as its technological progress accelerates.[1][2][3][4]
The Head of Preparedness role, which was publicly advertised with a salary of up to $555,000 a year, is central to OpenAI's strategy for confronting "catastrophic risks."[5][6][4] Scandinaro will lead efforts to identify, test, and reduce the potential for severe harm from advanced AI systems. The Preparedness team's mandate is to track and prepare for frontier capabilities that could cause widespread harm across a set of monitored categories, including cybersecurity, biological and chemical capabilities, and, most speculatively, the challenge of maintaining human control over potential AI self-improvement.[7][8][9] Scandinaro's move from Anthropic's technical staff highlights a recognition shared by both companies: the pursuit of Artificial General Intelligence (AGI) presents "risks of extreme and even irrecoverable harm," a warning Scandinaro himself echoed upon joining OpenAI.[3][10][11]
Scandinaro's background at Anthropic, a company founded by former OpenAI employees and explicitly positioned as a safety-first AI developer, injects alignment expertise directly into OpenAI's core leadership.[6][12] Anthropic, led by siblings Dario and Daniela Amodei, has distinguished itself through a strong focus on guardrails, including the development of its Constitutional AI framework, which aims to keep deployed systems from behaving unpredictably.[6] That focus contrasts with a narrative that has sometimes dogged OpenAI: over the years, several key researchers and co-founders have departed citing concerns that the company was prioritizing rapid commercial scaling over safety and long-term alignment, and many of them went on to join Anthropic.[12][13][14] By recruiting a senior member from its most philosophically aligned competitor, OpenAI has not only gained a highly regarded expert whom Altman called "by far the best candidate" but also scored a significant victory in the ongoing, high-stakes talent war for AI safety researchers.[1][6][3]
The timing of the appointment is telling, coinciding with a period of significant movement in OpenAI's safety leadership. The Head of Preparedness role had been open since its former leader was reassigned to work on AI reasoning.[7][4] The gap in permanent leadership came amid broader scrutiny, as other safety-focused personnel shifted roles or left the company entirely.[7][15] That instability put pressure on the organization's Preparedness Framework, which is designed to provide a systematic approach to tracking and mitigating risks from frontier AI. The framework operates through risk "scorecards" evaluated at major computational milestones, with predefined thresholds dictating the precautions required before a model can be deployed.[8][9] Scandinaro's arrival suggests a commitment to re-solidify these processes just as Altman publicly warns that development is about to "move quite fast." The CEO's comments imply that the next generation of models is nearing release, and that its capabilities will demand a cross-functional safety leader able to implement company-wide changes and address the severe risks anticipated.[1][16] Scandinaro's immediate challenge will be integrating the safety principles honed at Anthropic into OpenAI's fast-paced development culture and ensuring that the company's preparedness standards, which include proactive safety testing and outside audits, keep pace with unprecedented technological advancement.[6][8][9]
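The public description of the framework, scorecards per tracked risk category measured against predefined thresholds that gate deployment, maps naturally onto a simple policy check. The sketch below is purely illustrative: the category keys, risk levels, threshold, and model name are all assumptions made for this example, since OpenAI has not published the Preparedness Framework as code.

```python
# Illustrative sketch only. Category names, risk levels, and the
# gating rule are assumptions inferred from the article's description
# of risk "scorecards" and predefined deployment thresholds; this is
# not OpenAI's actual implementation.
from dataclasses import dataclass
from enum import IntEnum


class RiskLevel(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3


# Tracked categories mentioned in the article (hypothetical keys).
TRACKED_CATEGORIES = ("cybersecurity", "bio_chem", "self_improvement")

# Hypothetical policy: the maximum risk level at which deployment
# may proceed once the corresponding safeguards are in place.
DEPLOYMENT_THRESHOLD = RiskLevel.HIGH


@dataclass
class Scorecard:
    """Risk evaluation for one model at a computational milestone."""
    model_name: str
    scores: dict[str, RiskLevel]

    def blocking_categories(self) -> list[str]:
        """Categories whose score exceeds the deployment threshold."""
        return [
            cat for cat in TRACKED_CATEGORIES
            if self.scores.get(cat, RiskLevel.LOW) > DEPLOYMENT_THRESHOLD
        ]

    def may_deploy(self) -> bool:
        """Deployment is gated on every tracked category staying at
        or below the predefined threshold."""
        return not self.blocking_categories()


card = Scorecard(
    model_name="frontier-model-x",  # hypothetical model name
    scores={
        "cybersecurity": RiskLevel.HIGH,
        "bio_chem": RiskLevel.MEDIUM,
        "self_improvement": RiskLevel.CRITICAL,
    },
)

if not card.may_deploy():
    print("Hold deployment; escalate:", card.blocking_categories())
```

Under this reading, a single category crossing its threshold is enough to hold a release pending further mitigation, which matches the article's account of thresholds dictating the precautions required for deployment.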
The competition between OpenAI and Anthropic, which began with Anthropic's founding as an exodus of safety-conscious talent from OpenAI, has entered a phase of aggressive, reciprocal recruitment in the specialized field of AI safety. The struggle for top alignment researchers reflects a growing industry consensus on the urgency of mitigating extreme risks before the advent of AGI. Scandinaro's immediate responsibility will be to oversee safeguards as OpenAI enters what its leadership characterizes as an era of "extremely powerful models," a clear signal that the company sees its own technological progress reaching a new, more hazardous frontier. The success of this appointment will serve as a critical test of OpenAI's renewed commitment to safety as it races to deploy its most advanced creations, a deployment that could reshape the technological and societal landscape.

Sources