OpenAI's Mission Betrayed: Insiders Warn Profit Trumps AI Safety

Alarms from former staff: OpenAI's relentless pursuit of profit now overshadows safety, endangering humanity's AI future.

June 19, 2025

A growing chorus of former employees from OpenAI, the world's most prominent artificial intelligence lab, is raising alarms that the organization is betraying its founding mission of ensuring artificial general intelligence benefits all of humanity. In a series of public letters and statements, these ex-staffers allege that a relentless pursuit of profit and a culture of secrecy have supplanted the company's original commitment to safety and ethics. The concerns, detailed in what has been informally dubbed "The OpenAI Files," paint a picture of a company at a critical crossroads, where the race for technological supremacy and financial returns may be dangerously overshadowing the profound risks of the very technology it is creating. These insiders warn that without significant changes to its governance and a renewed focus on public accountability, OpenAI could be steering toward a future where powerful AI systems are developed without adequate safeguards, posing potential risks that range from exacerbating societal inequalities to catastrophic, unforeseen consequences.
A primary focus of the ex-employees' campaign is the call for greater transparency and stronger protections for whistleblowers. In a public letter, a group of 13 former and current employees of OpenAI and Google DeepMind argued that AI companies have strong financial incentives to avoid effective oversight.[1][2] They contend that confidentiality agreements are being used to silence employees who have concerns about the risks of the technology.[3] The letter, endorsed by AI pioneers like Geoffrey Hinton and Yoshua Bengio, calls on AI companies to cease enforcing non-disparagement agreements that could penalize former employees for speaking out by stripping them of vested equity.[4][5] The signatories, some of whom remained anonymous for fear of retaliation, demand a verifiably anonymous process for raising concerns with a company's board, regulators, and independent expert organizations.[3][5] This push for a "right to warn" stems from the belief that current and former employees are among the few people with sufficient knowledge to hold these powerful companies accountable, yet they are often prevented from doing so.[4][2] One former employee, Daniel Kokotajlo, said he left OpenAI after losing hope that the company would act responsibly as it approaches artificial general intelligence (AGI), accusing it and others of adopting a "move fast and break things" approach.[4]
Further escalating the pressure, another contingent of former OpenAI employees has taken legal and regulatory action to challenge the company's corporate structure.[6] Ten former staffers, supported by three Nobel laureates and other AI experts, sent a letter to the attorneys general of California, where OpenAI is headquartered, and Delaware, where it is incorporated, urging them to block the company's planned conversion to a for-profit public benefit corporation.[6][7][8] This group argues that such a restructuring would irrevocably undermine the original nonprofit's mission to prioritize public benefit over shareholder returns.[6][9] They fear that once the company is no longer primarily accountable to its public mission, crucial safeguards could "vanish overnight."[6] Page Hedley, a former policy and ethics adviser at OpenAI, voiced concerns about who will ultimately own and control the technology, suggesting that the company has increasingly taken shortcuts on safety testing to outpace competitors.[9] In response to this pressure, OpenAI announced it was reversing course on a full conversion, stating that its nonprofit arm would continue to control the company.[10] However, the plan still calls for the for-profit arm to become a public benefit corporation, a structure the former employees argue continues to prioritize financial returns to an unacceptable degree.[10][9]
The internal turmoil at OpenAI has been further highlighted by a series of high-profile departures and changes to its safety-focused teams. Jan Leike, who co-led the company's Superalignment team dedicated to managing the long-term risks of superintelligent AI, resigned, stating that "safety culture and processes have taken a backseat to shiny products."[11][12] He said his team had been "sailing against the wind," struggling to obtain the resources it needed to carry out its crucial research.[13][14][12] His departure followed that of OpenAI co-founder and chief scientist Ilya Sutskever, another key figure in the company's safety efforts.[13][14] In the wake of these resignations, OpenAI announced the dissolution of the Superalignment team and the formation of a new Safety and Security Committee.[15][16][14][17] The committee, led by CEO Sam Altman alongside other board members and company insiders, is tasked with making recommendations on safety and security.[15][16][18][19] However, its composition, largely made up of individuals who supported Altman during his brief ouster, has drawn criticism for potentially lacking independent oversight.[20][18] Critics suggest the new structure may do little to address the core concern that profits are being prioritized over safety.[20]
The confluence of these events paints a concerning picture for the future of AI development, not just at OpenAI but across the industry. The former employees' actions have brought to light the inherent conflict between the immense financial opportunities of advanced AI and the profound, and in some cases existential, risks it may pose. While OpenAI maintains that its structure ensures the nonprofit's mission remains central and that it is committed to robust safety practices, the claims from those who have recently departed suggest a significant cultural shift has occurred within the influential lab.[6][7][4] The ongoing debate raises fundamental questions about corporate governance, accountability, and the effectiveness of self-regulation in an industry developing technology with the potential to reshape society. The outcome of these challenges—whether through legal intervention, internal reform, or continued public pressure—will likely have a lasting impact on the trajectory of artificial intelligence and the public's trust in the organizations building it.
