OpenAI uses legal pressure to silence AI regulation advocates, critics claim
OpenAI faces backlash for subpoenas targeting advocates of California's pioneering AI safety law, igniting a debate over industry control.
October 11, 2025

In a move that has sent ripples through the artificial intelligence community, OpenAI is facing accusations of employing legal pressure to intimidate advocates for stricter AI regulation.[1][2][3] Reports have emerged that the prominent AI research and deployment company has served subpoenas to several civil society groups and individuals who were instrumental in the passage of California's new AI law, SB 53.[1] Those targeted by the legal actions claim the subpoenas are a thinly veiled attempt to silence critics and chill the burgeoning movement for greater oversight of powerful AI technologies. OpenAI, however, maintains that its actions are a standard part of a separate legal dispute and are aimed at uncovering potential conflicts of interest among its detractors. The controversy highlights the growing tensions between the rapid advancement of AI and the push for meaningful accountability in the industry.
At the heart of the issue are subpoenas delivered to organizations such as Encode, a small nonprofit focused on AI policy, and The Midas Project, an AI watchdog group.[1][2][3] Nathan Calvin, the general counsel for Encode, brought the issue to public attention, alleging that a sheriff's deputy served him a subpoena at his home.[2][3] The legal demand reportedly sought access to a wide range of private communications, including messages related to the drafting of and advocacy for California's SB 53, a landmark piece of legislation that imposes new transparency and safety requirements on advanced AI developers.[2][3] Similarly, the founder of The Midas Project reported receiving a subpoena demanding extensive documentation of his organization's communications.[2] Advocates who received the subpoenas view them as a form of harassment from a multi-billion-dollar corporation, intended to drain their limited resources and discourage their work.[2][4] They argue that the legal maneuver is designed to create a climate of fear among those who question the motives and safety practices of major AI labs.
OpenAI has publicly defended its use of subpoenas, framing them as a necessary component of its ongoing and contentious legal battle with Elon Musk. The company's chief strategy officer, Jason Kwon, has stated that the subpoenas are intended to preserve evidence in the lawsuit in which Musk has accused OpenAI of abandoning its original nonprofit mission.[1] OpenAI's legal team has suggested that some of its critics may be secretly funded or influenced by competitors like Musk.[4] The subpoenas seek information about the advocacy groups' funding and any potential connections to Musk.[4][5] Kwon has asserted that serving subpoenas is a standard legal procedure and not an attempt to initiate new lawsuits against the advocacy organizations.[1][6] The company has also stated that it did not oppose SB 53, but rather offered "comments for harmonization with other standards."[1] This defense, however, has been met with skepticism by the recipients of the subpoenas, who maintain they have no financial ties to Musk and see the legal actions as a direct response to their successful advocacy for stronger AI regulation.[2]
The backdrop for this conflict is the recent passage of California's Senate Bill 53, a pioneering law that establishes a new regulatory framework for the most advanced AI models, often referred to as "frontier models."[7][8][9] Signed into law in September, SB 53 mandates that large AI developers publicly disclose their safety protocols, report any "critical safety incidents," and assess the risks of "catastrophic harm," which includes scenarios like the creation of bioweapons or large-scale cyberattacks.[9][10][11] The law also includes significant whistleblower protections for employees who raise safety concerns.[10][12][13] Supporters of SB 53 view it as a crucial first step toward ensuring that the development of powerful AI prioritizes public safety and transparency.[8] The law's passage in California, home to many of the world's top AI companies, is seen as a potential blueprint for other states and even federal legislation, particularly in the absence of a comprehensive national AI policy.[7][8] The allegations of OpenAI's pressure tactics against the bill's supporters have therefore raised serious questions about the industry's willingness to engage constructively with regulatory efforts.
The controversy surrounding OpenAI's subpoenas has significant implications for the future of AI governance and the relationship between the tech industry and civil society. Critics argue that if a leading company like OpenAI is perceived as using its vast legal resources to intimidate smaller advocacy groups, it could have a chilling effect on public discourse and the willingness of others to challenge the industry. The situation underscores a fundamental debate within the AI field about the balance between innovation and regulation. While AI companies often express a commitment to safety, their actions are being closely scrutinized for any attempts to undermine independent oversight. The incident has also drawn commentary from within the AI community, with some, including a former OpenAI board member, expressing concern over the company's "dishonesty & intimidation tactics in their policy work."[2][6] As AI technology becomes increasingly powerful and integrated into society, the methods by which its developers engage with critics and regulators will be a critical factor in building public trust and ensuring that this transformative technology is developed and deployed responsibly.