Anthropic breaks ranks, backs California's AI safety bill amid federal void.
Anthropic breaks with its rivals by backing California's AI transparency bill, which could shape future national regulation as federal efforts stall.
September 8, 2025

In a significant move that highlights the growing urgency for artificial intelligence regulation, AI safety and research company Anthropic has announced its support for California's Senate Bill 53, a legislative proposal aimed at increasing transparency and safety in the development of advanced AI models. This endorsement positions Anthropic in stark contrast to some of its major competitors and underscores a pivotal moment in the governance of artificial intelligence, with the company citing the sluggish pace of federal action as a key motivator for backing state-level intervention. Anthropic's stance suggests a belief that the rapid advancements in AI cannot afford to wait for a consensus to form in Washington, making California's initiative a necessary and timely step forward.[1][2]
At the heart of Anthropic's decision is the argument that while federal oversight of AI is the ideal scenario, the absence of a comprehensive national standard necessitates more immediate action.[1][3] The company has expressed that California's SB 53 could serve as a solid blueprint for future national regulations.[3] This move comes after the failure of a more stringent predecessor bill, SB 1047, which was vetoed by Governor Gavin Newsom after facing significant opposition from much of the tech industry.[4] SB 53 is seen as a more measured, "watered-down" version, focusing primarily on transparency rather than the prescriptive technical mandates and liability provisions that characterized SB 1047 and drew the ire of its critics.[5][4] The revised bill requires large AI developers to publish their safety and security protocols and report critical safety incidents to the state, and it provides whistleblower protections for employees who raise concerns about potential risks.[6][1] By focusing on these aspects, SB 53 attempts to strike a balance between fostering innovation and ensuring public safety, a consideration at the center of the AI regulation debate.[7]
The perceived inaction at the federal level provides a crucial backdrop to Anthropic's support for California's legislative efforts. While there have been numerous discussions and proposed bills in Washington, D.C., a comprehensive federal framework for AI regulation remains elusive.[8] The U.S. Congress has struggled to pass substantial AI legislation, with partisan conflicts and the rapid pace of technological advancement creating significant hurdles.[8] This has led to a "patchwork" of state-level initiatives, with states like Colorado, New York, and Texas forging ahead with their own regulations.[9][10] Some in Washington have even proposed a moratorium on state-level AI regulation, arguing that a fragmented legal landscape could stifle innovation and hinder the United States' competitiveness, particularly with respect to China.[11][12] However, the failure of such a moratorium to pass the Senate has left the door open for states like California to take the lead.[9] Proponents of state-level action argue that it allows for more nimble and responsive governance in the face of rapidly evolving technology.[9]
Anthropic's endorsement of SB 53 has not been met with universal acclaim within the AI industry. Other major players, including OpenAI and Google, have historically pushed back against state-level regulations, advocating for a more uniform federal approach.[3] Opponents of SB 53, including tech trade groups like the Chamber of Progress and TechNet, argue that the bill's requirements could still be burdensome, potentially forcing companies to disclose trade secrets and creating a challenging compliance environment.[7] They express concern that even a transparency-focused bill could inadvertently stifle innovation, particularly for smaller companies, and create a "California-only" set of rules that could disadvantage the state's thriving tech ecosystem.[7][3] These differing views highlight a fundamental division within the AI industry on the best path forward for regulation, with some prioritizing immediate safety guardrails and others emphasizing the need to maintain a competitive edge in a global market.
The implications of California passing a law like SB 53, especially with the backing of a prominent AI lab like Anthropic, could be far-reaching. As the home of Silicon Valley and a global hub for AI development, California often sets a legislative precedent for other states and even other countries.[13] The passage of SB 53 could create a de facto national standard, compelling other states and perhaps even the federal government to adopt similar transparency and safety requirements. This could produce a "California effect," in which companies that comply with California's regulations find it easier to adapt to new rules in other jurisdictions. Furthermore, Anthropic's proactive stance on regulation may signal a shift in how AI companies approach their responsibilities, moving from a purely innovation-focused mindset to one that more deeply integrates safety and public trust.[14] The success or failure of SB 53 will be a closely watched development, with the potential to shape the future of AI governance in the United States and beyond.