Trump AI Advisor Accuses Anthropic of Regulatory Capture, Ignites Fierce AI Debate
White House AI czar David Sacks accuses the company of fear-mongering its way toward regulatory capture, deepening the rift between innovation-first and safety-first visions for AI governance.
October 16, 2025

A pointed accusation from Donald Trump's artificial intelligence advisor has ignited a fierce debate over the future of AI regulation, pitting a pro-innovation, anti-regulation stance against a push for safety and transparency. David Sacks, a venture capitalist and the White House's AI czar, publicly accused the prominent AI firm Anthropic of orchestrating a "sophisticated regulatory capture strategy based on fear-mongering." Sacks claimed the company was "principally responsible for the state regulatory frenzy that is damaging the startup ecosystem," a charge that cuts to the core of the tension between fostering rapid technological advancement and building guardrails for a transformative technology. The dispute highlights a fundamental ideological divide in Silicon Valley and Washington over how to govern AI, with significant implications for competition, innovation, and safety in the burgeoning industry.
At the heart of Sacks's criticism is the belief that excessive and fragmented regulation will stifle the very innovation that is crucial for American economic and national security. The Trump administration has clearly articulated a policy focused on removing barriers to AI development to maintain global dominance, particularly against competitors like China.[1][2][3] Sacks has warned that a patchwork of state-level laws creates an impossibly complex compliance landscape for new companies, effectively favoring large, well-resourced incumbents.[4][5][6] He argues that this "regulatory frenzy," which he attributes in large part to Anthropic's advocacy, could "kill this thing in the cradle."[7][8] Sacks and the administration advocate for a single, common-sense federal standard that promotes growth and avoids what they term "ideological bias" or "woke AI," which they suggest can arise from state-level requirements focused on concepts like diversity and equity.[9][3][4][6][10] The administration's AI Action Plan emphasizes deregulation and the promotion of ideologically neutral systems, viewing burdensome requirements as a threat to American technological leadership.[1][2][9]
Anthropic, for its part, has pushed back against the accusations, framing its support for regulation as a necessary step toward responsible innovation and public trust. The company was the only major AI lab to publicly endorse California's Senate Bill 53, a landmark law requiring large AI developers to publish safety frameworks, report on catastrophic risk assessments, and provide whistleblower protections.[11][12][13][14] Anthropic co-founder Jack Clark described Sacks's criticism as "perplexing," stating the company's engagement in policy discussions is meant to be substantive and fact-based. The AI firm argues that SB 53 helps create a level playing field by making basic transparency and safety disclosures mandatory, preventing a "race to the bottom" where companies might otherwise scale back on safety to compete.[11][14] While Anthropic maintains that a unified federal standard would be the ideal outcome, it views state-level action like SB 53 as a necessary backstop given the slow pace of federal legislation.[11][15] The company asserts that the bill formalizes practices that responsible AI labs already follow and is specifically designed to exempt smaller startups from unnecessary regulatory burdens, targeting only large-scale model developers with annual revenues over $500 million.[11][15][16]
The debate over regulatory capture and its impact on the AI startup ecosystem is complex. Regulatory capture is the theory that regulatory agencies may come to be dominated by the industries or interests they are charged with regulating, resulting in rules that benefit those entities over the public interest. Critics argue that by advocating for specific, complex regulations, large companies like Anthropic can erect barriers to entry that smaller competitors cannot overcome. The costs of compliance—including legal fees, documentation, and mandatory assessments—can be prohibitive for startups, diverting resources from core research and development.[17][18][19] This could lead to market consolidation and a less competitive landscape.[17] However, proponents of regulation argue that clear rules build public trust, which is essential for the broad adoption of AI technologies.[17] They contend that standards for safety and transparency can become a competitive advantage, attracting investors and customers who prioritize ethical and responsible AI. For Anthropic, whose public benefit corporation structure is built on a foundation of safety, advocating for such rules aligns with its core mission.[20]
Ultimately, the clash between Sacks's accusations and Anthropic's defense encapsulates the central policy challenge of the AI era: balancing the drive for innovation with the need for accountability. The Trump administration's approach prioritizes speed and American dominance, viewing regulation as a hindrance that could cede leadership to geopolitical rivals.[10][21] From this perspective, advocating for safety measures is seen as fear-mongering that benefits incumbents by creating a burdensome environment for agile startups. Conversely, Anthropic and its supporters argue that safety and transparency are not obstacles to progress but prerequisites for it. They believe that without baseline guardrails enshrined in law, the immense power of AI could lead to catastrophic outcomes, eroding public trust and ultimately harming the entire industry. The resolution of this debate, whether through a single federal framework or a continued patchwork of state laws, will profoundly shape the development of artificial intelligence and determine which companies lead the charge into its uncertain future.