Anthropic urges Washington to secure AI lead over China to protect global democratic governance
Anthropic urges Washington to secure the AI supply chain before 2028 to prevent authoritarian regimes from defining global governance
May 15, 2026

The emergence of artificial intelligence as a primary theater of geopolitical competition has reached a critical juncture, as leading developers increasingly link technological progress with national security and democratic preservation.[1] Anthropic, a prominent artificial intelligence safety and research firm, has released a comprehensive policy assessment that frames the current era as a decisive window for the United States to secure its technological edge over China.[2] The report argues that the global landscape of 2028 will be defined by choices made in the immediate present: a fork in the road that will determine the shape of digital governance and international stability. By shifting the conversation from abstract safety concerns to the concrete mechanics of statecraft (chips, data centers, and export enforcement), the company is urging Washington to view the AI race as a test of national capacity that cannot be deferred.[2][3]
The central thesis of the company’s analysis rests on two divergent outcomes for the late 2020s, which illustrate the high stakes of current policy inertia.[4] In the first scenario, a proactive United States and its allies successfully consolidate their current lead in computing power, resulting in a world where democratic nations set the global norms, standards, and safety protocols for transformative AI.[3][5][6] This outcome assumes a significant gap in model intelligence remains, allowing democratic institutions to integrate AI into their economies and defense sectors while maintaining a strategic buffer. In the second, more cautionary scenario, a failure to address existing loopholes allows authoritarian regimes to achieve near-parity with frontier models.[5][6] In this world, the technological backbone of the global economy is increasingly defined by systems optimized for mass surveillance, censorship, and automated repression, exported at scale by regimes that prioritize control over openness.
Central to this competition is the concept of compute: the massive quantities of advanced semiconductor hardware required to train and run the world’s most capable models. Currently, the United States and its partners hold a commanding structural advantage, underpinned by an intricate supply chain that includes leaders in design and manufacturing like Nvidia, TSMC, and ASML.[3][7] According to industry projections cited in the paper, Chinese competitors like Huawei are currently operating at a fraction of the aggregate compute capacity of their Western counterparts.[3] Some estimates suggest that if Washington effectively tightens its grip on the hardware ecosystem, the United States could maintain a compute advantage as high as eleven times that of China.[2][8] However, this hardware moat is described as increasingly porous. The policy paper warns that despite rigorous export controls, Chinese labs have managed to stay closer to the frontier than their hardware limitations would suggest.[3][6] This is attributed to two primary factors: the persistent smuggling of advanced chips through clandestine channels and the systematic use of cloud service providers located in third-party countries to access American-made compute power remotely.
Beyond the physical hardware, the competition is being shaped by a relatively new and aggressive technical tactic known as distillation attacks. This practice involves using thousands of automated, fraudulent accounts to query leading American models at a massive scale.[3] By scraping the high-quality outputs of these frontier systems, competing labs can effectively reverse-engineer the "intelligence" of a model and use it to train their own student versions at a fraction of the original cost and time. Anthropic has documented instances where millions of interactions with its models were generated by thousands of suspicious accounts linked to major Chinese AI firms.[3] This type of intellectual property extraction effectively allows foreign competitors to bypass years of expensive research and development, turning the innovations of American labs into a shortcut for the development of authoritarian-aligned AI. The report suggests that without clear legal frameworks to criminalize these distillation attacks and improved detection mechanisms, the technical lead held by democratic nations will continue to erode.
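The mechanics behind distillation come down to a simple training objective: the student model is optimized to reproduce the teacher's full output distribution, which the attacker has harvested by querying the teacher at scale. The sketch below is a minimal NumPy version of generic soft-label distillation; it illustrates the technique in the abstract and is not a reconstruction of any specific documented attack.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's softened output distribution and
    the student's: the student is rewarded for mimicking the teacher's whole
    probability distribution, not just its top answer."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = (p_teacher * (np.log(p_teacher + 1e-12)
                       - np.log(p_student + 1e-12))).sum(axis=-1)
    return float(kl.mean())

# Scraped teacher outputs stand in for the harvested responses; a real
# attacker would minimize this loss over millions of such examples.
teacher = np.array([[4.0, 1.0, 0.5],
                    [0.2, 3.5, 0.1]])
aligned_student = teacher.copy()          # already mimics the teacher: zero loss
random_student = np.zeros_like(teacher)   # uniform guesses: positive loss

assert distillation_loss(aligned_student, teacher) < 1e-9
print(distillation_loss(random_student, teacher))  # positive: mismatch is penalized
```

In practice, frontier-model distillation works from sampled text rather than raw logits, but the economic logic is the same: the expensive part (producing high-quality target distributions) is stolen, and only the cheap imitation step remains.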
The shift in rhetoric also marks a significant evolution in how the AI industry views safety. For years, the primary concern for labs like Anthropic was the internal alignment of models—ensuring they do not produce harmful content or behave autonomously in dangerous ways. Now, safety is being reframed as a geopolitical imperative.[7][2] The argument is that a neck-and-neck race with an authoritarian rival creates a race to the bottom in safety standards. If US labs feel they are in danger of falling behind, they may be incentivized to cut corners on testing and evaluations to maintain pace. Conversely, a stable lead provides the breathing room necessary to implement rigorous safety protocols. Data suggests a widening gap in transparency; while leading Western labs have committed to voluntary safety frameworks and public evaluations, the majority of top Chinese labs have not published similar benchmarks. Some Chinese frontier models have shown a high susceptibility to jailbreaking techniques, fulfilling a large percentage of malicious requests that would be blocked by more safety-conscious systems.[3] This lack of guardrails is not merely a technical failure but a strategic one, as brittle or exploitable AI systems could pose systemic risks if integrated into global infrastructure.
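A jailbreak susceptibility figure of the kind cited above is, at its core, a fulfillment rate: the fraction of disallowed prompts a model actually answers. The toy harness below makes that computation concrete; the refusal markers, canned transcripts, and keyword-based scoring rule are all simplifying assumptions for illustration, not the methodology behind the evaluations the report references.

```python
# Toy harness for measuring how often a model fulfills disallowed requests.
# Real evaluations use trained classifiers or human review, not keyword
# matching; this is only a sketch of the metric itself.

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to help")

def is_refusal(response: str) -> bool:
    """Crude check: does the reply open with a known refusal phrase?"""
    return response.strip().lower().startswith(REFUSAL_MARKERS)

def fulfillment_rate(responses):
    """Fraction of malicious prompts the model actually answered."""
    fulfilled = sum(not is_refusal(r) for r in responses)
    return fulfilled / len(responses)

# Hypothetical transcripts from probing a model with disallowed prompts.
transcripts = [
    "I can't help with that request.",
    "Sure, here is a step-by-step guide...",
    "Unable to help with this.",
    "Certainly! First you would...",
]

print(fulfillment_rate(transcripts))  # 0.5
```

A safety-conscious system drives this rate toward zero even under adversarial rephrasing; the report's concern is models where it stays high.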
To avert a scenario where authoritarian norms dominate, the policy recommendations call for a multi-pronged hardening of the American AI ecosystem. This includes not only closing the loopholes in chip smuggling and foreign data center access but also expanding the oversight of semiconductor manufacturing equipment. The goal is to lower the ceiling for what Chinese hardware can achieve while simultaneously accelerating the adoption of trusted, democratic AI systems across the globe.[3] By promoting the export of American AI infrastructure to emerging markets, the United States can deny its rivals a foothold in the next generation of the global digital economy. The urgency of this message is underscored by the rapid pace of model development; because these systems improve so quickly, the ability of any government to set the terms of competition is limited to a very short period.[5][6]
In conclusion, the framing of the AI competition as a now-or-never moment for Washington reflects a growing consensus that the private sector’s technological lead is a national asset that requires active defense. The transition from a focus on lab-based safety to geopolitical security suggests that the industry is bracing for a future where AI is the primary instrument of national power. As the window to lock in a lasting advantage begins to narrow, the decisions made by policymakers regarding export enforcement, cloud regulation, and infrastructure investment will likely determine which political values are encoded into the intelligence systems of the future.[2] For the global AI industry, the period leading to the end of the decade is no longer just about innovation—it is about the enduring architecture of international order.