Anthropic bans China-, Russia-, Iran-, and N. Korea-controlled firms from its AI.
Anthropic bars firms majority-controlled from China, Russia, Iran, and North Korea, casting private AI companies as geopolitical gatekeepers in an escalating tech rivalry.
September 5, 2025

In a significant move underscoring the escalating role of artificial intelligence in global geopolitics, AI safety and research company Anthropic has updated its terms of service to prohibit the use of its powerful Claude AI models by companies that are majority-controlled by entities in China, Russia, Iran, and North Korea. Citing national security risks, Anthropic says the new policy is intended to prevent advanced AI capabilities from being leveraged by authoritarian regimes for adversarial military and intelligence objectives. The decision is one of the most direct actions yet taken by a major AI developer to restrict access to its technology on the basis of national origin and corporate control, placing the firm at the vanguard of an industry grappling with its responsibilities in an era of technological rivalry.
The updated policy explicitly bars any organization, regardless of where it is headquartered, from using Anthropic's commercial services if more than 50 percent of its ownership traces back to one of the four specified nations.[1] Anthropic stated that while direct access from these countries was already restricted for legal and security reasons, some groups were circumventing those restrictions through subsidiaries incorporated in other nations.[1] The company warned that when such entities access its services, they could be compelled to develop applications that ultimately serve "adversarial military and intelligence services and broader authoritarian objectives."[1] This proactive step goes beyond typical compliance with government sanctions and export controls, representing a voluntary corporate action to close what Anthropic regards as a critical security loophole. The company acknowledged the policy carries a significant financial cost: one executive told the Financial Times the hit to revenue would be in the "low hundreds of millions of dollars," and reports indicated that major Chinese technology firms such as ByteDance, Tencent, and Alibaba could be among those affected.[2][3]
This decision by Anthropic is set against a backdrop of increasing U.S. government concern over the national security implications of advanced AI.[2][4] For years, Washington has been tightening export controls on high-performance semiconductors and other hardware essential for training large-scale AI models, in an effort to slow China's technological advancement.[5] Anthropic's service-level ban represents a significant expansion of this battlefront into the software and access layer of the AI stack. The company's leadership has positioned the move as an alignment with the interests of democratic nations and a contribution to U.S. leadership in the field.[2] The underlying concern is that sophisticated AI models like Claude could be put to a range of dual-use applications, from developing advanced weaponry and conducting cyberattacks to creating sophisticated propaganda and surveillance tools.[6][7] By implementing this restriction, Anthropic is effectively acting as a private gatekeeper of powerful technology, illustrating how leading AI firms are becoming influential geopolitical actors in their own right.[4]
Anthropic's firm stance on access distinguishes it within the competitive AI landscape and reinforces its public image as a company prioritizing safety and responsible development. Founded in 2021 by former OpenAI executives, Anthropic has consistently emphasized its commitment to building safe and steerable AI systems.[3] This ban can be seen as a direct application of that safety-first philosophy to the complex realm of international relations. While competitors like OpenAI also restrict access in China and other nations, Anthropic's specific targeting of majority-controlled subsidiaries is a more aggressive and comprehensive measure designed to close off indirect access routes.[3] The move sets a potential precedent for the industry, raising questions about whether other major U.S.-based AI providers will adopt similar policies as the capabilities of their models grow.[4] This could lead to a further splintering of the global AI ecosystem, creating a digital divide dictated not just by infrastructure, but by political ideology and national security alliances.
Anthropic's decision to bar companies controlled by entities from China, Russia, Iran, and North Korea is a watershed moment in the governance of artificial intelligence. It reflects a maturing industry in which private firms make proactive, high-stakes decisions based on national security and geopolitical considerations rather than solely reacting to government mandates. While the long-term effectiveness and economic consequences of this self-imposed restriction remain to be seen, it unequivocally signals a new phase in the global tech rivalry. The move highlights the immense power concentrated within a few AI labs and forces a broader conversation about who should control access to transformative technologies, and on what grounds, a conversation that will shape the trajectory of AI development and its impact on international security.