Anthropic Cuts OpenAI's Claude Access, Citing Rival AI Training

Anthropic accuses OpenAI of violating its terms of service, blocking Claude access amid GPT-5 anticipation in an escalating AI rivalry.

August 2, 2025

In a significant escalation of tensions between two of the world's leading artificial intelligence firms, Anthropic has blocked OpenAI's access to its Claude series of AI models. The move, which Anthropic attributes to a breach of its terms of service, comes just as OpenAI is widely rumored to be preparing for the launch of its highly anticipated GPT-5 model. This decision highlights the increasingly competitive and fraught landscape of AI development, where the lines between collaboration, benchmarking, and competitive encroachment are becoming ever more blurred.
The core of the dispute lies in Anthropic's allegation that OpenAI was using its Claude models, particularly the highly regarded Claude Code, to develop and refine its own competing products.[1][2][3] Anthropic's commercial terms of service explicitly prohibit using its tools to build or train rival AI models.[4][5] According to reports, OpenAI's technical staff were leveraging Claude's API, not through the standard chat interface, but via special developer access integrated into their internal workflows.[6][4] This allowed them to conduct extensive testing, comparing Claude's performance against their own models in areas like coding, creative writing, and, crucially, safety responses to sensitive prompts.[2][6][4] Anthropic claims this use crossed the line from standard industry benchmarking into a direct violation of their contractual agreement.[2] An Anthropic spokesperson stated that while Claude Code has become a popular tool for coders globally, its use by OpenAI's team constituted a clear breach of their service agreement, which forbids using their tools to create rival products.[1]
In response to the block, OpenAI has expressed disappointment, framing its actions as standard industry practice for evaluating competitor systems to advance the field and improve safety protocols.[1][4] OpenAI's chief communications officer, Hannah Wong, acknowledged the use of Claude for benchmarking but stressed that it is a common and necessary process for gauging progress and enhancing safety measures.[1][4] Wong also pointed out the irony that OpenAI's own API remains accessible to Anthropic.[1][4] Despite the cutoff, Anthropic has indicated it will continue to permit OpenAI access for the specific purposes of benchmarking and safety evaluations, though it has not clarified how this will be managed given the current restriction.[2][6] This move has been interpreted by some as a strategic public relations maneuver by Anthropic, positioning Claude as a superior model that even its chief competitor relies on.[7]
This is not the first time Anthropic has taken defensive measures to protect its proprietary technology. Just a month prior to blocking OpenAI, the company limited access for the AI coding startup Windsurf amid rumors of a potential acquisition by OpenAI, a deal that ultimately did not happen.[1][6] Anthropic's chief science officer had stated at the time that selling Claude to OpenAI wouldn't make sense, underscoring their commitment to keeping the model independent.[1] The recent action against OpenAI also follows Anthropic's implementation of stricter usage limits on its Claude Code service, citing a surge in demand and violations of its terms of service.[2] This series of events suggests a broader strategy to safeguard its intellectual property as the AI arms race intensifies. The practice of revoking API access from competitors is a well-established, if contentious, tactic in the technology sector, with historical precedents involving major players like Facebook and Salesforce.[1][2][6]
The implications of this development are significant for the AI industry, raising critical questions about the balance between competition and collaboration. While benchmarking is widely accepted as necessary for progress, the use of a competitor's live model for development purposes pushes the boundaries of ethical and legal conduct. The dispute also underscores the growing importance of enterprise adoption and the different business strategies being pursued by the two companies. While OpenAI has a strong consumer-facing subscription model, Anthropic has focused heavily on enterprise clients and API access, a strategy that has seen it gain significant market share in the enterprise sector.[8][9][10] This conflict could further accelerate the trend of companies building more "walled gardens" to protect their innovations, potentially slowing the collaborative spirit that has fueled much of the recent progress in AI. As OpenAI prepares to launch GPT-5, the loss of access to Claude as a benchmark and development tool could have unforeseen impacts on its final capabilities, particularly in the increasingly vital domain of coding. The incident serves as a stark reminder that as AI models become more powerful and commercially valuable, the competitive landscape will only become more complex and litigious.
