OpenAI grants EU regulators model access while Anthropic blocks inspection of its Mythos system
OpenAI’s transparency and Anthropic’s secrecy test the EU’s power to regulate frontier models under the new AI Act
May 11, 2026

The implementation of the European Union’s landmark AI Act is entering a critical and legally complex phase as regulators in Brussels attempt to move from high-level policy to hands-on enforcement. While the legislative framework for governing artificial intelligence was designed to set a global gold standard for safety and transparency, its effectiveness now hinges on a more practical challenge: gaining access to the black-box systems of the world’s leading AI laboratories. In recent weeks, this struggle has been exemplified by the divergent approaches of OpenAI and Anthropic.[1][2] While the former has proactively invited European regulators to inspect its latest cybersecurity-focused model, the latter has remained notably guarded, highlighting a persistent vulnerability in the EU's regulatory architecture.[1] The situation underscores an uncomfortable reality for the European Commission’s newly established AI Office: even with the law on their side, regulators currently depend on the voluntary cooperation of the very companies they are tasked with overseeing.[1]
The divide became public following a series of briefings by the European Commission, which revealed that OpenAI has offered direct access to its new GPT-5.5 Cyber model for an intensive security review. This offer is part of what the company calls its EU Cyber Action Plan, an initiative led by George Osborne, the former British finance minister who now heads OpenAI’s global diplomatic outreach. By granting the Commission and its cybersecurity agency, ENISA, the ability to test the model’s capabilities in a controlled environment, OpenAI is positioning itself as a transparent partner rather than a defensive adversary. The GPT-5.5 Cyber model is specifically designed to be more permissive for authorized security workflows, such as malware analysis and penetration testing, allowing vetted defenders to use the system for defensive research without triggering the safety filters that constrain the public version of the model.[3] This proactive engagement is seen as a strategic move by OpenAI to influence the technical standards and "Codes of Practice" that will eventually define how systemic risk is measured under the AI Act.
In contrast, the European Commission’s attempts to secure similar access to Anthropic’s state-of-the-art model, codenamed Mythos, have reached a stalemate.[1][2][4][5] Despite four to five high-level meetings between Anthropic executives and EU officials, the Commission has yet to be granted any form of hands-on access to the model.[1][6] Mythos has become a lightning rod for regulatory concern due to its unprecedented ability to identify and exploit zero-day software vulnerabilities.[7] Anthropic’s own research suggests the model can find high-severity flaws across major operating systems and web browsers at a depth and speed that surpasses most human researchers. Citing these extreme capabilities, Anthropic has severely restricted access to the model through its Project Glasswing, an initiative that provides early preview access only to a handful of trusted American tech giants and the United Kingdom’s AI Security Institute.[8] This exclusionary approach has left European regulators, including those from the European Central Bank and major cybersecurity agencies, effectively frozen out of the evaluation process for a tool that could theoretically destabilize European financial and critical infrastructure.
The disparity in access between Brussels and London has also injected a geopolitical dimension into the debate. While the UK’s AI Security Institute was permitted to perform red-teaming on the Mythos model, the European Union—a far larger market with a much more comprehensive regulatory framework—finds itself still knocking at the door. This gap has raised pointed questions about digital sovereignty and the "Brussels Effect." If the EU cannot gain access to the frontier models being developed in Silicon Valley, its ability to mitigate "systemic risks" as mandated by Article 55 of the AI Act is severely compromised. The law requires providers of general-purpose AI models with systemic risk to conduct adversarial testing and report on their mitigation strategies, but without the technical means to verify these claims independently, the AI Office risks becoming a registry of self-reported data rather than a true oversight body.
Furthermore, the industry’s resistance to transparency is rooted in a deep-seated fear of exposing trade secrets and ceding competitive advantage. For companies like Anthropic, the Mythos model represents a significant leap in "agentic" capabilities—the ability of an AI to reason through multi-step tasks and interact with external software environments. These capabilities are the crown jewels of modern AI development, and the risk of intellectual property theft or the exposure of sensitive model weights is a primary concern. However, European officials argue that the public safety risks associated with these models are too high to be left entirely to internal corporate reviews. Dutch lawmaker Kim van Sparrentak and other members of the European Parliament have characterized Anthropic’s absence from recent security hearings as "extremely worrying," suggesting that the current reliance on corporate goodwill is a structural flaw that may require further legislative or executive intervention.
The consequences of this impasse extend far beyond the immediate technical reviews of two specific models. If the EU fails to establish a reliable pipeline for model access, it could set a precedent where frontier labs choose to bypass the European market for their most advanced "pro" or "cyber" variants. Already, the White House has considered mandatory pre-release reviews for certain models, and the lack of a synchronized international standard for model inspection is creating a fragmented landscape for global AI governance. For the AI industry, the current friction in Brussels serves as a warning: the era of voluntary safety pledges is rapidly giving way to a period of mandatory verification. Companies that refuse to provide regulators with "the keys to the door" may find themselves facing not only multi-billion euro fines under the AI Act but also potential bans on the deployment of their most capable systems within the European market.
Ultimately, the success of the EU’s regulatory ambitions depends on building a bridge between proprietary interests and the public’s right to safety. The AI Office is currently in a race to develop its own technical capacity, including the hiring of specialized red-teaming experts and the procurement of the massive compute resources needed to run independent evaluations. However, as long as the underlying model architectures remain behind closed doors, the regulators are essentially flying blind. The coming months will be a defining period for the AI Act as the August 2026 deadline for full compliance approaches. Whether other companies follow OpenAI’s lead in proactive transparency or Anthropic’s model of cautious restriction will determine if the European Union can truly hold the world’s most powerful technologies accountable or if the frontier of AI will remain a territory beyond the reach of the law.