EU Sets Global AI Standard, Forces AI 'Black Boxes' Open
Europe's new AI Act and code usher in a global standard, forcing developers to open 'black boxes' for transparent and accountable AI.
July 11, 2025

The European Union is setting a new global standard for artificial intelligence oversight with the recent finalization of a voluntary code of practice, a document that signals a significant shift in how AI developers will be required to document and justify their technology. Ahead of the landmark EU AI Act's rules for general-purpose AI (GPAI) models taking effect in August 2025, the bloc has introduced a detailed "Model Documentation Form" that demands a level of transparency from AI providers reminiscent of exhaustive financial disclosures.[1][2] This comprehensive framework, developed through a multi-stakeholder process involving nearly 1,000 participants, aims to prepare the industry for the world's first major set of legally binding AI regulations.[3][1] While the code is voluntary, adhering to it is positioned as a streamlined path to compliance, offering "reduced administrative burden and increased legal certainty" to those who sign on.[4][5]
The new code of practice is structured into three main chapters addressing transparency, copyright, and safety.[3][6] The transparency and copyright chapters apply to all providers of general-purpose AI models.[6] The transparency chapter, in particular, introduces the "Model Documentation Form," a tool designed to help companies fulfill their documentation duties under the AI Act.[7][6] The form requires providers to disclose a wide array of information, including the model's architecture, its number of parameters, its intended tasks, its acceptable use policies, and the modalities of its inputs and outputs.[8][9] Companies must also provide detailed descriptions of the training process, the data used for training, testing, and validation, and the computational resources consumed, including the model's known or estimated energy consumption.[8][9] This level of required detail aims to give downstream providers who integrate these models into their own systems the information they need to meet their own obligations under the AI Act.[7][10]
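To make the form's scope concrete, the sketch below models the disclosure categories named above as a simple Python data structure. This is purely illustrative: the field names, types, and example values are assumptions based on the categories the transparency chapter describes, not the official form's schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of the disclosure categories described in the code's
# transparency chapter. Field names are illustrative assumptions, not the
# official Model Documentation Form schema.
@dataclass
class ModelDocumentationForm:
    model_name: str
    architecture: str               # e.g. "decoder-only transformer"
    parameter_count: int
    intended_tasks: list[str]
    acceptable_use_policy: str      # summary of, or link to, the policy
    input_modalities: list[str]     # e.g. ["text", "image"]
    output_modalities: list[str]
    training_process: str           # description of the training methodology
    training_data_description: str  # data used for training, testing, validation
    training_compute_flops: float   # total computational resources consumed
    energy_consumption_kwh: float   # known or estimated energy consumption

# Example entry for a hypothetical mid-sized model.
doc = ModelDocumentationForm(
    model_name="example-gpai-7b",
    architecture="decoder-only transformer",
    parameter_count=7_000_000_000,
    intended_tasks=["text generation", "summarization"],
    acceptable_use_policy="https://example.com/acceptable-use",
    input_modalities=["text"],
    output_modalities=["text"],
    training_process="web-scale pretraining followed by instruction tuning",
    training_data_description="curated web corpus with held-out test and validation splits",
    training_compute_flops=4.2e23,
    energy_consumption_kwh=1.1e6,
)
print(doc.model_name, f"{doc.training_compute_flops:.1e} FLOPs")
```

Even in this reduced form, the breadth of the fields shows why the form has been compared to financial disclosure: a single filing spans architecture, policy, data provenance, and resource consumption.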
A significant portion of the new framework is dedicated to models identified as posing "systemic risk."[11] These are typically the most advanced and powerful AI systems, defined by the AI Act as those whose cumulative training compute exceeds 10^25 floating-point operations (FLOPs).[11][8] The Safety and Security chapter of the code sets out specific, more stringent requirements for these high-risk models.[6] Providers of such models must establish a comprehensive safety and security framework that covers identifying, analyzing, and mitigating systemic risks.[11][12] They are also required to conduct adversarial testing, often referred to as red-teaming, to probe for vulnerabilities and potential misuse, such as the ability to generate harmful content or spread disinformation.[1][9] Furthermore, these providers must report serious incidents to the newly established EU AI Office and assign clear responsibility for risk management within their organizations.[11][12] The intention is to ensure that the most capable AI models undergo rigorous scrutiny before and during their deployment on the European market.[13]
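The 10^25 FLOP threshold is a concrete, computable trigger. As a rough illustration, the snippet below estimates training compute with the widely used ~6 × parameters × tokens approximation for dense transformers and compares it to the threshold. The heuristic and the example figures are assumptions for illustration; the AI Act sets the threshold but does not prescribe an estimation method.

```python
# The AI Act's systemic-risk trigger: cumulative training compute above
# 10^25 floating-point operations.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute for a dense transformer using the
    common ~6 * parameters * tokens heuristic (an assumption here; the
    AI Act does not prescribe any particular estimation method)."""
    return 6.0 * n_params * n_tokens

def crosses_systemic_risk_threshold(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 15T tokens:
flops = estimated_training_flops(70e9, 15e12)          # ~6.3e24 FLOPs
print(f"{flops:.1e}", crosses_systemic_risk_threshold(70e9, 15e12))   # below threshold

# A hypothetical 400B-parameter model on the same data crosses it:
print(crosses_systemic_risk_threshold(400e9, 15e12))   # True
```

By this estimate, a 70-billion-parameter model trained on 15 trillion tokens lands at roughly 6.3 × 10^24 FLOPs, just under the line, while substantially larger frontier-scale training runs cross it and take on the full systemic-risk obligations.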
The rollout of the AI Act and its accompanying code of practice is not without its complexities and controversies. The regulations apply to any provider placing a general-purpose AI model on the EU market, irrespective of where the provider is located.[7] This extraterritorial reach is similar to the EU's General Data Protection Regulation (GDPR) and positions the AI Act to become a global benchmark.[14] However, the phased implementation has caused some confusion.[15] While the rules for GPAI models take effect in August 2025, enforcement by the AI Office will begin a year later for new models and two years later for existing ones.[3][5] Despite calls from some in the tech industry to delay implementation due to concerns about compliance costs and complexity, the European Commission has firmly stated that the timeline will not be paused.[16][17] The process of drafting the code itself was fraught with debate, with industry groups arguing for less restrictive rules and civil society advocates pushing for stronger safeguards.[4]
In conclusion, the EU's comprehensive approach to regulating AI through the AI Act and the new code of practice represents a pivotal moment for the technology. By requiring deep transparency through mechanisms like the Model Documentation Form, the EU is forcing AI providers to open up their "black boxes" and be accountable for how their systems are built and operate.[18][19] The voluntary nature of the code provides a collaborative route to compliance, encouraging early adoption of best practices through initiatives like the AI Pact.[20][21][22] However, the stringent requirements, particularly for high-risk systems, and the unyielding implementation timeline present significant challenges for the industry.[14][17] As the world watches, the success of the EU's ambitious regulatory experiment will depend on its ability to foster trustworthy innovation while avoiding the creation of a framework so rigid that it stifles the very technological advancement it seeks to govern.[14][23]