Europe establishes the first global technical standard for mandatory AI security
ETSI EN 304 223 provides the mandatory technical blueprint for securing AI systems and achieving EU AI Act compliance.
January 15, 2026

The European Telecommunications Standards Institute (ETSI) has opened a new chapter in global AI governance with ETSI EN 304 223, the first globally applicable European Standard for AI cybersecurity.[1] The standard responds to growing operational reliance on machine learning with concrete, internationally recognized provisions that enterprises must now integrate into their core governance and security frameworks.[1] Developed by the ETSI Technical Committee Securing Artificial Intelligence (SAI), its formal adoption by National Standards Organisations cements its authority and immediate relevance across international markets.[1][2] By codifying a whole-lifecycle approach to AI security, the standard moves beyond abstract principles to offer a pragmatic, testable baseline for every actor in the AI supply chain, from developers and vendors to integrators and system operators.[3][2][4] Titled “Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for AI Models and Systems,” it explicitly covers advanced technologies, including deep neural networks and generative AI, ensuring the full spectrum of modern AI deployment is addressed.[3][2]
The impetus for this focused standard is the unique and evolving set of vulnerabilities in AI systems that traditional software security measures fail to fully address.[1] The document specifies numerous AI-specific threats that enterprises must actively mitigate, such as data poisoning, which manipulates training data to compromise a model’s integrity; model obfuscation; and indirect prompt injection, which can turn large language models against their intended function.[3][5] Beyond these external threats, the standard confronts operational risks, including complex data management practices and the need for audit trails covering the full lifecycle of models, datasets, and prompts.[5][4] The framework structures its requirements around 13 core principles, expanded into 72 trackable provisions across five distinct phases of the AI lifecycle: secure design, secure development, secure deployment, secure maintenance, and secure end of life.[3][4] This lifecycle-centric model ensures that security is not a bolted-on afterthought but an integral component of the system from its inception, necessitating a fundamental shift in how enterprise AI products are engineered and maintained.[6][4]
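To make that structure concrete, here is a minimal Python sketch of how an organization might track its provisions internally. The five phase names come from the standard as reported above; the principle and provision identifiers, field names, and the coding of ‘shall’/‘should’ levels are illustrative assumptions, not the standard’s own schema.

```python
# Hypothetical provision tracker for a 13-principle / 72-provision
# structure across the five lifecycle phases named in the standard.
# Identifiers and fields are invented for illustration only.
from dataclasses import dataclass, field
from enum import Enum


class LifecyclePhase(Enum):
    SECURE_DESIGN = "secure design"
    SECURE_DEVELOPMENT = "secure development"
    SECURE_DEPLOYMENT = "secure deployment"
    SECURE_MAINTENANCE = "secure maintenance"
    SECURE_END_OF_LIFE = "secure end of life"


class Level(Enum):
    SHALL = "shall"    # mandatory provision
    SHOULD = "should"  # recommended provision


@dataclass
class Provision:
    identifier: str          # e.g. "design-01" -- placeholder numbering
    phase: LifecyclePhase
    level: Level
    text: str
    satisfied: bool = False
    evidence: list[str] = field(default_factory=list)


def open_gaps(provisions: list[Provision]) -> list[Provision]:
    """Return mandatory ('shall') provisions not yet satisfied."""
    return [p for p in provisions if p.level is Level.SHALL and not p.satisfied]
```

A tracker of this shape lets a compliance team report, per lifecycle phase, which mandatory provisions still lack evidence, which is the kind of trackability the 72-provision structure invites.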
A key element of ETSI EN 304 223 is its clarity in assigning security ownership across the typically complex AI supply chain, a perennial hurdle in enterprise AI adoption.[1] The standard formally defines three primary technical roles (Developers, System Operators, and Data Custodians) and allocates concrete ‘shall’ and ‘should’ provisions to each.[3][5] Developers, who create or adapt the AI model, must focus on secure coding, due diligence on external components, and documentation of model provenance.[5] System Operators, who embed and maintain the AI system within their infrastructure, carry obligations for secure deployment, continuous monitoring, and proactive security update management.[5] The explicit introduction of the Data Custodian, the entity that controls data permissions and integrity, is particularly notable: it brings Chief Data and Analytics Officers (CDAOs) directly into the cybersecurity and compliance conversation, with strict obligations for protecting sensitive training and operational data.[1][6] In the increasingly common case where an enterprise both fine-tunes an open-source model and operates it in production, the organization holds the Developer and System Operator roles simultaneously, triggering the most stringent combined obligations for both technical security and comprehensive documentation.[1]
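As a hedged illustration of how these role assignments compose, the following sketch maps each role to obligation categories paraphrased from the descriptions above (the strings are not the standard’s own provision text) and shows how an organization holding multiple roles inherits the union of their duty sets.

```python
# Role-to-obligation mapping; obligation names paraphrase the article,
# not the standard's provision wording.
ROLE_OBLIGATIONS: dict[str, set[str]] = {
    "developer": {
        "secure coding",
        "due diligence on external components",
        "document model provenance",
    },
    "system_operator": {
        "secure deployment",
        "continuous monitoring",
        "security update management",
    },
    "data_custodian": {
        "control data permissions",
        "protect training and operational data integrity",
    },
}


def applicable_obligations(roles: set[str]) -> set[str]:
    """An organization with multiple roles inherits the union of duties."""
    return set().union(*(ROLE_OBLIGATIONS[r] for r in roles))


# An enterprise that fine-tunes an open-source model and also runs it
# holds both roles and faces the combined obligation set.
combined = applicable_obligations({"developer", "system_operator"})
```

The union semantics capture the article’s point about dual status: obligations accumulate rather than substitute when one organization plays more than one role.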
The standard is poised to become a vital technical companion to the European Union’s broader regulatory landscape, most notably the EU AI Act.[1] While the AI Act sets the high-level legal and ethical requirements, especially for high-risk systems, the ETSI standard supplies the practical, actionable technical specifications needed to achieve compliance.[3][6] The document explicitly maps its provisions to related legal and technical frameworks, including the EU AI Act and the NIST AI Risk Management Framework, positioning itself as the bridge between legislative mandate and technical implementation.[3][6] This alignment with international best practices, including ISO/IEC 27001 for information security, facilitates its global applicability and enhances its credibility as an international benchmark.[6][4] For companies deploying high-risk AI, demonstrating compliance with ETSI EN 304 223 will likely serve as crucial, documented evidence of having met the AI Act’s stringent security and risk management criteria. To ease adoption, particularly for Small and Medium-sized Enterprises (SMEs), ETSI has committed to releasing an implementation guide with practical case studies and deployment examples, an essential resource for operationalizing the standard’s 72 provisions.[4] The comprehensive, technical, and internationally backed nature of ETSI EN 304 223 solidifies its role as the de facto playbook for securing AI, signaling a maturation of the industry that prioritizes resilience and trustworthiness.[4]
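In practice, a compliance team might maintain a crosswalk from ETSI provisions to the frameworks the standard maps itself against. In this hedged sketch the provision identifier and evidence URI are invented placeholders; EU AI Act Article 15 (accuracy, robustness and cybersecurity) and the NIST AI RMF functions (Govern, Map, Measure, Manage) are real reference points.

```python
# Hypothetical crosswalk record for an internal compliance tool; the
# ETSI provision identifier and evidence URI are placeholders.
from dataclasses import dataclass


@dataclass
class CrosswalkEntry:
    etsi_provision: str            # placeholder, not the standard's numbering
    eu_ai_act_refs: list[str]      # e.g. Article 15 for high-risk systems
    nist_ai_rmf_functions: list[str]
    evidence_uri: str              # where audit evidence is stored


entry = CrosswalkEntry(
    etsi_provision="secure-deployment/monitoring-01",  # illustrative
    eu_ai_act_refs=["Art. 15 (accuracy, robustness and cybersecurity)"],
    nist_ai_rmf_functions=["Measure", "Manage"],
    evidence_uri="s3://compliance/evidence/monitoring-01/",  # hypothetical
)
```

Records of this shape would let an enterprise present its ETSI EN 304 223 evidence directly as the documented proof of AI Act security compliance that the article anticipates.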