Meta Rejects EU AI Code, Igniting Major Regulatory Standoff
Meta rejects voluntary EU AI code, sparking a regulatory battle and deepening the transatlantic innovation versus control debate.
July 18, 2025

In a significant move that underscores the growing tensions between Big Tech and European regulators, Meta Platforms has announced it will not sign the European Union's voluntary Code of Practice for artificial intelligence. The social media giant, parent company to Facebook, Instagram, and WhatsApp, cited major legal uncertainties and claimed the code imposes requirements that extend far beyond the scope of the EU's landmark AI Act.[1][2][3] This decision signals a potential fracture in the unified approach to AI governance that Brussels has been striving to build, setting the stage for a complex regulatory battle.
Meta's refusal, articulated by Chief Global Affairs Officer Joel Kaplan, centers on the belief that the EU is "heading down the wrong path on AI."[4][2] The company's objections target several core provisions of the code,[4] including stringent requirements for continuous, detailed documentation of AI systems, a prohibition on training models on pirated content, and mandatory observance of opt-out requests from content owners whose data might be used for training.[4][2] Meta argues that such measures will "throttle the development and deployment of frontier AI models in Europe" and hamper European companies seeking to build on these technologies.[4][1][5] The stance is not isolated: Meta's announcement echoes concerns from a coalition of more than 40 major European companies, including industry heavyweights such as Bosch, Siemens, and Airbus, which had previously called for a pause in the AI Act's implementation.[1][2][3]
The AI Code of Practice was developed as a voluntary, non-binding instrument to bridge the gap until the legally binding AI Act becomes fully enforceable.[6][7] The AI Act's obligations for providers of general-purpose AI (GPAI) models take effect in August 2025, but official harmonized standards may not arrive until 2027 or later.[6] The code, drafted through a multi-stakeholder process, was intended to give companies a clear, less burdensome pathway to demonstrate compliance with the incoming law.[6][8] By adhering to it, companies could gain greater legal certainty and reduced administrative overhead.[8][9][10] Those that opt out, like Meta, must find their own way to prove compliance and may face closer scrutiny from the EU's newly formed AI Office.[1][5] The code is organized into three chapters: Transparency and Copyright, which apply to all GPAI model providers, and Safety and Security, which applies only to providers of the most powerful models deemed to pose systemic risk.[6][8]
The implications of Meta's decision are far-reaching, potentially fracturing the landscape of AI regulation in Europe. Although the code is voluntary, rejection by a player of Meta's size could undermine its authority and its value as an industry benchmark.[1] The European Commission has maintained that companies choosing not to sign will be subject to closer monitoring to ensure they meet the AI Act's legal requirements.[1] That sets up a scenario in which different companies follow different compliance paths, producing a fragmented and less predictable regulatory environment. The move also highlights a transatlantic rift in regulatory philosophy, with some in Silicon Valley viewing Europe's approach as overly restrictive and a threat to innovation.[4] It contrasts with the position of other major AI developers, such as OpenAI and the French company Mistral AI, both of which have pledged to sign the code.[1][5]
As the deadline for the AI Act's initial provisions approaches, the standoff between Meta and the EU intensifies the global debate on how to best govern powerful AI technologies.[11] The Commission's stance is that the code provides a "solid benchmark" and a predictable path to compliance, while Meta and its allies argue it represents regulatory overreach that could stifle Europe's technological competitiveness.[1][12] The path forward remains uncertain. While companies that choose not to sign the voluntary code are not in immediate breach of any law, they will have to independently demonstrate their adherence to the AI Act's binding obligations when they come into force.[5] For existing models, companies have until August 2027 to comply.[3][13] The dispute underscores the fundamental challenge facing policymakers worldwide: how to foster innovation in a rapidly evolving field while simultaneously erecting guardrails to mitigate potential harms and ensure fundamental rights are protected.[11] Meta's public refusal to sign the EU's code has drawn a clear line in the sand, and the entire AI industry is now watching to see how Brussels will respond.[1]