Google, xAI Embrace EU AI Rules, Solidifying Europe's Global Tech Influence

Major tech players align with EU's voluntary AI Code, but a fractured industry debates innovation, copyright, and global governance.

July 31, 2025

In a significant move that highlights the shifting landscape of artificial intelligence regulation, tech giants Google and xAI have announced their intention to sign the European Union's General-Purpose AI Code of Practice, with xAI committing to only part of it.[1][2][3] The decisions place them alongside other major players like Microsoft and OpenAI in a growing cohort of companies committing to the voluntary framework, which serves as a precursor to the legally binding EU AI Act.[4][5] The announcements come amid a fractured industry response, with some companies embracing the EU's approach while others, notably Meta, have refused to sign, sparking a wider debate on innovation, competition, and the future of AI governance.[6][7]
The General-Purpose AI (GPAI) Code of Practice is a voluntary set of guidelines designed to help AI model providers align with the forthcoming obligations of the EU AI Act, the world's first comprehensive legal framework for artificial intelligence.[8][9] Published on July 10, 2025, the Code was developed through a multi-stakeholder process involving independent experts, academics, and industry representatives.[8][10] It functions as a transitional tool, offering companies a way to demonstrate compliance before the AI Act's rules for general-purpose AI models become fully enforceable.[11][12] By signing, companies can benefit from greater legal certainty and a potentially reduced administrative burden, as the European Commission's enforcement will focus on adherence to the Code.[8][13]
The Code is structured into three key chapters: Transparency, Copyright, and Safety and Security. The first two apply to all providers of general-purpose AI models, while the third is specifically for providers of the most advanced models that are deemed to pose systemic risks.[8][14] The transparency chapter mandates clear documentation of a model's training and capabilities, the copyright chapter offers practical solutions for complying with EU copyright law, and the safety and security chapter outlines state-of-the-art practices for managing significant risks.[12][15]
The decision by major US-based model providers like Google, OpenAI, Anthropic, and Microsoft to sign the Code signals a pragmatic engagement with European regulators.[4][5] OpenAI stated that signing reflects its commitment to providing "capable, accessible, and secure AI models for Europeans."[16] Similarly, Microsoft confirmed its signature to "further build trust in Microsoft AI models" and support the European AI ecosystem.[5] Google's President of Global Affairs, Kent Walker, expressed hope that the Code will "promote European citizens' and businesses' access to secure, first-rate AI tools."[17] However, this cooperation is not without reservations. Walker also voiced concerns that certain provisions related to copyright, approval processes, and the potential exposure of trade secrets could "chill European model development and deployment, harming Europe's competitiveness."[3][7] Elon Musk's xAI has taken a more selective approach, stating it will sign the chapter on Safety and Security while criticizing other parts of the Code and the AI Act as being "profoundly detrimental to innovation" and calling the copyright provisions an "over-reach."[2][18][19]
The Code of Practice has created a clear divide within the tech industry.[20] Meta, the parent company of Facebook, has publicly refused to sign, with its Chief Global Affairs Officer calling the code an "overreach" and stating that Europe is "heading down the wrong path on AI."[6][21] Meta argues the code introduces legal uncertainties and goes beyond the scope of the AI Act itself.[7][18] This sentiment is echoed by some European corporations that have petitioned for a two-year delay in the AI Act's full implementation, fearing that unclear guidelines could harm innovation.[7][20] Beyond the tech sector, the Code has faced severe criticism from creative industries. A coalition representing millions of creators, publishers, and performers has labeled the implementation a "betrayal" of the AI Act's original intent.[22] They argue that their feedback was largely ignored and that the final Code fails to provide meaningful protection for intellectual property rights against widespread data scraping by AI models.[22][23] This group, which includes organizations like CISAC and IFPI, contends that the measures do not strike a fair balance and primarily benefit the very AI companies whose models are built by infringing on copyright.[22]
The EU's regulatory efforts are poised to set a global benchmark for AI governance, a phenomenon known as the "Brussels effect."[20][21] The AI Act's risk-based approach, which imposes stricter rules on systems deemed to have higher risks, is the first of its kind and is being closely watched worldwide.[9] However, the implementation process has been fraught with tension, highlighting the structural power imbalances between large tech corporations and civil society.[24] Critics from civil society organizations argue that corporate interests largely prevailed in the drafting of the Code, and that the process disadvantaged groups with fewer resources.[24][10] The debate encapsulates the central challenge facing regulators globally: how to foster the immense economic potential of AI—estimated to be a €1.4 trillion annual boost to the EU's economy by 2034—while simultaneously erecting guardrails to protect fundamental rights, ensure safety, and prevent societal harm.[7][24]
In conclusion, the decisions by Google and xAI to adhere to the EU's AI Code of Practice, in whole or in part, mark a pivotal moment in the global conversation on AI regulation. These moves reflect an acknowledgment of the EU's regulatory power, even as significant concerns about the impact on innovation persist within the industry. The starkly different stances of major players like Google and Meta, coupled with fierce opposition from the creative sector, underscore the complex and contentious nature of crafting rules for this transformative technology. As the world moves toward the implementation of the binding AI Act, the effectiveness of this voluntary Code and the ongoing dialogue it provokes will be critical in shaping an AI ecosystem that is not only innovative but also trustworthy and aligned with democratic values.[24][25]
