Google Endorses EU AI Code, Warns Against Stifling Innovation
Tech giant embraces EU's voluntary AI code, highlighting the ongoing tension between regulation and innovation.
July 30, 2025

In a significant move that underscores the complex dance between technological innovation and regulation, Google has announced its intention to sign the European Union's General-Purpose AI Code of Practice. The decision aligns the tech giant with other major artificial intelligence developers, including OpenAI, with Microsoft expected to follow, in a voluntary commitment to a set of principles designed to guide the industry toward the impending mandates of the world's first comprehensive AI legislation, the EU AI Act.[1][2][3][4] While Google's endorsement lends considerable weight to the EU's regulatory efforts, it comes with a strong note of caution, reflecting an industry-wide apprehension that such rules could stifle growth and competitiveness in the rapidly evolving field of artificial intelligence.[1][5]
The General-Purpose AI (GPAI) Code of Practice is a voluntary framework published in July 2025, crafted by independent experts following extensive stakeholder consultation.[2][6][7][8] It is designed to act as a bridge, helping companies prepare for and demonstrate compliance with the legally binding EU AI Act, which entered into force in August 2024 and will see its obligations phased in over the next two years.[9][10][8][11] By adhering to the Code, which is expected to be formally endorsed by the Commission, companies can gain a degree of legal certainty and potentially face reduced administrative burdens and fewer regulatory inspections.[6][4]

The Code is structured around three core pillars. The first, Transparency, requires providers to create and maintain detailed documentation about their AI models, including the training process, and to make this information available to downstream providers and regulators.[2][6][7] The second pillar focuses on Copyright, offering practical guidance to help companies put policies in place to respect EU copyright law when sourcing training data.[2][6] The third and final pillar, Safety and Security, outlines state-of-the-art practices for managing the systemic risks associated with the most advanced and powerful AI models, a requirement that applies only to the select few systems deemed to pose a significant societal threat.[6][7]
In announcing the decision, Google's President of Global Affairs, Kent Walker, framed it as a move to support European access to "secure, first-rate AI tools," highlighting the potential for AI to boost the continent's economy by an estimated €1.4 trillion annually by 2034.[1][12][13] This qualified endorsement, however, came with explicit reservations. Walker warned that both the AI Act and the Code of Practice "risk slowing Europe's development and deployment of AI."[1][2][3] The company's concerns center on three areas: potential departures from established EU copyright law, administrative hurdles that could slow approvals for new technologies, and transparency requirements that might force the exposure of valuable trade secrets.[1][5][14] These concerns echo a broader sentiment within the tech industry that overly prescriptive regulations could "chill" innovation and harm Europe's global competitiveness. Google has committed to working with the EU's newly formed AI Office to ensure the Code's application is "proportionate and responsive" to the dynamic nature of AI development.[1][5]
Google's decision places it alongside other key players in the AI field, such as OpenAI, Anthropic, and the French startup Mistral, which have already committed to the Code.[3][4] Microsoft has also indicated it is likely to sign on.[3][14] This growing coalition of signatories adds significant momentum to the EU's approach. However, the industry is not monolithic in its acceptance. Meta, the parent company of Facebook and Instagram, has publicly declined to sign the Code.[2][3][14] Meta's chief global affairs officer, Joel Kaplan, argued that the voluntary rules introduce "legal uncertainties" and extend "far beyond the scope of the AI Act," and claimed they would ultimately throttle AI development in Europe.[14] The split reveals a fundamental schism in the tech world over the best path forward for AI governance, pitting those who see collaborative, voluntary frameworks as a constructive way to shape future regulation against those who view them as regulatory overreach by another name.[15]
The Code of Practice is part of a broader, multi-pronged strategy by the European Union to establish itself as a global leader in AI regulation.[11] That strategy includes the AI Act itself, which takes a risk-based approach, imposing stricter rules on systems deemed to pose a higher risk to safety or fundamental rights.[15][16] Alongside the Act and the Code, the Commission has launched the AI Pact, another voluntary initiative that encourages companies from all sectors to begin implementing key principles of the AI Act ahead of their legal deadlines.[17][18][9][19] Core commitments under the AI Pact include developing an internal AI governance strategy, mapping systems likely to be classified as high-risk, and promoting AI literacy among staff.[18][20][9][21] Over 100 companies, including major tech firms, telecoms, and manufacturers, have signed on to the AI Pact, signaling broad industry engagement with the EU's legislative direction.[17][18][20] Through these overlapping initiatives, Brussels aims to foster a collaborative environment, sharing best practices and preparing industry for the legal realities to come, while promoting its vision of a "human-centric" and trustworthy AI ecosystem.[9][22][16]
In conclusion, Google's decision to sign the EU's General-Purpose AI Code of Practice represents a critical, if carefully worded, vote of confidence in Europe's ambitious regulatory project. It demonstrates a willingness among leading technology developers to engage constructively with policymakers to establish guardrails for a transformative technology. Yet, the persistent concerns voiced by Google and the outright refusal to participate by others like Meta reveal a deep-seated tension that will define the next chapter of AI development.[5][14] The central challenge remains balancing the urgent need for safety, transparency, and accountability with the desire to foster the rapid innovation that promises significant economic and societal benefits.[23] As the provisions of the AI Act begin to take effect, the world will be watching to see if the European model can successfully navigate this complex terrain, setting a global standard for responsible AI without grounding the technological race before it has truly taken flight.[4]