Europe's AI Act Stifles Innovation as the Bloc Risks Falling Behind China
Brussels' pioneering AI regulation struggles to balance ethics with innovation, leaving a widening gap as the US and China surge ahead.
October 1, 2025

A sense of urgency is palpable in Brussels, not just within the corridors of the European Commission, but also in the tech boardrooms that see Europe at a critical juncture in the global artificial intelligence race. Google’s President of Global Affairs, Kent Walker, recently articulated this concern at the Competitive Europe Summit, cautioning that the European Union is lagging behind competitors, most notably China, in AI adoption.[1] Walker's remarks highlight a growing apprehension that while the EU has pioneered comprehensive AI regulation, its risk-averse approach might be inadvertently stifling the very innovation it needs to compete on the world stage, creating a gap that China is aggressively filling.
The data paints a stark picture of Europe's position in the global AI landscape. While the EU has been a leader in setting regulatory standards with its landmark AI Act, it trails significantly in both private investment and corporate adoption compared to the United States and China.[2][3][4] Between 2018 and the third quarter of 2023, EU companies attracted approximately €32.5 billion in AI investment, a figure dwarfed by the more than €120 billion invested in their US counterparts.[2][5][3] The disparity is even more pronounced in private investment for 2023 alone, when the US led with €62.5 billion, compared with €9 billion for the EU and UK combined and €7.3 billion for China.[5] This investment gap is widening, particularly with substantial US funding flowing into major players like OpenAI and Anthropic.[2][5] The consequences are tangible: in 2023, US institutions produced 61 notable AI models, far outpacing the EU's 21 and China's 15.[6] A 2023 McKinsey survey further underscores the adoption deficit, finding that 40% of North American companies had adopted generative AI in at least one business function, compared with just 30% in Europe.[7][8] This slower uptake could carry significant economic repercussions, limiting Europe's ability to capture the productivity gains AI promises, which some estimates place at an additional €600 billion in gross value added to the European economy by 2030 if adoption accelerates.[9]
In stark contrast to the EU's regulatory-focused approach, China has implemented a comprehensive, state-driven strategy to become the world leader in AI by 2030.[10][11][12] Beijing's "New Generation Artificial Intelligence Development Plan," launched in 2017, outlines a multi-pronged approach that leverages public-private partnerships, substantial government funding, and the cultivation of a massive domestic talent pool.[10][13] China's strategy prioritizes market dominance, aiming to accelerate the commercialization of AI technologies and secure a competitive advantage.[10] The government actively steers private-sector development through massive investment vehicles: more than 2,100 "government guidance funds," established by 2022 with a combined target size of $1.86 trillion, seed AI firms.[13] This national strategy also sets ambitious targets for mass adoption, aiming for 90% of the population to be using AI by 2030.[14] The results of this concerted effort are evident: China is rapidly closing the quality gap with top-tier AI models and leads the world in AI patent applications and scientific publications.[15][16][17][18]
At the heart of the debate within Europe is the AI Act, the world's first comprehensive law on artificial intelligence.[19] The legislation adopts a risk-based framework, imposing stricter rules on AI systems deemed "high-risk."[20][21][22] While lauded for its focus on creating a trustworthy and human-centric AI ecosystem, the Act has also drawn considerable criticism from industry players and even some member states.[20][23][24] Concerns have been raised that its broad definition of AI, its complexity, and its potential for overregulation could stifle innovation, particularly for startups and SMEs that may struggle with the high costs of compliance.[20][9][23] Critics argue that the legislation, while well-intentioned, could create significant barriers to entry and slow the development and deployment of new AI technologies in the bloc.[24] Walker and other industry leaders have advocated a "smarter" regulatory approach, one that balances safety and ethics with the need to foster a pro-innovation environment.[1] The call is not for deregulation but for a more flexible framework that holds actors accountable without micromanaging technological progress.[24]
The path forward for Europe involves a delicate balancing act. The continent's leaders must navigate between upholding their commitment to ethical, rights-respecting AI and creating an environment where businesses can innovate and scale rapidly. Failing to accelerate AI adoption could mean missing out on projected productivity growth of up to 3% annually through 2030 and ceding technological leadership to global rivals.[7][25][8] Initiatives like the EU's "InvestAI" program, which aims to mobilize €200 billion, signal an awareness of the investment gap, but sustained commitment and strategic execution are critical.[2][26] As Google's Kent Walker implied in Brussels, the challenge is not just about writing the rules of the road for AI, but also about ensuring Europe has a fleet of competitive vehicles in the race. The decisions made now will determine whether the EU becomes a leading player in the AI-powered global economy or a highly regulated, yet technologically lagging, bystander.