European Union delays landmark AI Act deadlines to boost competitiveness and resolve technical bottlenecks
Brussels delays high-risk AI rules until 2028 to protect industry growth while fast-tracking new bans on harmful content
May 7, 2026

The European Union has entered a pivotal phase in its effort to govern the rapidly evolving field of artificial intelligence, opting for a significant recalibration of its regulatory timeline.[1][2][3][4] Faced with the immense technical complexity of its own landmark AI Act and mounting pressure to maintain regional competitiveness, EU negotiators have reached a political agreement on a legislative package known as the Digital Omnibus on AI. The agreement effectively shifts the goalposts for the law's most stringent requirements, pushing back compliance deadlines for high-risk AI systems from the summer of 2026 to late 2027 and 2028.[5][2][6][3][4] While the European Commission frames the move as a step toward a more innovation-friendly environment, it also amounts to a pragmatic admission that the infrastructure required to enforce such a comprehensive legal framework is not yet ready for a broad rollout.
The core of this regulatory shift lies in the deferral of obligations for high-risk AI systems, which are applications deemed to have significant potential to impact health, safety, or fundamental rights. Under the newly agreed terms, standalone high-risk systems—such as those used in biometrics, critical infrastructure, education, employment, and law enforcement—will now see their compliance deadline moved to December 2, 2027.[4] For AI systems embedded as safety components in regulated products like medical devices, toys, and industrial machinery, the implementation window has been extended even further to August 2, 2028. This delay was necessitated by a bottleneck in the development of the technical standards and harmonized specifications that businesses need to demonstrate compliance. Without these "blueprints" for safety and accuracy, which are currently being drafted by standard-setting bodies like CEN-CENELEC, companies and regulators alike faced a "regulatory cliff" in August 2026 that threatened to paralyze the market.
While the EU is easing the timeline for most high-risk applications, it has simultaneously moved to tighten prohibitions on specific, highly harmful uses of the technology.[7] The Digital Omnibus on AI introduces an explicit and rigorous ban on "nudification" apps—AI systems designed to generate non-consensual sexually explicit content or child sexual abuse material.[8][7][4] This provision targets tools that digitally "undress" individuals in images or manipulate their likeness into intimate scenarios without consent, a practice that has surged in recent years. The ban is set to take effect on a shorter timeline than the high-risk requirements, with companies expected to bring their systems into compliance by December 2, 2026.[9][4] This targeted enforcement reflects a political priority within the European Parliament to protect human dignity and combat the weaponization of generative AI against women and children, even as broader industrial rules are delayed.
The transparency obligations of the original AI Act remain largely on track, standing as the primary exception to the wave of delays. The requirement for companies to label deepfakes and identify AI-generated text is still slated to take effect on August 2, 2026.[10] This means that providers of generative AI must ensure their outputs are marked in a machine-readable format and are detectable as artificially created or manipulated.[11][12][13][14] However, there is a notable nuance in the implementation of these rules; the requirement for labeling AI-generated text is expected to apply primarily to content that is produced in a fully automated manner without substantial human editorial oversight.[10] This distinction is intended to prevent the over-regulation of professional tools used by journalists and creative writers while still addressing the risks of large-scale, AI-driven disinformation campaigns.
A major driver behind the simplification effort is the desire to protect the economic viability of Europe’s small and medium-sized enterprises (SMEs). The Digital Omnibus extends significant relief to businesses with up to 750 employees and a turnover of up to 150 million euros, a definition that now includes "small mid-caps." These companies will benefit from reduced registration and documentation burdens, as well as prioritized access to regulatory sandboxes.[10] These sandboxes are controlled environments where developers can test their AI systems under the supervision of regulators before bringing them to market. By lowering the "cost of compliance," EU policymakers hope to prevent the AI Act from inadvertently becoming a barrier that drives startups out of the European market and into less regulated jurisdictions like the United States or China.
However, the decision to delay the implementation of high-risk rules has sparked criticism from civil rights advocates and some legal experts who fear it creates a dangerous regulatory vacuum. Because the AI Act is generally not retroactive, systems placed on the market before the new 2027 and 2028 deadlines may be "grandfathered" in, remaining exempt from the law’s strictest requirements unless they undergo a substantial modification.[2] Critics argue this could trigger a "race to market" where companies rush to deploy potentially flawed or biased systems before the window closes.[2] There are also concerns that by the time the most rigorous rules apply in 2028, the underlying technology will have evolved so significantly that the current regulatory categories may already be obsolete.
The implications for the global AI industry are profound, as the European Union has long been seen as the world's "first mover" in tech regulation. For years, the "Brussels Effect" has led global firms to adopt EU standards as their baseline to ensure access to the bloc's 450 million consumers. By delaying and simplifying its rules, the EU is signaling a shift from a "regulate first" posture toward a more cautious strategy that prioritizes industrial competitiveness and implementation feasibility. For multinational tech companies, the delay offers much-needed breathing room to overhaul their internal governance structures, but it also introduces new uncertainties. Companies must now navigate a fragmented timeline where different rules—ranging from deepfake labeling to high-risk assessments—kick in at intervals over the next four years.[8]
Ultimately, Europe's answer to the complexity of AI regulation is a tactical retreat designed to ensure the long-term survival of its legislative model. The original ambition of the AI Act remains intact, but the path to enforcement has been stretched out to accommodate the reality of a world where technological progress moves faster than the bureaucratic processes of the state. As the EU waits for technical standards to be finalized and for national watchdogs to be fully staffed, the global community will be watching to see if this delay fosters a more stable innovation ecosystem or if it simply postpones the inevitable tensions between the desire for safety and the drive for technological dominance. For now, the message from Brussels is clear: the rules are coming, but the complexity of the task means the era of comprehensive AI oversight will start later than originally promised.