The EU Artificial Intelligence Act (EU AI Act) is the world’s first comprehensive regulatory framework for AI, adopted in May 2024 and in force since August that year. Its approach is famously risk-based: it bans ‘unacceptable risk’ systems outright, imposes strict controls on ‘high-risk’ systems in sensitive fields such as healthcare or hiring, and sets transparency rules for ‘limited-risk’ AI. Minimal-risk systems remain largely untouched.
The Act acknowledges its own complexity. It sets staggered compliance dates stretching from 2025 into 2027, with most General-Purpose AI (GPAI) obligations applying from 2 August 2025. Crucially, it also recognises that not all the required technical standards exist yet; full harmonised standards are not expected until 2027 or later. To fill this gap, Article 56 provides for Codes of Practice. Drafted collaboratively under the Commission’s AI Office, the GPAI Code of Practice is meant to guide providers through their obligations in the interim, giving them a presumption of conformity if they follow it.
A growing chorus for a ‘pause’
Despite the ambitious vision and the plans for detailed Codes of Practice, calls to ‘pause’ or delay aspects of the AI Act have grown louder in recent months as the enforcement deadline approaches. Swedish Prime Minister Ulf Kristersson publicly suggested a pause in the rollout of obligations, warning that the regulation is ‘confusing’ and potentially damaging for Europe’s competitiveness. The CEOs of Airbus and BNP Paribas, along with trade groups such as CCIA Europe – which represents tech giants including Google, Apple, Meta and Amazon – have now also urged a halt to the implementation timeline, arguing that the law is outpacing industry’s ability to comply.
At the heart of these calls lies a fear that the EU is regulating faster than it can clarify. The argument is that without clear technical guidance and operational detail, enforcement will be arbitrary, uneven or simply impossible. European startups and VCs, in a joint letter calling for a ‘stop-the-clock’ moment, warned that vague obligations and uncertain audits would deter investment, incentivise companies to leave Europe, and tilt the playing field in favour of large incumbents better able to absorb AI compliance costs.
These arguments hit a nerve partly because the law itself acknowledges the problem: without agreed technical standards or a functioning Code of Practice, regulators and providers alike risk enforcing or interpreting rules inconsistently. The fear is that an uncoordinated, under-resourced enforcement regime will punish smaller players while leaving bigger actors relatively unscathed.
Why ‘pause’ arguments deserve attention
It would be easy for defenders of the AI Act to dismiss complaints as just self-serving lobbying. Undeniably, some of the loudest voices for a pause come from Big Tech companies with vested interests in avoiding regulation altogether (and a long track record of being successful in that aim). But there is a genuine problem here too: the AI Act’s obligations for GPAI providers are undeniably broad and, in their current form, under-defined.
Take the transparency obligations, for example: providers must document training data (including copyrighted materials), explain design choices, report incidents, and disclose safety testing. For systemic-risk models, there are additional requirements around risk management, red-teaming, and cybersecurity. But how, precisely, are these tasks to be performed?
Without a shared template, providers may over- or under-comply. National regulators may interpret duties inconsistently. Auditors may lack criteria for assessment. The entire concept of a “presumption of conformity” under the Code of Practice only works if the Code itself is clear, complete, and accepted across the EU.
The fact is, as of 1st July 2025, the Code of Practice is not yet final. It has gone through multiple drafts – each incorporating feedback from industry, academia, civil society and member states. Working groups under the AI Office (in which EthicAI itself is represented) have been convened to refine four critical areas: transparency and copyright, risk assessment, technical risk mitigation, and governance. Close to 1,000 participants are involved in shaping it. But the final text is expected mere weeks before key GPAI obligations come into force.
Against this backdrop, calls to pause the AI Act’s GPAI enforcement don’t seem unreasonable. The fear is that without sufficient clarity, the Act will become a bureaucratic drag. Startups with limited compliance resources will hesitate to launch in Europe. Investors may divert funds to less regulated markets. Big Tech firms might adapt more easily, but the overall goal of a competitive, innovative AI ecosystem in Europe could be undermined.
Meanwhile, national regulators themselves are building capacity on the fly. The EU AI Office is being stood up. Resources, expertise, and consistent interpretation of the law will not emerge overnight. The danger is that rushed or uneven enforcement damages trust not just in the Act, but in the EU’s entire approach to digital regulation.
The case against any pre-emptive ‘pause’
But even these risks don’t automatically justify slamming on the brakes. A blanket ‘pause’ in implementation would undermine the credibility of the EU’s promise to lead on AI regulation. It would send exactly the wrong message at a time when the public are deeply concerned about the potential harms of unregulated AI – from bias and discrimination to misinformation, privacy breaches, and existential risk.
The AI Act isn’t simply an industrial policy tool. It is a fundamental-rights regulation, designed to protect the public from dangerous or opaque AI systems. Abandoning or delaying its core commitments would betray that mission. And the Act was deliberately written to accommodate the very problem critics highlight: its phased timeline and the requirement for a GPAI Code of Practice are explicit acknowledgements that clarity takes time. The law is flexible enough to absorb new guidance, new standards, and new technological developments. It is not frozen in time.
Rather than pausing the Act altogether, the better path is surely to let the Code of Practice process play out, then assess whether it delivers the clarity providers need. Calling for a halt now – before that work is even complete – makes less sense than waiting for the final Code to be published (imminently) and scrutinised. Only then can policymakers and industry fairly assess whether compliance is achievable, whether guidance is clear enough, and whether enforcement timelines remain realistic.
This approach doesn’t give the Act’s critics a free pass. It doesn’t assume the Code will be perfect, or that no further delays or adjustments will ever be needed. But it recognises that the current moment is premature for any decisive move.
What should the final decision hinge on?
- Whether the Code of Practice provides enough detail and usability for providers to comply without guesswork
- Whether national regulators can implement and enforce it consistently
- Whether the Code sufficiently reflects the risk profiles of GPAI systems, from small-scale chatbots to large foundation models
- Whether it avoids favouring big incumbents at the expense of SMEs and startups
- Whether it delivers the necessary public accountability for AI systems with genuine systemic risk
If those questions are answered well, there will be no justification for a pause. If they are not, a targeted, limited delay on specific obligations might be warranted. But that is a decision to make once the Code is final, not now.
The EU AI Act is, by design, adaptable. It recognises that regulation must keep pace with technology and that technical standards must mature over time. The General-Purpose AI Code of Practice is the critical bridge between the high-level law and practical, day-to-day compliance.
This isn’t about reflexively defending regulation or caving to lobbying demands. It’s about acknowledging the complexity of the task at hand. AI is too consequential a technology to leave unregulated – but too consequential to regulate sloppily. Let’s see if the Code provides the clarity everyone needs.