What Actually Happens If You Miss the EU AI Act Deadline?
August 2, 2026 is the date by which most businesses with high-risk AI systems must be compliant. But what actually happens after that date if you're not? The answer is more nuanced than "you get fined" — but also more serious than many companies assume.
Who enforces the EU AI Act?
Enforcement is decentralised across EU member states. Each country designates at least one national competent authority — a market surveillance authority — responsible for overseeing AI systems in their jurisdiction. Germany, France, Spain, Italy, and Ireland (important for US companies with EU subsidiaries) each have designated authorities. At the EU level, the European AI Office oversees enforcement for general-purpose AI models.
There is no automated penalty system. Fines are issued following an investigation — typically triggered by a complaint, an incident, or proactive market surveillance.
What enforcement actually looks like
Step 1: Investigation
A national authority receives a complaint or identifies a potentially non-compliant system through market surveillance. They contact the provider (or their authorised EU representative) and request access to technical documentation, risk management records, and evidence of conformity assessment.
Step 2: Corrective action request
Before fines, authorities typically issue a corrective action request — a notice requiring the provider to address identified violations within a specified time. First-time violations with a cooperative provider are more likely to result in a corrective order than an immediate fine.
Step 3: Enforcement action
If violations are not corrected, or if the violation is serious, authorities can:
- Issue fines (see below)
- Order the AI system withdrawn from the EU market
- Order the AI system recalled from deployers
- Issue public warnings about the non-compliant system
- Refer serious cases to criminal authorities (particularly Article 5 violations)
The fine tiers
- Prohibited practices (Article 5): Up to €35 million or 7% of global annual revenue — whichever is higher. These fines are for the most serious violations: social scoring, real-time biometric surveillance, subliminal manipulation.
- High-risk system violations (Articles 9–17): Up to €15 million or 3% of global annual revenue, whichever is higher. Missing technical documentation, no risk management system, inadequate human oversight.
- Supplying incorrect, incomplete, or misleading information to authorities: Up to €7.5 million or 1% of global annual revenue, whichever is higher.
The regulation requires that fines be "effective, proportionate and dissuasive." For SMEs, including startups, Article 99(6) caps each fine at the lower of the fixed amount and the revenue percentage, and authorities must weigh factors such as the company's size and economic situation and whether the violation was intentional or negligent. A small startup making genuine compliance efforts will not receive the same treatment as a large company knowingly deploying prohibited AI.
What about the "enforcement won't be strict at first" theory?
There is a widespread assumption that enforcement will be light in the first year while national authorities build capacity. This is probably true for minor technical violations — an incomplete section in your technical documentation, a slightly ambiguous disclosure notice.
It is not true for high-profile violations or systems that have already caused harm. If your HR AI system discriminates, if your credit scoring tool produces biased outcomes that harm EU consumers, or if your emotion recognition product is deployed in workplaces — expect enforcement to be prompt.
The EU has explicitly prioritised building enforcement capacity. The European AI Office is staffed and active. Several member states have had dedicated AI oversight units for years. This is not GDPR in 2018, where enforcement infrastructure barely existed. The infrastructure is ready.
The commercial consequences come first
For most companies, regulatory enforcement is the second threat. The first threat is commercial: enterprise customers — particularly large EU companies subject to their own deployer obligations under the AI Act — are already requiring their AI vendors to demonstrate compliance. Non-compliant vendors get dropped from vendor lists before they get fined by regulators.
RFPs, procurement questionnaires, and security assessments from EU enterprise customers increasingly include EU AI Act compliance as a requirement. A company that cannot demonstrate compliance after August 2026 will face sales friction that is immediate, concrete, and growing.
Is it too late to start now?
No — but only if you start now. With 93 days to the deadline:
- Minimal- and limited-risk compliance can absolutely be completed in time
- High-risk compliance is tight but achievable with focused effort and the right tools
- Companies that wait another 60 days will face a genuinely difficult situation
Start with a free classification at getactready.com/classify. It takes 60 seconds and tells you exactly what tier you're in and what needs to happen next. That 60 seconds is the difference between starting today and starting when it's genuinely too late.
Stay ahead of the deadline
Get EU AI Act updates, enforcement news, and compliance guides delivered to your inbox. No spam — unsubscribe any time.
Check your AI system's risk level for free
Our classifier maps your AI system against the EU AI Act in under 60 seconds. No signup required.
Classify Your AI System