
EU AI Act: Two Deadlines, Two Compliance Tracks — What to Do First

On May 7, 2026, the EU Digital Omnibus provisional agreement split the EU AI Act enforcement timeline into two distinct tracks. Companies now face a dual-deadline reality that requires different compliance work on different timelines. This guide breaks down what to prioritize and when.

The two deadlines

August 2, 2026 (83 days) — Article 50 transparency obligations. This covers chatbot disclosure, AI-generated content labeling, deepfake marking, and emotion recognition notification. This deadline was NOT extended by the Digital Omnibus.

December 2, 2027 (570 days) — Annex III high-risk AI obligations. This covers technical documentation, risk management systems, conformity assessments, human oversight, data governance, event logging, and EU database registration. This deadline was extended 16 months from the original August 2026 date.

Two other enforcement dates are already in the past: Article 5 prohibited practices (February 2025) and Chapter V GPAI model obligations (August 2025). If you haven't addressed these yet, they should be at the top of your list regardless.

Track 1: Transparency (now through August 2026)

This is your immediate priority. Article 50 applies to any AI system that interacts directly with people or generates content. In practical terms, if your product does any of the following, you have work to do before August:

  • Chatbots and virtual assistants: Users must be informed they are interacting with AI before the conversation begins. A clear, visible disclosure before the first message.
  • AI-generated text, images, audio, or video: Content must carry a machine-readable label identifying it as AI-generated. If it could reasonably be mistaken for human-created content, it needs marking.
  • Deepfakes: Any AI-generated or manipulated content depicting real people must be clearly disclosed.
  • Emotion recognition systems: If you use AI to detect emotions (even as a secondary feature), individuals must be informed before exposure.

The good news: transparency compliance is typically a few days of engineering work: a disclosure banner, updated terms, and content-labeling metadata. The bad news: it has to actually be done, and 83 days goes fast.
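To make that concrete, here is a minimal sketch of what both transparency pieces can look like in a chatbot backend: a human-readable disclosure on the first message, plus a machine-readable marker on every response. The field names (`ai_generated`, `digital_source_type`) and the helper itself are our own illustrative choices, not anything the Act mandates; the IPTC "trainedAlgorithmicMedia" digital source type is one widely used vocabulary for labeling AI-generated content.

```python
import json

# Disclosure text shown before the conversation begins (wording is ours,
# not prescribed by Article 50 -- it only requires that users are informed).
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def label_ai_response(text: str, first_message: bool = False) -> dict:
    """Wrap a chatbot reply with Article 50-style transparency metadata."""
    payload = {
        "text": text,
        # Machine-readable marker for downstream consumers (crawlers,
        # platforms, archives) that the content is AI-generated.
        "metadata": {
            "ai_generated": True,
            "digital_source_type": (
                "http://cv.iptc.org/newscodes/digitalsourcetype/"
                "trainedAlgorithmicMedia"
            ),
        },
    }
    if first_message:
        # Human-readable disclosure, surfaced by the UI before the chat starts.
        payload["disclosure"] = AI_DISCLOSURE
    return payload

print(json.dumps(label_ai_response("Hello! How can I help?", first_message=True), indent=2))
```

The same pattern extends to generated images and audio, where the marker moves into embedded file metadata (IPTC/XMP or C2PA manifests) rather than a JSON envelope.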

Track 2: High-risk (now through December 2027)

If your AI system is classified as high-risk under Annex III — employment and HR, credit scoring, insurance, education, critical infrastructure, law enforcement, migration — you now have until December 2, 2027. This is where the heavy compliance work lives:

  • Risk management system (Article 9)
  • Data governance and bias documentation (Article 10)
  • Full Annex IV technical documentation (9 sections)
  • Automatic event logging (Article 12)
  • Transparency and instructions for deployers (Article 13)
  • Human oversight measures (Article 14)
  • Accuracy, robustness, and cybersecurity (Article 15)
  • Conformity assessment (Article 43)
  • EU database registration (Article 49)
  • Post-market monitoring plan (Article 72)

This work takes 3 to 6 months minimum for most companies starting from scratch, which means the smart window to begin is now through mid-2027. Companies that wait until Q3 2027 will face the same scramble everyone was expecting for August 2026.

The priority matrix

Here's the order of operations:

Do immediately (this month)

  • Classify every AI system in your product — free at getactready.com/classify
  • Identify which systems have transparency obligations (August deadline)
  • Identify which systems are high-risk (December 2027 deadline)
  • Check prohibited practices compliance — this has been enforceable since February 2025

Do before August 2, 2026

  • Add AI disclosure to all chatbots and virtual assistants
  • Implement AI-generated content labeling
  • Update deepfake policies if applicable
  • Add emotion recognition disclosure if applicable

Start now, complete before December 2, 2027

  • Begin technical documentation for high-risk systems
  • Establish your risk management system (Article 9)
  • Audit and document training data governance
  • Implement event logging infrastructure
  • Build human oversight mechanisms into your product
  • Prepare for conformity assessment
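Of the items above, event logging is the one most teams can start coding today. A minimal sketch, assuming a JSON-lines log shipped to append-only storage: each use of the AI system emits a timestamped record. The schema (`system_id`, `event`, `input_ref`, `outcome`) is our own choice; Article 12 specifies what logs must enable (traceability over the system's lifetime), not a particular format.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_event_log")

def log_event(system_id: str, event: str, **details) -> dict:
    """Record one automatically-logged event for a high-risk AI system."""
    record = {
        # UTC timestamps make records orderable across services.
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event": event,
        **details,
    }
    # In production this line would ship to durable, append-only storage
    # with a retention period matching your documentation obligations.
    logger.info(json.dumps(record))
    return record

# Hypothetical example: logging one inference of a CV-screening system.
rec = log_event("cv-screener-v2", "inference",
                input_ref="app-1042", outcome="shortlisted")
```

Getting this plumbing in early also pays off outside compliance: the same records feed your post-market monitoring plan (Article 72) and any incident investigation.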

The commercial reality

One thing the Digital Omnibus did not extend: enterprise procurement pressure. Large EU companies with their own deployer obligations are already asking AI vendors for compliance evidence. RFPs, security assessments, and procurement questionnaires increasingly reference EU AI Act compliance. This pressure exists regardless of whether the regulatory deadline is 83 days or 570 days away.

Companies that can demonstrate active compliance work — even if they're not fully compliant yet — will have a significant commercial advantage over those that treated the extension as permission to do nothing.

How to use the extra time

The December 2027 extension gives you time to do high-risk compliance properly instead of rushing it. That means:

  • Building genuine risk management processes, not just paperwork
  • Running thorough bias audits on your training data
  • Getting early feedback on your technical documentation
  • Testing human oversight mechanisms with real users
  • Working through conformity assessment requirements methodically

Start by classifying your systems at getactready.com/classify. It takes 60 seconds, tells you exactly which deadline applies to each system, and gives you a clear list of obligations to work through.

