
EU AI Act Compliance Checklist for 2026

The August 2, 2026 deadline for high-risk AI systems is approaching. This checklist covers the core obligations that providers and deployers of high-risk AI must meet, organised by priority. If you haven't classified your AI systems yet, start there: everything else only applies to systems that are actually high-risk.

Step 0: Classify your AI systems

Before any compliance work, confirm which of your systems are high-risk under Annex III. Use ActReady's free classifier at getactready.com/classify — it takes 60 seconds per system and requires no legal knowledge. Only continue with this checklist for systems confirmed as high-risk.

For providers (companies that build high-risk AI)

Risk management system (Article 9)

  • Documented risk identification methodology
  • Risks identified and assessed for each intended use
  • Residual risks documented after mitigation measures
  • Risk management process covers the full system lifecycle
  • Risk management plan reviewed and updated when system changes

Data governance (Article 10)

  • Training, validation, and test datasets documented
  • Data provenance and collection methods recorded
  • Bias detection and mitigation measures documented
  • Personal data handling documented and GDPR-compliant
  • Data quality measures in place
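The dataset documentation items above are easier to keep consistent if each dataset gets a structured record. A minimal sketch in Python, where the class and field names are illustrative assumptions rather than anything prescribed by Article 10:

```python
from dataclasses import dataclass, field

# Illustrative per-dataset provenance record. The field names are
# assumptions — Article 10 asks you to document provenance, collection
# methods, personal-data handling, and bias measures, not a schema.
@dataclass
class DatasetRecord:
    name: str
    split: str                      # "training", "validation", or "test"
    source: str                     # where the data came from
    collection_method: str
    contains_personal_data: bool
    bias_checks: list = field(default_factory=list)

record = DatasetRecord(
    name="loan-applications-2024",
    split="training",
    source="internal CRM export",
    collection_method="batch export, anonymised",
    contains_personal_data=False,
    bias_checks=["demographic parity by age band"],
)
```

One record per split of each dataset gives you a register you can hand to an auditor, and a natural place to attach bias-check results as they are run.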

Technical documentation (Article 11 + Annex IV)

  • All nine Annex IV sections completed
  • System architecture and design documented
  • Training methodologies and techniques documented
  • Validation and testing procedures recorded
  • Known limitations and failure modes recorded
  • Documentation kept up to date when system changes

Automatic event logging (Article 12)

  • System logs events automatically throughout operation
  • Logs are sufficient to identify and investigate incidents
  • Log retention period defined and appropriate
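The logging items above can be sketched in a few lines. This is a minimal illustration, assuming Python and a JSON-lines log file; the file name, event fields, and helper function are assumptions, since Article 12 requires only that logs are sufficient to trace operation and investigate incidents:

```python
import json
import logging
from datetime import datetime, timezone

# Structured event logger writing one JSON object per line.
# Field names are illustrative, not mandated by the Act.
logger = logging.getLogger("ai_system.events")
handler = logging.FileHandler("events.jsonl")  # illustrative path
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_event(event_type: str, **details) -> dict:
    """Record one operational event as a JSON line and return it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,  # e.g. "inference", "override"
        **details,
    }
    logger.info(json.dumps(record))
    return record

event = log_event("inference", model_version="1.4.2",
                  input_id="req-123", confidence=0.87)
```

Timestamps in UTC and a stable set of fields per event type make incident investigation far easier than free-text logs; retention then becomes a matter of rotating or archiving the log files on a defined schedule.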

Human oversight measures (Article 14)

  • Technical measures enable human oversight of the system
  • Users can monitor, understand, and intervene in system outputs
  • Mechanisms to pause or override the system are available
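A pause-or-override mechanism can be as simple as a wrapper around the model call. A sketch, where the class and method names are assumptions — Article 14 requires that humans can monitor and intervene, not any particular design:

```python
from dataclasses import dataclass

# Illustrative oversight wrapper: a human-controlled pause flag that
# blocks automated outputs until an operator resumes the system.
@dataclass
class OverseenSystem:
    paused: bool = False

    def pause(self) -> None:
        """Human operator halts automated outputs."""
        self.paused = True

    def resume(self) -> None:
        self.paused = False

    def predict(self, x):
        if self.paused:
            raise RuntimeError("System paused pending human review")
        return self._model_predict(x)

    def _model_predict(self, x):
        # Stand-in for the real model call.
        return {"input": x, "decision": "approve", "confidence": 0.91}
```

In practice the flag would live in shared state (a database row or feature flag) so any operator can trip it, and every pause or resume would itself be written to the Article 12 event log.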

Conformity assessment & registration (Articles 43, 47 and 49)

  • Conformity assessment completed (self-assessment or third-party)
  • Declaration of conformity prepared
  • System registered in the EU AI database before market placement

Post-market monitoring (Article 72)

  • Post-market monitoring plan prepared
  • Data collection and analysis procedures defined
  • Serious incident reporting procedures in place
  • Process for implementing corrective actions documented

For deployers (companies that use high-risk AI built by others)

  • Verified that the AI system has a valid declaration of conformity
  • Read and implemented the provider's instructions for use
  • Human oversight measures implemented as required
  • Staff trained to use the system appropriately
  • Process for reporting serious incidents to the provider
  • Data protection impact assessment completed (if processing personal data)

Quick wins to do today

If you're starting from scratch, prioritise in this order:

  • Classify your systems — free, takes minutes, tells you exactly what you're dealing with
  • Generate technical documentation — ActReady produces a structured first draft for each system
  • Document your risk management process — even a basic documented process is better than none
  • Set up logging — if your system doesn't log events, add it now
  • Draft your instructions for use — a clear document for deployers explaining limitations and oversight requirements

The August 2026 deadline is fixed. Starting now gives you time to do this properly rather than scrambling in the final weeks. Get started free at getactready.com/classify.

Check your AI system's risk level for free

Our classifier maps your AI system against the EU AI Act in under 60 seconds. No signup required.

Classify Your AI System