
How to Classify AI Systems Under the EU AI Act: A Step-by-Step Guide

The EU AI Act uses a risk-based approach to regulate artificial intelligence. Not all AI is treated equally — the regulation assigns each system to one of four risk tiers, and each tier carries different obligations. Getting your classification right is the foundation of everything that follows.

The four risk tiers

The EU AI Act defines four levels of risk. Your compliance obligations are determined entirely by which tier your AI system falls into.

Tier 1: Prohibited (Unacceptable risk)

Some AI applications are banned entirely within the EU. These include:

  • Social scoring systems used by public authorities
  • Real-time remote biometric identification in public spaces (limited law enforcement exceptions)
  • AI that manipulates human behavior to circumvent free will
  • AI that exploits vulnerabilities of specific groups (age, disability)
  • Emotion recognition systems in workplaces and educational institutions
  • Untargeted scraping of facial images from the internet or CCTV

If your system falls here, it cannot be deployed in the EU. Period.

Tier 2: High-risk

This is the most complex tier, and it's where the bulk of compliance work lives. High-risk classification is triggered in two ways:

Annex III categories: AI systems in these eight domains are high-risk by default:

  • Biometrics: Remote biometric identification, biometric categorization, emotion recognition
  • Critical infrastructure: AI managing safety components of roads, energy, water, gas, heating, internet
  • Education: AI determining access to education, evaluating learning outcomes, monitoring cheating
  • Employment: AI for recruitment, screening, hiring decisions, task allocation, performance monitoring, termination
  • Essential services: AI for credit scoring, insurance pricing, emergency dispatch, benefit eligibility
  • Law enforcement: Polygraphs, evidence evaluation, crime prediction, profiling
  • Migration: Border control, visa processing, asylum applications
  • Justice & democracy: AI assisting judicial decisions, influencing elections

Article 6 criteria: AI systems that serve as a safety component of a product covered by existing EU harmonization legislation (Annex I), where that product must undergo third-party conformity assessment, are also high-risk.

Tier 3: Limited risk

Systems that interact directly with people but don't fall into high-risk categories face transparency obligations:

  • Chatbots must disclose that the user is interacting with AI
  • Deepfake content must be labeled as AI-generated
  • Emotion recognition systems (outside prohibited contexts) must inform subjects
  • AI-generated text published to inform the public must be labeled

Tier 4: Minimal risk

Most AI systems fall here — recommendation engines, spam filters, AI-assisted search, inventory optimization, content personalization. No mandatory obligations, though the EU encourages voluntary codes of conduct.

How to classify your AI system: step by step

Step 1: Define what your AI system does

Be specific about what the system does, what data it processes, and what decisions or outputs it produces. A vague description leads to vague classification. Document the system's intended purpose, its domain of application, and who is affected by its outputs.

Step 2: Check the prohibited list first

Before anything else, verify your system isn't on the prohibited list. If it involves social scoring, subliminal manipulation, exploitation of vulnerabilities, or banned biometric uses, stop here. The system cannot be deployed in the EU.

Step 3: Check Annex III categories

Map your system's domain and use case against the eight Annex III categories. If your system falls within any of these domains and performs the specified functions, it is high-risk. Pay close attention to the specifics — not all AI in healthcare is high-risk, but AI that determines access to health insurance is.

Step 4: Check Article 6 (safety components)

If your AI system is a safety component of a product covered by existing EU product safety legislation (machinery, medical devices, toys, vehicles, etc.) and that product requires third-party conformity assessment, it may be high-risk under Article 6 regardless of its domain.

Step 5: Check transparency obligations

If your system isn't high-risk but interacts with people, generates content, or performs emotion recognition, it's limited risk with transparency obligations.

Step 6: Document your classification rationale

Whatever tier you land on, document why. If a regulator asks, you need to show your reasoning. This is especially important if you classify a system as minimal or limited risk that could arguably be high-risk.
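The six steps above boil down to an ordered decision procedure: check prohibitions first, then the two high-risk triggers, then the transparency triggers, and default to minimal risk. Here's a minimal sketch in Python. The boolean flags and tier names are illustrative placeholders, not official terminology — each flag stands in for legal analysis that no script can do for you:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # Article 5
    HIGH = "high"              # Annex III or Article 6
    LIMITED = "limited"        # transparency obligations
    MINIMAL = "minimal"        # no mandatory obligations

def classify(system: dict) -> RiskTier:
    """Walk the tiers in order of severity; the first match wins.

    `system` holds the answers from Steps 1-5 as booleans.
    """
    if system.get("on_prohibited_list"):        # Step 2: banned practices
        return RiskTier.PROHIBITED
    if system.get("annex_iii_match"):           # Step 3: Annex III domains
        return RiskTier.HIGH
    if system.get("safety_component_annex_i"):  # Step 4: Article 6 safety components
        return RiskTier.HIGH
    if (system.get("interacts_with_people")
            or system.get("generates_content")
            or system.get("emotion_recognition")):  # Step 5: transparency triggers
        return RiskTier.LIMITED
    return RiskTier.MINIMAL                     # Step 6: document why, either way

# Example: a customer-facing chatbot with no Annex III function
chatbot = {"interacts_with_people": True}
print(classify(chatbot))  # RiskTier.LIMITED
```

Note the ordering matters: a recruiting chatbot hits the Annex III employment check before the transparency check, so it lands in high-risk, not limited.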

Common classification mistakes

  • Ignoring indirect EU exposure. Your company is in the US but your SaaS has EU users? You're in scope.
  • Overlooking third-party AI. If you deploy someone else's AI model, you may still have deployer obligations.
  • Assuming "internal use" means exempt. AI used for internal HR decisions (hiring, performance) is high-risk regardless.
  • Conflating the system with the company. Classification applies per AI system, not per company. You might have systems across multiple tiers.

What comes after classification

Once you know your tier, the path is clear:

  • Minimal risk: No mandatory action. Consider voluntary best practices.
  • Limited risk: Implement transparency notices and disclosures.
  • High-risk: Full compliance stack — technical documentation (Annex IV), risk management (Article 9), data governance (Article 10), human oversight (Article 14), accuracy and robustness requirements, conformity assessment, and post-market monitoring.
  • Prohibited: Discontinue or fundamentally redesign the system.

Automate your classification

ActReady's free classifier walks you through a guided questionnaire that maps your AI system against the regulation's criteria. In under 60 seconds, you get your risk tier, applicable obligations, and next steps. No signup, no email required.
