
EU AI Act Article 9: How to Build a Risk Management System for High-Risk AI

Article 9 of the EU AI Act is one of the most demanding obligations for high-risk AI providers. It requires a risk management system — not a one-time risk assessment, but an ongoing process that runs throughout the entire lifecycle of the AI system. This is fundamentally different from the risk documentation most companies are used to producing.

What Article 9 actually requires

The regulation is specific. Your risk management system must:

  • Be a continuous, iterative process — not a point-in-time audit
  • Cover the entire system lifecycle from design through decommissioning
  • Identify and analyse known and reasonably foreseeable risks to health, safety, and fundamental rights
  • Estimate and evaluate risks that may emerge when the system is used as intended, and when it is reasonably foreseeable that it will be misused
  • Evaluate risks based on post-market monitoring data once the system is live
  • Adopt appropriate and targeted risk management measures

This is closer to the ongoing risk management approach required under the EU Medical Devices Regulation than to anything typically seen in software development.

The difference between a risk assessment and a risk management system

Most companies have risk assessments. A risk assessment is a document produced at a point in time, reviewed occasionally, and updated when something significant changes. Article 9 is not asking for this.

A risk management system is a process. It has defined inputs (monitoring data, incident reports, near misses, user feedback), defined outputs (risk register updates, mitigation actions, escalation decisions), defined owners, and defined review cadences. It runs continuously, not annually.

Think of it less like completing a compliance form and more like running a security incident response programme — except for AI-specific risks.

What risks must be covered

Risks from intended use

What harm could your system cause when working exactly as designed? For a credit scoring model, this includes the risk of denying credit to eligible applicants due to model error, or of systematically disadvantaging protected groups due to biased training data.

Risks from reasonably foreseeable misuse

How might a user apply your system in ways you did not intend but should have anticipated? If you build an AI interviewing tool, a foreseeable misuse is applying it to candidates with disabilities in ways that introduce unfair barriers. Your risk management system must address this even if you do not endorse that use.

Risks from the environment of use

The same AI system deployed in different contexts can carry different risks. A document review tool used by a small law firm has a different risk profile from the same tool deployed in a public prosecution service. Your risk documentation should reflect your actual deployment context.

Risks from model degradation

AI models degrade over time as the world changes and the data distribution shifts. A credit model trained in 2022 may perform poorly on 2026 data. Your risk management system must include mechanisms for detecting and responding to model drift.
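One common way to detect the drift described above is the Population Stability Index (PSI), which compares the score distribution seen at training time against live traffic. This is a sketch, not a mandated method; the bin counts and the conventional 0.25 alert threshold are assumptions you should calibrate for your own system.

```python
import math

def psi(expected_fracs, actual_fracs, eps=1e-6):
    """Population Stability Index over pre-binned fraction lists.

    Values near 0 mean the live distribution matches the baseline;
    values above ~0.25 are conventionally treated as significant shift.
    """
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        e = max(e, eps)  # guard against empty bins
        a = max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

# Illustrative bins: baseline from training data vs. live traffic.
baseline = [0.25, 0.25, 0.25, 0.25]
drifted  = [0.10, 0.20, 0.30, 0.40]
```

A PSI check like this can run on a schedule, with a breach feeding directly into the risk register review described below.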

Building the risk management process

Step 1: Risk identification

Start by documenting every known and foreseeable risk associated with your system. Involve people who understand the technical model, the deployment context, and the affected population. This should produce a risk register — a living document, not a static list.
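To keep the register a living document rather than a spreadsheet that rots, it helps to give it a machine-readable shape. A minimal sketch follows; every field name here is illustrative, not prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One row of a living risk register (illustrative schema)."""
    risk_id: str
    description: str
    source: str                # e.g. "intended use", "foreseeable misuse", "model drift"
    affected_groups: list
    owner: str                 # named owner, per the defined-owners principle above
    identified_on: date
    mitigations: list = field(default_factory=list)
    status: str = "open"       # open / mitigated / accepted

# Example entry drawn from the credit scoring scenario above.
register = [
    RiskEntry(
        risk_id="R-001",
        description="Eligible applicants denied credit due to model error",
        source="intended use",
        affected_groups=["credit applicants"],
        owner="model-risk-team",
        identified_on=date(2025, 1, 15),
    )
]
```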

Step 2: Risk analysis and estimation

For each identified risk, estimate the likelihood and severity of harm. The Act requires you to consider both the probability that a hazard leads to harm and the seriousness of that harm, including how many people could be affected and whether it is reversible.
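The Act does not mandate a particular scoring scheme, but a common convention is to rate likelihood and severity on ordinal scales and combine them into bands. The 1-5 scales and band cut-offs below are assumptions for illustration only.

```python
def risk_score(likelihood: int, severity: int) -> int:
    """Combine likelihood and severity (each rated 1-5) into a single score."""
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("likelihood and severity must be in 1..5")
    return likelihood * severity

def risk_band(score: int) -> str:
    """Map a score to an action band (illustrative cut-offs)."""
    if score >= 15:
        return "high"      # escalate; mitigate before deployment
    if score >= 8:
        return "medium"    # mitigate on a defined timeline
    return "low"           # accept and monitor
```

Whatever scheme you choose, document it: the basis for each rating matters as much as the number, especially for severity judgments about reversibility and the number of people affected.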

Step 3: Risk mitigation measures

For each risk, define the mitigation. This might include technical controls (accuracy thresholds, confidence scoring, automatic rejection of low-confidence outputs), procedural controls (mandatory human review for certain decision types), or product controls (limiting who can use the system and for what).
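One of the technical controls listed above, rejecting low-confidence outputs and routing them to human review, can be sketched in a few lines. The threshold value and routing policy are illustrative; they should be calibrated against your own validation data.

```python
def route_decision(prediction: str, confidence: float, threshold: float = 0.85):
    """Return (decision, needs_human_review) for a single model output.

    Outputs below the confidence threshold are deferred to mandatory
    human review instead of being acted on automatically.
    """
    if confidence < threshold:
        return ("deferred", True)
    return (prediction, False)
```

A control like this doubles as evidence for your technical file: the threshold, its calibration basis, and the review workflow it triggers are all documentable mitigation measures.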

Step 4: Residual risk evaluation

After mitigations are in place, the residual risk must still be acceptable. The Act requires that overall residual risk is judged acceptable based on the generally acknowledged state of the art — meaning you need to know what the current standard of risk management for your type of system looks like.

Step 5: Continuous monitoring and update

Once live, your risk management system must incorporate real-world performance data. Set up monitoring for accuracy degradation, demographic disparity in outputs, user complaints, and near-miss incidents. Define thresholds that trigger a formal review and update of the risk register.
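The monitoring step can be reduced to a simple gate: compare live metrics against thresholds and report which ones breached, triggering a formal risk register review. The metric names and threshold values below are examples, not figures prescribed by the Act.

```python
# Illustrative review-trigger thresholds (set these from your own baselines).
THRESHOLDS = {
    "accuracy_drop": 0.05,         # absolute drop vs. validation baseline
    "demographic_disparity": 0.10, # max gap in approval rate between groups
    "complaint_rate": 0.02,        # user complaints per decision
}

def review_triggers(metrics: dict) -> list:
    """Return the names of all metrics that breached their threshold."""
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0.0) > limit]
```

Logging each breach alongside the register update it triggered gives you exactly the records of updates "and the data that triggered them" that the documentation section below calls for.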

What documentation you need

Article 9 requires that the risk management process be documented. At minimum, your documentation should include:

  • The risk register with identified risks, severity assessments, and mitigation measures
  • Evidence that risks from intended use and foreseeable misuse were both considered
  • The residual risk evaluation and the basis for concluding it is acceptable
  • The monitoring process and review cadence
  • Records of risk register updates and the data that triggered them

This documentation forms part of your Annex IV technical file and will be reviewed in any conformity assessment or regulatory inspection.

How Article 9 connects to the rest of the Act

Article 9 does not sit in isolation. Your risk management system feeds directly into your technical documentation (Annex IV), your post-market monitoring plan (Article 72), and your incident reporting obligations (Article 73). Getting Article 9 right makes the rest of the compliance work significantly easier — because it gives you the structure to generate and update the required documentation continuously rather than scrambling to produce it before a deadline.

The ActReady compliance tracker at getactready.com/dashboard tracks your Article 9 obligations alongside the other nine high-risk requirements, with document generation for your risk management plan built in.
