
EU AI Act for Legal Tech: What AI Used in Legal Practice Must Comply With

Legal technology has become one of the fastest-growing AI application areas. From contract review and legal research to e-discovery, document drafting, and litigation prediction, AI is embedded in legal workflows at almost every tier of the market. The EU AI Act pays particular attention to legal AI, and for good reason: AI that affects legal outcomes can have profound consequences for individuals and organisations alike.

Here is a precise breakdown of which legal AI tools are high-risk, which fall into limited risk, and what obligations apply to each.

The Annex III category that governs legal AI

The EU AI Act's Annex III, point 8(a) defines a specific high-risk category: "AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution."

This is narrower than it first appears — it specifically targets AI assisting courts and dispute resolution bodies, not all legal AI. But the adjacent categories in Annex III catch a much wider range of legal tech products.

High-risk legal tech: what's covered

AI used in judicial and dispute resolution contexts

If your product is used by courts, arbitration panels, or alternative dispute resolution bodies to interpret facts, research law, or apply law to specific cases, it is high-risk under point 8. This includes AI legal research tools marketed to judiciary users, AI document analysis used in court proceedings, and AI-assisted sentencing or case outcome prediction tools.

Employment AI used by law firms and legal departments

Annex III, point 4 covers AI used in employment decisions. Law firms and legal departments using AI to screen candidate applications, assess lawyer performance, or rank candidates for promotion are subject to high-risk obligations — regardless of whether the AI was built specifically for legal settings.

AI used in access to essential legal services

Annex III, point 5 covers AI used to assess eligibility for essential private and public services. AI that determines whether someone qualifies for legal aid, evaluates creditworthiness for legal finance, or decides access to legal insurance products falls into this category.

The gray area: contract review, legal research, and e-discovery

The most common legal AI tools — contract review, legal research assistants, e-discovery platforms — are not automatically high-risk. Their classification depends on how they are used and by whom.

Contract review AI

A contract review tool that flags issues for human lawyers to assess is likely limited risk — it assists human judgment rather than replacing it. The decisive factor is human oversight: if a lawyer reviews and approves every recommendation, the AI is a drafting assistant. If the tool's output directly determines contract terms without meaningful human review, the analysis changes. The more the tool informs binding decisions, the stronger the argument for high-risk classification.
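
One way to make the human-oversight argument concrete is to enforce it in the product itself. Below is a minimal Python sketch of an approval gate, assuming a workflow where every AI suggestion carries a recorded lawyer decision before it touches the contract; all names (Suggestion, ReviewDecision, apply_suggestion) are illustrative, not drawn from the Act or any real product.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Suggestion:
        clause_id: str
        proposed_text: str
        rationale: str          # why the model flagged this clause

    @dataclass
    class ReviewDecision:
        suggestion: Suggestion
        reviewer: str           # the lawyer who reviewed the output
        approved: bool
        reviewed_at: datetime

    def apply_suggestion(decision: ReviewDecision) -> str:
        """Apply AI-proposed contract text only after a documented human approval."""
        if not decision.approved:
            raise PermissionError(
                f"Suggestion for clause {decision.suggestion.clause_id} was rejected "
                "and must not be applied."
            )
        # A real system would also write the decision to an audit log here.
        return decision.suggestion.proposed_text

    decision = ReviewDecision(
        suggestion=Suggestion("7.2", "Liability is capped at fees paid.", "uncapped liability"),
        reviewer="reviewing.lawyer@firm.example",
        approved=True,
        reviewed_at=datetime.now(timezone.utc),
    )
    print(apply_suggestion(decision))

The design point is that approval is a precondition enforced in code, not a convention: the audit trail this produces is the kind of evidence that supports the argument that the tool assists rather than replaces human judgment.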

Legal research AI

AI legal research assistants (tools that find relevant cases, statutes, and precedents) are generally limited risk — they inform a lawyer's analysis but do not make legal determinations. The transparency obligation under Article 50 still applies if the tool uses a conversational interface: users must know they are interacting with AI.
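
The disclosure itself is simple to implement; the compliance question is placement and timing. A minimal sketch, assuming a chat-style interface where the notice is shown at session start rather than buried in terms of service (the wording of AI_DISCLOSURE is illustrative, not prescribed by Article 50):

    AI_DISCLOSURE = (
        "You are chatting with an AI research assistant. "
        "Its output is not legal advice and should be verified by a lawyer."
    )

    def start_session(send) -> None:
        """Surface the AI disclosure before the first exchange."""
        send(AI_DISCLOSURE)

    def answer(query: str) -> str:
        # Placeholder for the actual retrieval and generation pipeline.
        return f"[draft research notes for: {query}]"

    start_session(print)
    print(answer("limitation periods for contract claims under German law"))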

E-discovery AI

E-discovery document review tools sit in a gray area. Predictive coding and relevance ranking tools that assist human reviewers are likely limited risk. However, e-discovery tools used in proceedings before judicial authorities — where the AI's document selection directly informs legal arguments — move closer to the Annex III point 8 definition and warrant careful classification.

Obligations for high-risk legal tech

If your legal AI product is high-risk, the full Article 9–17 framework applies:

  • Risk management system (Article 9): A documented, continuous process for identifying and mitigating risks — including risks of discriminatory outcomes and factual inaccuracies that could affect legal proceedings.
  • Technical documentation (Annex IV): Comprehensive documentation of your system's architecture, training data, validation methodology, and performance metrics. For legal AI, bias testing across demographic groups deserves particular scrutiny.
  • Data governance (Article 10): Training data must be documented for provenance, quality measures, and bias detection. Legal training data raises additional issues around privilege and confidentiality that must be addressed.
  • Human oversight (Article 14): The system must be designed to allow human review of outputs. For legal AI, this means building interfaces where lawyers can interrogate, override, and document their review of AI outputs.
  • Accuracy and robustness (Article 15): Legal AI must be tested against the full range of inputs it will encounter, with documented accuracy metrics. Hallucination rates are a critical concern: an AI that invents case citations has a severe accuracy problem under Article 15. A citation-check sketch follows this list.
  • Conformity assessment: Required before going to market. For legal AI in the Annex III point 8 category, Article 43 prescribes the internal-control procedure of Annex VI, i.e. a documented self-assessment; third-party assessment by a notified body is reserved for certain biometric systems under Annex III point 1.
  • EU AI Act database registration (Article 49): High-risk legal AI must be registered in the EU's public AI database before market placement.
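
As referenced in the Article 15 item above, one concrete accuracy control is verifying every citation in model output against a trusted index before it reaches the user. A minimal sketch, assuming CJEU-style case numbers and a set-based stand-in (KNOWN_CITATIONS) for what would really be a citator or case-law database lookup:

    import re

    # Stand-in for a real citator lookup; C-131/12 and C-311/18 are real cases.
    KNOWN_CITATIONS = {"C-131/12", "C-311/18"}

    CITATION_RE = re.compile(r"\bC-\d{1,4}/\d{2}\b")  # CJEU-style case numbers

    def unverified_citations(output: str) -> list[str]:
        """Return citations in model output that the trusted index cannot confirm."""
        return [c for c in CITATION_RE.findall(output) if c not in KNOWN_CITATIONS]

    # C-999/99 plays the role of a hallucinated citation in this example.
    draft = "See C-311/18 (Schrems II) and C-999/99 on data transfers."
    flagged = unverified_citations(draft)
    if flagged:
        print(f"Hold for human review; unverifiable citations: {flagged}")

Logging these checks also feeds the Article 9 risk file and the Article 14 oversight story: flagged outputs are routed to a human rather than delivered as authoritative.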

Obligations for limited-risk legal tech

Contract review tools, legal research assistants, and e-discovery platforms that do not fall into a high-risk category still have obligations under Article 50:

  • AI-generated content must be disclosed as AI-generated
  • Conversational AI interfaces must identify themselves as AI
  • Deep fakes of legal documents or testimony must be labelled as synthetic

Even minimal-risk legal AI tools benefit from maintaining documentation of their classification rationale — if a regulator or client questions whether your tool should be high-risk, documented classification logic is your first line of defence.
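
What that documentation might look like in practice: a minimal sketch of a machine-readable classification record, assuming fields a regulator or client could plausibly ask about. The format is an assumption; the Act does not prescribe one.

    import json
    from dataclasses import asdict, dataclass

    @dataclass
    class ClassificationRecord:
        system_name: str
        risk_tier: str                          # "high", "limited", or "minimal"
        annex_iii_points_considered: list[str]
        rationale: str
        reviewed_by: str
        review_date: str

    record = ClassificationRecord(
        system_name="ContractReviewAssistant",
        risk_tier="limited",
        annex_iii_points_considered=[
            "point 8: not applicable, no judicial or ADR customers",
            "point 4: not applicable, no employment use",
        ],
        rationale="Advisory output only; every suggestion requires lawyer approval.",
        reviewed_by="compliance@vendor.example",
        review_date="2025-01-15",
    )
    print(json.dumps(asdict(record), indent=2))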

What legal tech companies should do now

  • Classify each product separately. A firm offering both a court-facing litigation prediction tool and a contract drafting assistant has two very different compliance requirements. Each system needs its own classification.
  • Audit your customer base. Even if your product is limited risk in most uses, if judicial authorities are among your customers, the Annex III point 8 definition may apply to those deployments specifically. A deployment-audit sketch follows this list.
  • Address hallucination in your risk documentation. Legal AI with accuracy issues is a high-profile enforcement target. Document your accuracy testing methodology and known limitations explicitly.
  • Update client contracts. As a provider of AI systems, your agreements with law firm and legal department deployers should address EU AI Act obligations — specifically who is responsible for human oversight implementation and incident reporting.

The free classifier at getactready.com/classify is a fast starting point for understanding your product's risk tier. For legal tech companies with complex product portfolios, classifying each system separately is the only way to get an accurate picture of your compliance obligations.
