Some legal AI tools are high-risk under Annex III (administration of justice). Others fall into gray areas where classification depends on implementation. The wrong call could mean non-compliance or unnecessary over-engineering. Find out where your product sits in 60 seconds.
Legal tech is uniquely tricky under the EU AI Act because similar products can land in completely different risk tiers depending on how they're used.
- High-risk category: Annex III, point 8 (administration of justice and democratic processes)
- Gray areas: many tools may or may not be high-risk depending on implementation
- Limited risk: Article 50 transparency obligations apply
- Contract review: the most common classification question in legal tech
Contract review AI sits in a gray area. If it only highlights clauses for human review, it's likely limited risk. If it makes binding recommendations, assigns legal risk scores that influence decisions, or auto-redlines terms without human intervention, it could be high-risk. The classification depends on how much autonomy the AI has in the decision chain.
This is exactly why running the classifier matters. The answer depends on your specific implementation, not a blanket rule.
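The autonomy-based reasoning above can be sketched as a rough triage heuristic. Everything in this snippet, including the class, field names, and tier labels, is a hypothetical illustration of the decision logic, not a legal test from the Act, and no shortcut replaces a proper assessment:

```python
from dataclasses import dataclass

# Hypothetical triage sketch; an illustration, not legal advice.
# The factors and tier labels are assumptions, not text from the AI Act.

@dataclass
class LegalAIProfile:
    highlights_for_human_review: bool   # output is advisory only
    produces_risk_scores: bool          # scores influence decisions
    auto_redlines_without_human: bool   # acts without human intervention
    used_in_judicial_context: bool      # Annex III, point 8 territory

def provisional_tier(p: LegalAIProfile) -> str:
    """First-pass tier guess; a real classification needs legal review."""
    if p.used_in_judicial_context:
        return "high-risk (Annex III, point 8)"
    if p.auto_redlines_without_human or p.produces_risk_scores:
        return "potentially high-risk: assess autonomy and human oversight"
    if p.highlights_for_human_review:
        return "likely limited risk (Article 50 transparency)"
    return "needs full assessment"

# Example: a clause-highlighting contract review tool
tool = LegalAIProfile(True, False, False, False)
print(provisional_tier(tool))  # likely limited risk (Article 50 transparency)
```

Note how the same product class lands in different tiers as soon as one autonomy flag flips, which is the point: classification tracks the implementation, not the product category.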
High-risk legal AI must meet all 10 mandatory obligations before August 2, 2026. Fines reach €15 million or 3% of global annual turnover, whichever is higher.
If your firm deploys third-party legal AI, you're a deployer under the EU AI Act with your own set of binding requirements.
Related:
- Full provider obligations
- Deployer obligations