EU AI Act Prohibited AI Practices: The Complete List for 2026
The EU AI Act takes a risk-based approach to AI regulation — but for a small category of AI practices, there is no risk assessment, no compliance path, and no exceptions. Article 5 of the regulation establishes a list of AI practices that are prohibited outright in the EU. If your AI system falls into any of these categories, it cannot be deployed in the EU — full stop.
These prohibitions have been in force since February 2, 2025.
The complete list of prohibited AI practices
1. Subliminal manipulation
AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, to materially distort behaviour in a way that causes or is reasonably likely to cause significant harm to that person or another person. This covers AI that influences people without their awareness, whether through hidden audio cues, imperceptible visual stimuli, or techniques that bypass conscious decision-making.
2. Exploitation of vulnerabilities
AI systems that exploit vulnerabilities of specific groups — including people due to their age, disability, or specific social or economic situation — to materially distort their behaviour in a harmful way. This covers predatory AI targeting children, elderly people, or people in financial distress in ways that damage their interests.
3. Social scoring
AI systems that evaluate or classify individuals or groups based on their social behaviour or known, inferred, or predicted personal characteristics, where the resulting score leads to detrimental treatment in contexts unrelated to the one in which the data was collected, or treatment that is disproportionate to the social behaviour in question. Earlier drafts limited this prohibition to public authorities; the final text applies it to private actors as well. China-style social credit systems are the clearest example.
4. Real-time remote biometric identification in public spaces
AI systems for real-time remote biometric identification of natural persons in publicly accessible spaces for law enforcement purposes are prohibited, with narrow exceptions. The exceptions cover targeted searches for specific victims of abduction, trafficking, or sexual exploitation and for missing persons; the prevention of specific and imminent terrorist threats; and the identification of suspects in serious crimes. Even within the exceptions, use requires prior judicial or independent administrative authorisation in most cases.
This prohibition covers law enforcement use only. Private-sector real-time biometric identification in public spaces is not covered by this specific prohibition but may be restricted under GDPR and other legislation.
5. Predictive policing based solely on profiling
AI systems that assess or predict the risk of a natural person committing a criminal offence based solely on profiling or on an assessment of their personality traits and characteristics. The prohibition does not cover AI used to support a human assessment that is already grounded in objective, verifiable facts directly linked to a criminal activity. (Retrospective, or "post", remote biometric identification, which appeared as a prohibition in earlier drafts, is instead regulated as a high-risk use in the final Act, subject to authorisation requirements.)
6. Emotion recognition in workplaces and educational institutions
AI systems that infer the emotions of natural persons in the context of workplaces and educational institutions are prohibited. This covers systems that analyse facial expressions, voice tone, or physiological signals to determine employee emotional states, monitor student engagement through emotion detection, or assess worker productivity via inferred affect.
Exceptions exist for AI systems put in place for medical or safety reasons — for example, monitoring a driver for signs of fatigue.
7. Biometric categorisation based on sensitive characteristics
AI systems that categorise individual natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. The only carve-out is for the labelling or filtering of lawfully acquired biometric datasets, such as in the area of law enforcement; beyond that, inferring protected characteristics from biometric data is prohibited regardless of purpose.
8. Untargeted scraping of facial images
AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage. This directly prohibits the business model of companies that bulk-harvest facial images to build recognition databases without any targeted purpose.
What the prohibitions mean for most companies
The majority of commercial AI systems do not come close to these prohibitions. If you are building a SaaS product, a recommendation engine, a chatbot, or an HR tool, you are almost certainly not in prohibited territory. The prohibited list targets specific high-harm practices that the EU has judged to have no legitimate use that could justify deployment.
The areas where companies occasionally fall into grey territory:
- Emotion recognition in HR products — tools that claim to assess candidate "enthusiasm" or employee "engagement" through facial analysis or voice tone during video calls. This is prohibited if deployed in a workplace context.
- Behavioural targeting using vulnerability exploitation — personalisation algorithms specifically designed to target users during moments of financial stress or emotional vulnerability for commercial gain.
- Facial recognition features in B2B products — if your product enables customers to deploy real-time facial recognition in public spaces, you may be enabling a prohibited use.
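For teams that want a first pass before a formal classification, the grey areas above can be turned into a simple self-screening checklist. The sketch below is a hypothetical Python helper: the question wording, the `ARTICLE_5_SCREENING_QUESTIONS` keys, and `screen_for_prohibited_practices` are all illustrative assumptions covering a subset of the Article 5 categories, and a flagged answer means "get a formal classification", not a legal conclusion.

```python
# Hypothetical self-screening sketch based on the Article 5 categories
# discussed above. Question wording and keys are illustrative only; this
# is not an official compliance tool and is no substitute for legal review.

ARTICLE_5_SCREENING_QUESTIONS = {
    "subliminal_manipulation": "Does the system use techniques beyond users' awareness to materially distort their behaviour?",
    "vulnerability_exploitation": "Does it target people's age, disability, or social/economic situation to harmfully distort behaviour?",
    "social_scoring": "Does it score people on social behaviour, leading to unrelated or disproportionate detrimental treatment?",
    "realtime_biometric_id": "Does it enable real-time remote biometric identification in publicly accessible spaces?",
    "emotion_recognition_work_edu": "Does it infer emotions in a workplace or educational setting (outside medical/safety use)?",
    "biometric_categorisation": "Does it infer sensitive characteristics from biometric data?",
    "untargeted_face_scraping": "Does it build facial recognition databases by untargeted scraping of the internet or CCTV?",
}

def screen_for_prohibited_practices(answers: dict[str, bool]) -> list[str]:
    """Return the keys answered True, i.e. the categories needing formal review."""
    return [
        key for key, flagged in answers.items()
        if flagged and key in ARTICLE_5_SCREENING_QUESTIONS
    ]

# Example: an HR video tool that assesses candidate "enthusiasm" by voice tone.
answers = {key: False for key in ARTICLE_5_SCREENING_QUESTIONS}
answers["emotion_recognition_work_edu"] = True
print(screen_for_prohibited_practices(answers))  # ['emotion_recognition_work_edu']
```

Any non-empty result is a signal to run the feature through a proper classification, not a determination that the product is prohibited.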
Penalties for prohibited practices
Violations of Article 5 carry the highest penalties in the EU AI Act: up to €35 million or 7% of total worldwide annual turnover, whichever is higher. This sits at the top of the Act's penalty tiers, above the fines for high-risk non-compliance. The EU has signalled that enforcement against prohibited practices will be a priority.
Check your classification
If you are uncertain whether any feature in your product touches the prohibited list, the first step is a formal classification. ActReady's free classifier walks through the prohibited practices explicitly as part of the classification flow — use it at getactready.com/classify.