EU AI Act and HR Tools: What Employment AI Must Comply With
If your HR software uses AI to screen CVs, rank candidates, assess performance, or allocate tasks, you are almost certainly dealing with a high-risk AI system under the EU AI Act. This is one of the most common scenarios for SaaS companies — and one of the most overlooked.
Why HR AI is high-risk
Annex III, Category 4 of the EU AI Act specifically covers AI systems used for recruitment, selection, hiring decisions, task allocation, performance monitoring, and termination. The rationale is clear: these systems directly affect people's livelihoods. A flawed AI that systematically screens out qualified candidates or unfairly evaluates workers can cause real harm at scale.
What this means for your product
If your HR tool does any of the above, you need:
- Technical documentation (Annex IV). A complete technical file covering your system's design, training data, validation process, and limitations. This must be ready before you sell to EU customers.
- Risk management system (Article 9). A documented process for identifying, assessing, and mitigating risks — including bias in candidate screening, discrimination, and system errors.
- Data governance (Article 10). Training data must be relevant, sufficiently representative, and, to the best extent possible, free of errors, and it must be examined for possible discriminatory biases. You need to document where your training data came from, how it was cleaned, and how you checked for bias.
- Human oversight (Article 14). HR AI cannot make final decisions autonomously. There must be a human in the loop who can review, override, or reject the AI's outputs. Your product UI needs to make this possible.
- Transparency (Article 13). Your system must come with clear instructions for use so deployers can operate it properly, and people subject to AI assessment (candidates being screened, employees being monitored) must be informed that AI is in use, a duty the Act places on deployers in Article 26.
- Accuracy and robustness (Article 15). You need to validate that your system performs accurately across different demographic groups. Disparate impact testing is expected.
- EU database registration (Article 49). High-risk AI systems must be registered in the EU AI database before being placed on the market.
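The disparate impact testing expected under Article 15 can start with a simple selection-rate comparison across groups. A minimal sketch in Python: the 0.8 "four-fifths" threshold comes from US EEOC guidance, not the AI Act, but it is a widely used heuristic for flagging screening outcomes that need investigation.

```python
from collections import defaultdict

def selection_rates(records):
    """Per-group selection rates from (group, was_selected) pairs."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(records):
    """Ratio of the lowest group selection rate to the highest.
    A value below 0.8 (the 'four-fifths rule') flags potential
    disparate impact and warrants investigation."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

# Illustrative screening outcomes labelled by a demographic attribute:
# group A selected 40 of 100 times, group B 25 of 100 times.
records = ([("A", True)] * 40 + [("A", False)] * 60
           + [("B", True)] * 25 + [("B", False)] * 75)
print(disparate_impact_ratio(records))  # 0.25 / 0.40 = 0.625, below 0.8
```

Running this check per demographic attribute on real screening logs, and keeping the results, doubles as evidence for your Annex IV technical file.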
Common questions from HR SaaS companies
"We just do CV parsing, not decision-making."
CV parsing that ranks or filters candidates is within scope. The regulation covers systems that support decisions, not just those that make them autonomously.
"Our customers make the final decision."
True, but you, as the provider, still have obligations. Your customers' duties as deployers (Article 26) don't remove your duties as the provider: you need a compliant system; your customers need compliant deployment practices.
"We're not based in the EU."
Doesn't matter. If your system affects people in the EU, the regulation applies. This includes US and UK companies selling to EU businesses.
"We use a third-party AI model."
You're still the provider if you build the HR application. You need to understand and document the model you're using, including its limitations and potential biases.
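What "understand and document the model" can look like in practice: a structured internal record for each third-party model, feeding your Annex IV technical file. This is a hypothetical sketch; every field name and value below is illustrative, not a prescribed format.

```python
# Hypothetical record documenting a third-party model used in an HR feature.
# All names and values are illustrative.
THIRD_PARTY_MODEL_RECORD = {
    "model": "example-llm-v2",                  # placeholder model name
    "vendor": "Example AI Inc.",                # placeholder vendor
    "version_pinned": "2025-01-15",             # pin versions so behaviour is reproducible
    "intended_use_in_product": "CV text extraction and skill tagging",
    "known_limitations": [
        "lower extraction accuracy on non-English CVs",
        "training-data provenance not published by vendor",
    ],
    "bias_evaluation": {
        "method": "selection-rate comparison across demographic groups",
        "last_run": "2025-06-01",
        "result": "no group below 0.8 of the highest selection rate",
    },
}
```

Keeping one such record per model, updated whenever you change versions, makes the "understand and document" obligation auditable instead of tribal knowledge.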
Steps to get compliant
- Classify your system: confirm it is high-risk using ActReady's free classifier
- Audit your training data: document sources and check for bias
- Generate your technical documentation: cover all nine sections required by Annex IV
- Build in human oversight: if your product lacks review and override functionality, add it
- Add transparency disclosures: update your UX to inform people when AI is being applied
- Register in the EU database: required before placing the system on the market
The August 2, 2026 deadline applies. If you're actively selling HR AI tools to EU customers and haven't started compliance work, now is the time.
Check your AI system's risk level for free
Our classifier maps your AI system against the EU AI Act in under 60 seconds. No signup required.
Classify Your AI System