EU AI Act Deployer Obligations: What You're Responsible for When You Use Someone Else's AI
The EU AI Act makes a fundamental distinction between two types of organisations: providers, who develop and place AI systems on the market, and deployers, who use those systems in their own products or operations. Most regulatory coverage focuses on providers. But if you use third-party AI tools in your business, you are a deployer — and you have binding obligations too.
This matters more than most companies realise. Using Salesforce's AI features, embedding OpenAI in your product, deploying an AI hiring tool, or running AI-powered customer scoring all make you a deployer under the Act. Your compliance does not depend on the vendor — it depends on what you do with their system.
Who counts as a deployer
Under Article 3(4), a deployer is any natural or legal person who uses an AI system under their own authority in the course of a professional activity. This includes:
- A company that uses an AI hiring tool to screen candidates
- A bank that deploys a third-party credit scoring model
- A SaaS product that embeds OpenAI or Claude to generate outputs for end users
- An HR team using performance monitoring AI on employees
- A healthcare provider using a diagnostic AI tool
If you are using someone else's AI system in your business or product, you are almost certainly a deployer.
The core deployer obligations for high-risk AI
Deployer obligations apply in full only when the underlying AI system is classified as high-risk, most commonly via the use cases listed in Annex III. If the AI you use is only limited or minimal risk, your obligations are far narrower (primarily the transparency duties in Article 50).
For high-risk AI systems, deployers must:
1. Use the system as intended
You are required to deploy the AI system only within the scope of its intended purpose as defined by the provider, and in accordance with its instructions for use. Using an AI system outside its documented intended purpose shifts liability towards you, and can even make you a provider in your own right under Article 25, with the full provider obligations that entails.
2. Implement human oversight
Even if the provider has built human oversight capabilities into the system, as Article 14 requires them to, you must ensure those capabilities are actually used. Article 26 requires deployers to assign oversight to individuals who have the necessary competence, training, and authority to intervene, override, or stop the system.
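In practice, that means no consequential output takes effect until a named person with real authority has signed off. Here is a minimal sketch of such an approval gate; all of the names (ReviewDecision, AIRecommendation, apply_decision) are hypothetical illustrations, not terms from the Act or any compliance library.

```python
from dataclasses import dataclass
from enum import Enum

class ReviewDecision(Enum):
    APPROVE = "approve"    # accept the AI output as-is
    OVERRIDE = "override"  # reviewer substitutes their own decision
    HALT = "halt"          # stop the system pending investigation

@dataclass
class AIRecommendation:
    subject_id: str
    outcome: str  # e.g. "reject" from a hiring screen
    score: float

def apply_decision(rec: AIRecommendation, decision: ReviewDecision,
                   reviewer_id: str, manual_outcome: str | None = None) -> str:
    """Route a high-risk AI output through a named human reviewer.

    Nothing takes effect until a qualified person has signed off, and
    that person can override the output or halt the system entirely.
    """
    if decision is ReviewDecision.HALT:
        raise RuntimeError(f"System halted by {reviewer_id}; outputs suspended")
    final = manual_outcome if decision is ReviewDecision.OVERRIDE else rec.outcome
    print(f"{rec.subject_id}: {final!r} (reviewed by {reviewer_id})")
    return final

# Usage: the AI recommends rejection, the reviewer overrides it.
rec = AIRecommendation(subject_id="cand-042", outcome="reject", score=0.31)
apply_decision(rec, ReviewDecision.OVERRIDE, reviewer_id="hr-lead-7",
               manual_outcome="advance")
```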
3. Monitor performance
You must monitor the AI system's behaviour in your specific deployment context. This means watching for outputs that are inaccurate, biased, or inconsistent with expectations, suspending use where a risk emerges, and reporting serious incidents to the provider and the relevant market surveillance authority.
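One simple way to operationalise this is to compare live outputs against a baseline established during acceptance testing and raise an alert on drift. The sketch below does that with a rolling mean of model scores; DeploymentMonitor and its thresholds are illustrative assumptions, and real monitoring would also track bias metrics per affected group.

```python
from collections import deque
from statistics import mean

class DeploymentMonitor:
    """Hypothetical sketch of deployment-level monitoring.

    Keeps a rolling window of model scores and flags when the window
    average drifts from the baseline observed during acceptance testing,
    a crude proxy for "outputs inconsistent with expectations".
    """
    def __init__(self, baseline_mean: float, tolerance: float, window: int = 500):
        self.baseline_mean = baseline_mean
        self.tolerance = tolerance
        self.scores = deque(maxlen=window)

    def record(self, score: float) -> None:
        self.scores.append(score)
        if len(self.scores) == self.scores.maxlen:
            drift = abs(mean(self.scores) - self.baseline_mean)
            if drift > self.tolerance:
                self.raise_alert(drift)

    def raise_alert(self, drift: float) -> None:
        # In a real deployment this would open an incident ticket and
        # trigger the provider / authority notification workflow.
        print(f"ALERT: score drift {drift:.3f} exceeds tolerance {self.tolerance}")
```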
4. Maintain logs
You must retain the logs the system generates automatically, to the extent they are under your control, for a period appropriate to the system's purpose and at least six months (Article 26(6)). The provider may generate system-level logs, but you are responsible for deployment-level records.
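A simple append-only decision log often covers the deployment-level side. The sketch below writes one JSON line per AI-assisted decision; the field names and the log_decision helper are hypothetical, and in production you would also apply your retention policy and access controls.

```python
import json
import time
from pathlib import Path

LOG_PATH = Path("ai_decision_log.jsonl")  # hypothetical location

def log_decision(system: str, model_version: str, input_ref: str,
                 output: str, reviewer_id: str | None) -> None:
    """Append one deployment-level record per AI-assisted decision.

    Complements the provider's system-level logs: it captures who the
    system was used on, what it produced, and which human signed off.
    """
    entry = {
        "ts": time.time(),
        "system": system,
        "model_version": model_version,
        "input_ref": input_ref,  # a reference, not raw personal data
        "output": output,
        "reviewer": reviewer_id,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_decision("hiring-screen", "v2.3.1", "application-9917", "advance", "hr-lead-7")
```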
5. Conduct a Fundamental Rights Impact Assessment
For certain deployers, a Fundamental Rights Impact Assessment (FRIA) is required before first use: bodies governed by public law, private operators providing public services, and deployers of specific Annex III systems such as creditworthiness assessment and life and health insurance risk pricing (Article 27). The assessment covers the potential impact on fundamental rights, including equality, privacy, and non-discrimination.
6. Inform affected individuals
Where natural persons are subject to decisions made or materially influenced by a high-risk AI system, you must inform them that AI is being used. This applies to employees subject to AI monitoring (whose representatives must also be informed before the system is put into use), applicants screened by AI, and customers subject to AI-driven decisions.
What your provider is — and isn't — responsible for
Your provider is responsible for building a compliant system: conducting conformity assessments, maintaining Annex IV technical documentation, registering in the EU AI database, and ensuring the system meets accuracy and robustness requirements.
What they are not responsible for is your deployment. Their CE marking or conformity declaration covers the system itself. It does not cover how you use it, who you use it on, or whether you have implemented human oversight correctly in your context.
Before the deadline, ask your AI vendors for their EU AI Act classification and compliance documentation. Any provider of a high-risk system should be able to provide this on request.
What about GPAI models like ChatGPT or Claude?
General-purpose AI model providers (OpenAI, Anthropic, Google, Meta) have their own obligations under Chapter V of the Act, covering technical documentation, information for downstream developers, copyright policy, and training-data summaries. Those duties sit with the model provider, and they do not exempt you from anything: any Annex III high-risk obligations triggered by your specific use case, and any Article 50 transparency duties, still fall on you.
These stack. A company building a credit scoring tool on GPT-4 carries the full Annex III high-risk deployer obligations for that use case; OpenAI's Chapter V compliance covers the model itself, not your deployment.
Steps to take before August 2
- List every third-party AI tool your business uses in any customer-facing or employee-facing context
- Classify each one; the free classifier at getactready.com/classify will tell you whether it triggers high-risk obligations
- Contact providers for their EU AI Act compliance documentation
- Document your human oversight processes for each high-risk deployment
- Review employee and customer communications to confirm AI use is disclosed
- Assign responsibility for ongoing monitoring to a named person or team